Large Deviations for Processes in Random Environments with Jumps

A deterministic walk in a random environment can be understood as a general random process with finite-range dependence that starts repeating a loop once it reaches a site it has visited before. Such a process lacks the Markov property. We study the exponential decay of the probabilities that the walk reaches sites located far away from the origin. We also study a similar problem for the continuous analogue: the process that solves an ODE with random coefficients. In this second model the environment also has "teleports," which are the regions from which the process can make discontinuous jumps.


INTRODUCTION
A deterministic process in a random environment is a solution to the differential equation dX_t/dt = b(X_t, ω), X_0 = 0, where the vector field b : R^d × Ω → R^d is defined on a probability space (Ω, P). We are interested in the large deviation properties of the solution X_t. The process is called deterministic because, once the environment is sampled from the probability space Ω, the entire process depends only on the initial position X_0.
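As a sanity check, the flow above can be discretized. The sketch below is a minimal Euler scheme for dX_t/dt = b(X_t, ω), assuming a hypothetical environment in which b is a frozen random unit vector, constant on each unit cell of Z^2; this particular environment and all names are ours, not the paper's.

```python
import math
import random

def flow(t_max, dt, rng, cache):
    """Euler scheme for dX/dt = b(X, omega) in a hypothetical environment:
    b is a frozen random unit vector, constant on each unit cell of Z^2."""
    x = [0.0, 0.0]
    for _ in range(int(t_max / dt)):
        cell = (math.floor(x[0]), math.floor(x[1]))
        if cell not in cache:  # sample b on first visit, then freeze it
            theta = rng.uniform(0.0, 2.0 * math.pi)
            cache[cell] = (math.cos(theta), math.sin(theta))
        b = cache[cell]
        x = [x[0] + dt * b[0], x[1] + dt * b[1]]
    return x

cache = {}
x = flow(10.0, 1e-3, rng=random.Random(3), cache=cache)
# |b| = 1 everywhere, so the particle moves at unit speed and |X_t| <= t
assert math.hypot(x[0], x[1]) <= 10.0 + 1e-9
```

Once the environment is sampled (here lazily, in `cache`), the trajectory is fully determined by the initial position, which is the sense in which the process is deterministic.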
We will consider a modified version of the previous process, in which we allow for certain randomness in time. The environment contains a Poisson point process, and small neighborhoods of the Poisson points serve as teleports. Once the particle spends sufficient time within a teleport, it is subject to a jump. However, whether the jump occurs or not depends on an additional time-dependent random sequence.
Under some additional assumptions on b and the environment, we will establish the following result about the moment generating functions of X t . Here E denotes the expected value with respect to all randomness. This is known as an annealed or averaged process.
To get an idea of how to approach the problem we first consider a discrete analogue: the model of a deterministic walk in a random environment. We look at a random sequence X_n defined recursively as X_{n+1} − X_n = b(X_n, ω), for a suitable function b : Z^d × Ω → Z^d. Our aim is to establish the following theorem. Theorem 1.2. Let (η_z)_{z∈Z^d} be a stationary Z^d-valued random field that satisfies the assumptions (i)-(iii) from Section 2. The random variable X_n is defined as X_0 = 0, X_{n+1} = X_n + η_{X_n}. Then there exists a convex function Λ : R^d → R such that lim_{n→∞} (1/n) log E[e^{λ·X_n}] = Λ(λ).
Notice that we may assume that the field b(x, ω) is of the form b(x, ω) = η_x, where η is a random field on Z^d. The field η itself can be understood as the random environment in which the walk occurs. When a particular realization of the environment is fixed, the walk becomes deterministic. One of the main characteristics of this walk is that once a loop occurs, the walk repeats the loop forever.
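The loop phenomenon is easy to observe numerically. The following is a minimal sketch, assuming an iid environment on Z^2 whose steps are uniform on {±e_1, ±e_2} (a special case of the assumptions of Section 2); the function names are ours.

```python
import random

def walk_until_loop(rng, max_steps=10**6):
    """Deterministic walk in a hypothetical iid environment on Z^2.

    Each site, on first visit, is assigned a step drawn uniformly from
    {+-e1, +-e2}, and the assignment is frozen; revisiting a site therefore
    repeats the same step, so the walk cycles forever after its first
    self-intersection.
    """
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    eta = {}                       # frozen environment: site -> increment
    path = [(0, 0)]
    first_visit = {(0, 0): 0}
    for n in range(max_steps):
        x = path[-1]
        if x not in eta:
            eta[x] = rng.choice(steps)
        y = (x[0] + eta[x][0], x[1] + eta[x][1])
        path.append(y)
        if y in first_visit:       # first self-intersection: tau = n + 1
            return path, n + 1, first_visit[y]
        first_visit[y] = n + 1
    raise RuntimeError("no loop found")

path, tau, theta = walk_until_loop(random.Random(0))
assert path[tau] == path[theta] and theta < tau   # the walk closed a loop
```

Continuing the walk beyond time τ simply replays the segment path[θ..τ] periodically, since the environment at those sites is already frozen.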
In order to make connections between this model and the random walk in a random environment, we start from the representation of X_n as X_{n+1} − X_n = b(X_n, ω, π). Here b : Z^d × Ω × Π → Z^d is a random variable, and (Ω, P), (Π, P) are probability spaces. In this model, (Ω, P) is the environment, and for each fixed ω ∈ Ω the walk X_n can be understood as a random walk on the probability space (Π, P). Recent works [13] and [5] established the quenched and annealed large deviations for the random walk in a random environment under certain non-degeneracy assumptions on (Π, P). The articles [9] and [14] found variational formulas for the rate functions and some connections between the two rate functions. The model we are studying is related to the annealed (averaged) case studied in the mentioned papers. Here, the probability space (Π, P) is very degenerate.
The main idea for the proof is an adaptation of the subadditive argument. However, some work is needed to identify a quantity that is actually subadditive. This quantity is found in Section 4, where we prove that it has the same exponential rate of decay as our large deviation probabilities. After that we are ready for the proofs of the main theorems, which are presented in Section 5. Section 6 contains an easy consequence, namely the law of large numbers. The law of large numbers with limiting velocity 0 for the deterministic walk in a random environment is not a surprising fact. It is intuitive that a loop will occur, the expected time of its occurrence is finite, and hence the walk can't go too far.
We assume that the environment has finite-range dependence. A special case is the iid environment, where many of the presented arguments can be simplified. We do not assume that the walk is nearest-neighbor. We do assume, however, certain ellipticity conditions on the environment: there is a nice set of vectors such that, using only these vectors, the walk can move in any direction. Our ellipticity condition is the assumption that the probability that the walk, at any position of the environment, takes any particular vector from the nice set is uniformly positive.
In the next section we state the assumptions we impose on the model and state Theorem 2.1, which will be the main focus of our study. In order to prove it we will need some deterministic lemmas that establish uniform bounds on the probabilities that the walk can go in virtually any direction. These statements are proved in Section 3. In the end we discuss the generalization of this approach to the continuous setting.
Many of the arguments here involve the position of the particle at the time of the first self-intersection. Other interesting results regarding self-intersection times of random walks can be found in [1] and [2]. Hitting times and back-tracking times were a useful tool in establishing laws of large numbers for ballistic random diffusions (see [10]).

DEFINITIONS AND ASSUMPTIONS
Let (η_z)_{z∈Z^d} be a stationary Z^d-valued random field that satisfies the following conditions: (i) There exists a positive real number L such that |η_z| ≤ L for all z ∈ Z^d. (ii) There exists a real number M such that η_z is independent of the environment outside of the ball with center z and radius M. (iii) There exists a nice set of vectors {u_1, …, u_m} ⊆ Z^d and a constant c > 0 such that P(η_z = u_i | F_z) > c for all i ∈ {1, 2, …, m}, where F_z is the σ-algebra generated by all η_w for w ∈ Z^d such that 0 < |w − z| ≤ M.
The last assumption implies the existence of a loop in any half-space with positive probability (see Theorem 3.1). It also implies that there exists a constant c > 0 such that P(η_z · l > 0 | F_z) > c for every z. A special case is the iid environment, when in condition (ii) we require M < 1. The condition (iii) is then replaced by P(η_z = u_i) > c for all i. Many of the arguments would become simpler and/or less technical if we assumed that the environment is iid.
The random variable X_n is defined recursively as X_0 = 0 and X_{n+1} = X_n + η_{X_n}. We will use the following equivalent interpretation of X_n. The process X_n behaves like a random walk until the first self-intersection. The increments of the random walk are sampled at every step according to the law of the random field η_z. After the self-intersection occurs, the walk becomes deterministic and repeats the loop.
Let Y_n denote such a random walk, let τ = inf{n : Y_n ∈ {Y_0, …, Y_{n−1}}} be the time of the first self-intersection, and let θ ≤ τ be the smallest integer such that Y_τ = Y_θ. We define the walk X_n using the formula X_n = Y_n for n ≤ τ, and X_n = Y_{θ + ((n−θ) mod (τ−θ))} for n > τ. Let l ∈ R^d be a unit vector. Define T^l_m = inf{n : X_n · l ≥ m}. Denote by Z^l_x the hyperplane through x ∈ Z^d orthogonal to the vector l, and denote by H^l_x = {y ∈ R^d : (y − x) · l ≥ 0} the half-space through x determined by the vector l. Our goal is to prove the large deviations for X_n (see Theorem 5.2). We will be able to use the Gärtner–Ellis theorem to get some further bounds once we establish the following result: Theorem 2.1. Let X_n be the random walk defined as above. Assume that the random environment satisfies the conditions (i)-(iii). For each unit vector l ∈ R^d there exists a concave function φ_l : R^+ → R such that for all k ∈ R^+: lim_{n→∞} (1/n) log P(X_n · l ≥ nk) = φ_l(k). Remark. Notice that φ_l(k) = φ_{tl}(tk) for all t ∈ R^+. Therefore φ_l(k) = Φ((1/k) l) for a suitable function Φ : R^d → R.

EXISTENCE OF A LOOP
In this section we prove that the previously defined random walk has a loop in each half-space with positive probability. This fact will be a consequence of the following elementary lemma, which states that there exists a loop consisting entirely of vectors from a nice set. Lemma 3.1. Let {u_1, …, u_m} be a nice set of non-zero vectors. There exist non-negative integers q_1, q_2, …, q_m, not all equal to 0, such that q_1 u_1 + ⋯ + q_m u_m = 0.
Proof. We will prove the statement by induction on the dimension d. The statement is easy to prove for d = 1 and d = 2. We may assume that {u_1, …, u_m} is a minimal nice set, i.e. there is no proper nice subset of {u_1, …, u_m}; if there were, we could take the proper nice subset and repeat the argument. Let us fix the vector u_m, and let v_i = u_i − ((u_i · u_m)/|u_m|^2) u_m for i = 1, …, m − 1. All vectors v_1, …, v_{m−1} have rational coordinates. Let r be the common denominator of those fractions and consider the lattice D of size 1/r in the vector space W spanned by v_1, …, v_{m−1}. Let us prove that the set {v_1, …, v_{m−1}} is nice in W. Let l ∈ W be a vector with real coordinates. There exists i ∈ {1, 2, …, m} such that u_i · l > 0. Since l ∈ W we immediately have u_m · l = 0 and u_i · l = v_i · l, hence v_i · l > 0. This implies that {v_1, …, v_{m−1}} is a nice set of vectors in W. According to the induction hypothesis there are non-negative integers q′_1, …, q′_{m−1}, not all zero, such that q′_1 v_1 + ⋯ + q′_{m−1} v_{m−1} = 0. Substituting the definition of v_i gives q′_1 u_1 + ⋯ + q′_{m−1} u_{m−1} = s u_m, where s = Σ_{i<m} q′_i (u_i · u_m)/|u_m|^2. It remains to show that s ≤ 0. Assume the contrary, that this number were greater than 0. Since {u_1, …, u_{m−1}} is not a nice set (due to our minimality assumption for {u_1, …, u_m}) there exists a vector l ∈ R^d such that l · u_m > 0 but l · u_k ≤ 0 for each k ∈ {1, 2, …, m − 1}. This gives s(u_m · l) = Σ_{i<m} q′_i (u_i · l) ≤ 0, a contradiction with s > 0 and u_m · l > 0. Hence s ≤ 0, and multiplying by |u_m|^2 to clear denominators we may take q_i = |u_m|^2 q′_i for i < m and q_m = −|u_m|^2 s to obtain q_1 u_1 + ⋯ + q_m u_m = 0. This completes the proof.
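For a concrete nice set the conclusion of Lemma 3.1 can be verified by brute force. The sketch below assumes the hypothetical nice set {(1,0), (0,1), (−1,−1)} in Z^2 and searches for the non-negative coefficients q_i; the function name is ours.

```python
from itertools import product

def zero_loop(vectors, max_coef=5):
    """Brute-force search for non-negative integers q_i, not all zero,
    with q_1 u_1 + ... + q_m u_m = 0 (the conclusion of Lemma 3.1)."""
    d = len(vectors[0])
    for q in product(range(max_coef + 1), repeat=len(vectors)):
        if any(q) and all(sum(qi * u[j] for qi, u in zip(q, vectors)) == 0
                          for j in range(d)):
            return q
    return None

# hypothetical nice set in Z^2: every direction l sees some u_i with u_i . l > 0
nice = [(1, 0), (0, 1), (-1, -1)]
assert zero_loop(nice) == (1, 1, 1)
```

Here the loop is 1·(1,0) + 1·(0,1) + 1·(−1,−1) = 0; a walk taking these three steps returns to its starting site.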
The following theorem says that in each half-space H^l_x a walk X_n starting from x can have a loop in H^l_x with probability strictly greater than 0. Because of stationarity it suffices to prove this for half-spaces through the origin. Theorem 3.1. There exist constants m ∈ N, c_1 ∈ R^+ such that: For each unit vector l ∈ R^d there exist an integer s ≤ m and a sequence x_0 = 0, x_1, x_2, …, x_s ∈ Z^d such that: (i) x_i ∈ {u_1, …, u_m} for i = 1, …, s; (ii) x_1 + ⋯ + x_s = 0; (iii) y_i · l ≥ 0 for each i, where y_i = x_0 + ⋯ + x_{i−1}; and (iv) For y_i = x_0 + ⋯ + x_{i−1}, let us denote by F_{y_1,…,y_s} the σ-algebra generated by all random variables η_z for z ∈ Z^d \ {y_1, …, y_s} such that min_{1≤i≤s} |z − y_i| ≤ M. Then: P(η_{y_1} = x_1, …, η_{y_s} = x_s | F_{y_1,…,y_s}) ≥ c_1. Proof. Let us prove that there exist vectors x_1, …, x_s ∈ {u_1, …, u_m} for which (i)-(iii) are satisfied; then (iv) will be satisfied as well. To see that, let us denote by G^{y_1,…,y_{s−1}}_{y_s} the σ-algebra generated by all random variables η_z for z ∈ Z^d \ {y_s} such that min_{1≤i≤s} |z − y_i| ≤ M.
Using the previous two results we can establish the equality analogous to the one from Theorem 2.1 in the case k = 0.

Theorem 3.2.
For each vector l ∈ R^d and X_n defined as before the following equality holds: lim_{n→∞} (1/n) log P(X_n · l ≥ 0) = 0. Proof. The inequality P(X_n · l ≥ 0) ≤ 1 implies that lim sup (1/n) log P(X_n · l ≥ 0) ≤ 0. For the other inequality we use Theorem 3.1. Let x_1, …, x_s be the sequence whose existence is claimed by that theorem, and let y_i = x_0 + ⋯ + x_{i−1} as before. Notice that if the walk makes the steps x_1, …, x_s in its first s steps, it enters a loop that stays in the half-space H^l_0 forever; hence P(X_n · l ≥ 0) ≥ c_1 for every n, and therefore lim inf (1/n) log P(X_n · l ≥ 0) ≥ 0. We will also need the following deterministic lemma.
Lemma 3.3. Let ρ(l) = max_{1≤i≤m} u_i · l. Then inf_{|l|=1} ρ(l) > 0. Proof. First notice that ρ(l) > 0 for each l ∈ R^d \ {0}; otherwise the set {u_1, …, u_m} would not be nice. Notice also that ρ is a continuous function (because it is a maximum of m continuous functions) and the unit sphere is a compact set. Thus the infimum of ρ over the unit sphere is attained at some point, and we have just proved that the value of ρ at any point is positive.

HITTING TIMES OF HYPERPLANES
The main idea for the proof of Theorem 2.1 is to establish the asymptotic equivalence of (1/n) log P(X_n · l ≥ nk) and a sequence to which we can apply the deterministic subadditive lemmas. First we will prove that the previous sequence behaves like (1/n) log P(T^l_{nk} ≤ n). Then we will see that the asymptotic behavior of the latter sequence is that of (1/n) log P(T^l_{nk} ≤ n, T^l_{nk} ≤ D^l_1), where D^l_1 is the first time the walk returns over the hyperplane Z^l_0. This probability captures those walks that don't backtrack over the hyperplane Z^l_0. We will be able to prove the existence of the limit of the last sequence using a modification of the standard subadditive lemma that states that lim a_n/n = inf a_n/n if a_{n+m} ≤ a_n + a_m for all m, n ∈ N. From now on let us fix the vector l ∈ R^d and omit the superscript l in the variables. Some of the new variables defined below would also need a superscript l, but we omit it as well.
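The subadditive (Fekete) lemma quoted above, lim a_n/n = inf a_n/n whenever a_{n+m} ≤ a_n + a_m, can be illustrated numerically. The sketch below uses the hypothetical subadditive sequence a_n = n + √n, for which both the limit and the infimum equal 1; the function names are ours.

```python
import math

def check_subadditive(a, N):
    """Verify a(n + m) <= a(n) + a(m) for all 1 <= n, m <= N."""
    return all(a(n + m) <= a(n) + a(m) + 1e-12
               for n in range(1, N + 1) for m in range(1, N + 1))

def a(n):
    # hypothetical subadditive sequence: a_n = n + sqrt(n)
    return n + math.sqrt(n)

assert check_subadditive(a, 200)
# Fekete: lim a_n / n = inf a_n / n; here a_n / n = 1 + 1/sqrt(n) decreases to 1
assert abs(a(10**6) / 10**6 - 1.0) < 1e-2
assert min(a(n) / n for n in range(1, 10**5)) > 1.0
```

The infimum is approached but never attained, which is exactly the behavior the lemma allows: the limit equals the infimum of a_n/n, not a minimum.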
Our first result in carrying out the plan described above is the following lemma. Lemma 4.1. The following inequality holds: lim sup_{n→∞} (1/n) log P(X_n · l ≥ nk) ≤ lim sup_{n→∞} (1/n) log P(T_{nk} ≤ n). In addition, for each ε > 0 we have: lim inf_{n→∞} (1/n) log P(X_n · l ≥ nk) ≥ lim inf_{n→∞} (1/n) log P(T_{n(k+ε)} ≤ n). Proof. The inclusion {X_n · l ≥ nk} ⊆ {T_{nk} ≤ n} establishes the first inequality. Let x_0, …, x_s be the sequence whose existence follows from Theorem 3.1. We now have the corresponding lower bound, where F_{T_{kn}} denotes the σ-algebra generated by η_z for z ∈ Z^d such that |z − X_i| ≤ M for some i = 1, 2, …, T_{kn}. The equality in (2) holds because if T_{kn} ≤ n − s and X_{T_{kn}+1} − X_{T_{kn}} = x_1, …, X_{T_{kn}+s} − X_{T_{kn}+s−1} = x_s, then the walk enters a loop. This loop lies in the half-space H_{kn}, which guarantees that X_n · l > kn. For every n we have {T_{n(k+ε)} ≤ n − s} ⊆ {T_{nk} ≤ n − s}, and consequently the first set has smaller probability than the second. This completes the proof of the lemma.
Remark. In the same way we could obtain analogous inequalities with the walk X_n replaced by X_{n∧τ}.
For each integer i ≥ 1 denote by D_i the time of the i-th jump over the hyperplane Z_0 in the direction of the vector l. Define D_0 = 0; we allow D_i to be ∞.

Lemma 4.2.
Let k and k′ be two real numbers such that 0 < k′ < k. Then the following two inequalities hold: Proof. We have: We will prove that each term from the right-hand side of the previous equality is bounded above appropriately. Let Z′_0 be the set of all points z ∈ Z^d such that z · l > 0 and the distance between z and Z_0 is at most L. Here F_{D_i,T_u} denotes the σ-algebra generated by the random environment contained in the M-neighborhood of the walk from D_i to T_u. When conditioning on this σ-algebra we essentially understand our environment in the following way: it consists of two walks, one deterministic that goes from z to Z_u without crossing the hyperplane Z_0, and another that starts at 0 and ends in z, making exactly i crossings over Z_0, not intersecting the deterministic walk, and not crossing over Z_u. Therefore: where T̃_u is defined analogously to T_u, corresponding to the new walk X̃_j = X_{D_i + j}. In the last equation D̃_1 is defined as the first time of crossing over the hyperplane Γ parallel to Z_0 that is shifted by the vector −L l/|l|. Let us now bound P(D̃_1 < T̃_u). Denote by J the time the walk comes closest to the hyperplane Γ. The number of possible positions of the walk at that time is at most L^d n^{d−1}, and, similarly as above, we condition on the σ-field generated between those times. On the other hand, it is obvious that the required bound follows.

LARGE DEVIATIONS ESTIMATES
Now we are ready to prove the main theorem. Proof of Theorem 2.1. We will prove that for each unit vector l there exists a concave function ψ : R^+ → R such that lim_{n→∞} (1/n) log P(T_{nk} ≤ n) = ψ(k). Because of Lemma 4.2 it suffices to prove that there exists a concave function γ : R^+ → R such that lim_{n→∞} (1/n) log P(T_{nk} ≤ n, T_{nk} ≤ D_1) = γ(k). Let w ∈ Z^d be a vector such that w · l > 0 and P(η_z = w | F_z) ≥ c > 0 for some constant c. Assume that r is an integer such that the distance between the hyperplanes Z_{rw} and Z_0 is at least M. If u, v, p, q are any four positive real numbers such that q > r, then the inequality (5) holds. If the environment were iid this could be established by conditioning on F_u. In our situation the idea is the same; we just need some more work to compensate for the lack of independence.
Let us introduce the following notation: X̂_i = X_{T_u + r − 1 + i}, and D̂_1 the first time X̂_i jumps over Z_{X_{T_u} + rw}. Let us explain why one can bound E[1(ξ_{T_u} = w) ⋯ 1(ξ_{T_u + r − 1} = w) | F_{[X_{T_u}, X_{T_u} + rw·l]}] from below by c. The quantity in question is a random variable, and we are only considering that random variable on the set described above. On that set the random walk doesn't visit the sites X_{T_u}, …, X_{T_u + r − 1}, hence we can use our ellipticity assumption. This establishes (5).
Applying the inequality (5) to the numbers u = nk, v = mk, p = n, q = m yields the corresponding relation, where k′ is any real number greater than k for which (m − r)k′ ≥ mk. From now on we will write c instead of log c. In other words, for each k and each k′ > k the relation holds for all m, n such that n ≥ rk′/(k′ − k). Let δ(n, k) = log P(T_{nk} ≤ n, T_{nk} ≤ D_1) and δ(k) = lim sup_{n→∞} δ(n, k)/n.
If k < k′ then for each α < δ(k′) there exists a sequence n_t that goes to infinity such that δ(n_t, k′)/n_t ≥ α. For each fixed n_t and each n ≥ n_t there exist integers a ≥ 0 and b ∈ {0, 1, 2, …, n_t + r − 1} such that n = a(n_t + r) + b. Therefore δ(n, k) can be bounded below using blocks of length n_t + r. For each µ > 0 there exists t_0 such that for all t > t_0 the resulting bound holds uniformly in a and b. There exists n_0 such that for all n > n_0, the number a = ⌊n/(n_t + r)⌋ is large enough to guarantee the required estimate. A consequence of the previous inequality is the monotonicity of the functions δ(n, ·) and δ: they are both non-increasing. Let α and β be two positive rational numbers such that α + β = 1, and let k_1 and k_2 be any two positive real numbers. According to (5), for sufficiently large n ∈ N we have: δ(n, αk_1 + βk_2) = log P(T_{n(αk_1+βk_2)} ≤ αn + βn, T_{n(αk_1+βk_2)} ≤ D_1) ≥ log P(T_{nαk_1} ≤ αn, T_{nαk_1} ≤ D_1) + log P(T_{nβk_2} ≤ βn − r, T_{nβk_2} ≤ D_1) + c ≥ δ(αn, k_1) + δ(βn, k′_2) + c, where k′_2 is any number larger than k_2. This implies that δ(αk_1 + βk_2) ≥ αδ(k_1) + βδ(k′_2). Let us justify the second inequality: the previous lim sup is certainly at least as large as the lim inf over the sequence of those integers n that are divisible by the denominators of both α and β.
Taking the limit of both sides as n → ∞ and using the monotonicity of δ we get the desired inequality. This inequality, together with δ(k + ε/2) ≥ δ(k + ε), implies that δ is right-continuous.
Using Lemma 4.1 we get that lim sup_{n→∞} (1/n) log P(X_n · l ≥ nk) ≤ ψ(k) and lim inf_{n→∞} (1/n) log P(X_n · l ≥ nk) ≥ ψ(k + ε) for all ε > 0. If k belongs to the interior of ψ^{−1}(R), we can take ε → 0 in the previous inequality and use the continuity of ψ to obtain lim inf_{n→∞} (1/n) log P(X_n · l ≥ nk) ≥ ψ(k).
This in turn implies (1) and the concavity of φ .
The Gärtner–Ellis theorem will enable us to get more information on lower and upper large deviation bounds for general sets. Definition 5.1. Assume that Λ* is the convex conjugate of the function Λ. A point y ∈ R^d is an exposed point of Λ* if for some λ ∈ R^d and all x ≠ y, λ · y − Λ*(y) > λ · x − Λ*(x). (8) Such a λ is called an exposing hyperplane.
We are now ready to prove the theorem stated in the introduction.
Proof. As noted in the remark after Theorem 2.1, there exists a function Φ : R^d → R such that for all l ∈ R^d and k ∈ R^+: φ_l(k) = Φ((1/k) l). For each λ ∈ R^d and each k > 0 we have that lim inf (1/n) log E[e^{X_n·λ}] ≥ lim inf (1/n) log E[e^{X_n·λ} · 1(X_n·λ > kn)] ≥ k + lim inf (1/n) log P(X_n·λ > kn) = k + Φ(λ/k). Moreover, from Theorem 3.2 we get lim inf (1/n) log E[e^{X_n·λ}] ≥ lim inf (1/n) log P(X_n·λ ≥ 0) = 0. Therefore lim inf (1/n) log E[e^{X_n·λ}] ≥ max{0, sup_{k>0} (k + Φ(λ/k))}. From the boundedness of the jumps of the random walk X_n we have |X_n·λ| ≤ nL|λ|. Let r ∈ N and 0 = k_0 < k_1 < k_2 < ⋯ < k_r = L|λ|. Then lim (1/n) log E[e^{X_n·λ}] = lim (1/n) log (E[e^{X_n·λ} · 1(X_n·λ ≤ 0)] + Σ_{i=0}^{r−1} E[e^{X_n·λ} · 1(k_i n < X_n·λ ≤ k_{i+1} n)]), and since r ∈ N is fixed as n → ∞ this equals the maximum of the exponential rates of the r + 1 summands. The rate of the first summand is 0, because e^{X_n·λ} · 1(X_n·λ ≤ 0) ≤ 1 and Theorem 3.2 implies that lim_{n→∞} (1/n) log P(X_n·(−λ) ≥ 0) = 0; the rate of the i-th remaining summand is at most k_{i+1} + Φ(λ/k_i). Theorem 2.1 implies that the function Φ((1/k)λ) is continuous in k, hence letting r → ∞ with k_{i+1} − k_i constant we get: lim sup (1/n) log E[e^{X_n·λ}] ≤ max{0, sup_{k>0} (k + Φ(λ/k))}. This proves the existence of the limit from the statement of the theorem, with Λ(λ) = max{0, sup_{k>0} (k + Φ(λ/k))}. We will not use this representation of Λ to prove its convexity. Notice that all functions Λ_n(λ) = log E[e^{X_n·λ}] are convex when n is fixed. Indeed, for all α, β ∈ R^+ with α + β = 1 and all λ, µ ∈ R^d, Hölder's inequality gives: e^{Λ_n(αλ+βµ)} = E[(e^{X_n·λ})^α · (e^{X_n·µ})^β] ≤ (E[e^{X_n·λ}])^α · (E[e^{X_n·µ}])^β = e^{αΛ_n(λ)+βΛ_n(µ)}.
Since the limit of convex functions is convex, as is the maximum of two convex functions, we conclude that Λ is convex. The origin belongs to the interior of the set {λ ∈ R^d : Λ(λ) < +∞} because Λ(λ) ≤ L|λ|, so Λ is finite everywhere.
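The Hölder computation above can be checked numerically: for any finite sample of walk positions, the empirical version of Λ_n, namely the log of the sample average of e^{λ·x}, is convex in λ. A minimal one-dimensional sketch (the sample values and function names are ours):

```python
import math
import random

def log_mgf(sample, lam):
    """Empirical log moment generating function log((1/N) * sum e^{lam * x}),
    computed with a log-sum-exp shift for numerical stability."""
    m = max(lam * x for x in sample)
    return m + math.log(sum(math.exp(lam * x - m) for x in sample) / len(sample))

rng = random.Random(1)
sample = [rng.randint(-5, 5) for _ in range(1000)]   # any fixed sample of positions
# midpoint convexity in lambda, exactly as in the Holder argument
for lam, mu in [(-2.0, 1.0), (0.0, 3.0), (-1.5, -0.5)]:
    mid = log_mgf(sample, 0.5 * (lam + mu))
    assert mid <= 0.5 * log_mgf(sample, lam) + 0.5 * log_mgf(sample, mu) + 1e-12
```

The assertion holds for every sample, since log-mean-exp of linear functions of λ is convex regardless of the underlying distribution.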

Theorem 5.2. Let X_n be the previously defined deterministic walk in a random environment that satisfies the conditions (i)-(iii). Let Λ be the function from Theorem 5.1 and let Λ* be its convex conjugate. Let E be the set of exposed points of Λ* whose exposing hyperplane belongs to the interior of the set {λ ∈ R^d : Λ(λ) < +∞}. Then for any closed set F ⊆ R^d, lim sup_{n→∞} (1/n) log P(X_n/n ∈ F) ≤ −inf_{x∈F} Λ*(x), and for any open set G ⊆ R^d, lim inf_{n→∞} (1/n) log P(X_n/n ∈ G) ≥ −inf_{x∈G∩E} Λ*(x).

LAW OF LARGE NUMBERS
Let us end with a note about the law of large numbers for this deterministic walk in a random environment. It is not surprising that the walk has limiting velocity 0, because the walk is expected to end in a loop eventually. Theorem 6.1. lim_{n→∞} (1/n) E[X_n] = 0. Proof. It suffices to prove that lim_{n→∞} (1/n) E(X_n · l) = 0 for each l ∈ R^d, because the zero vector is the only one orthogonal to all of R^d. Furthermore, the problem reduces to proving that (1/n) E[(X_n · l)^+] converges to 0, because X_n · l = (X_n · l)^+ − (X_n · (−l))^+. By Fubini's theorem we have (1/n) E[(X_n · l)^+] = ∫_0^∞ P(X_n · l > nt) dt. Since {X_n · l > nt} = ∅ for t > L, the integration can be performed over the interval (0, L) only. Let x_1, …, x_s be a sequence from Theorem 3.1, and let y_k = Σ_{i=1}^k x_i. Define the random walk Y_i as Y_i = X_{s+i}. The probability that the walk reaches the half-space H^l_{nt} before time n is smaller than the probability of the following event: the walk does not make a loop in the first s steps, and after that it reaches the half-space H^l_{nt−sL}. Therefore we deduce that for each t ∈ (0, L) the corresponding inequality holds. From Theorem 3.1 we have that E[1((X_1, …, X_s) ≠ (y_1, …, y_s)) | F_{Y_1,…,Y_{n−s}}] ≤ 1 − c for some constant c > 0. Let us denote g = 1 − c; we know that g ∈ (0, 1). Using mathematical induction, we can repeat the previous sequence of inequalities ⌊nt/(sL)⌋ times to obtain that P(X_n · l ≥ nt) ≤ g^{⌊nt/(sL)⌋}. Now for all t_0 > 0 the following inequality holds: (1/n) E[(X_n · l)^+] ≤ t_0 + ∫_{t_0}^L g^{⌊nt/(sL)⌋} dt. If we keep t_0 fixed and let n → ∞, it is easy to see that the last integral converges to 0. Therefore lim sup (1/n) E[(X_n · l)^+] ≤ t_0. Since this holds for every t_0 > 0, we get lim sup (1/n) E[(X_n · l)^+] ≤ 0. This finishes the proof of the theorem.
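The zero limiting velocity can be illustrated by Monte Carlo. The sketch below assumes a hypothetical iid environment on Z^2 with steps uniform on {±e_1, ±e_2}; the walk is typically trapped in a loop after a short excursion, so X_n/n is close to 0. Function names and parameters are ours.

```python
import random

def endpoint(n, rng):
    """X_n for the deterministic walk in a fresh iid environment on Z^2
    (steps uniform on {+-e1, +-e2}, frozen on first visit to each site)."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    eta, x = {}, (0, 0)
    for _ in range(n):
        if x not in eta:
            eta[x] = rng.choice(steps)
        x = (x[0] + eta[x][0], x[1] + eta[x][1])
    return x

rng = random.Random(2)
n, reps = 2000, 200
samples = [endpoint(n, rng) for _ in range(reps)]
speed = max(abs(sum(s[i] for s in samples)) / (reps * n) for i in (0, 1))
# the walk is confined to a bounded loop early on, so X_n / n is near 0
assert speed < 0.05
```

Since the loop returns to a previously visited site, |X_n| stays bounded by the radius of the initial excursion, which makes the estimated velocity shrink like 1/n.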

PROCESSES IN RANDOM ENVIRONMENTS WITH TELEPORTS
Our aim is to study the continuous-time process X_t on (Ω, P) that solves the following ODE: dX_t/dt = b(X_t, ω). (9) As before, in order to prove Theorem 1.1 we look at the probabilities P(X_t · l > kt) for fixed l ∈ R^d and fixed k > 0.
Let us first outline the main difficulties in implementing the proof of Theorem 2.1 in the continuous setting. The proof of Lemma 4.2 used the fact that by time n the walk can jump only n times over the hyperplane through 0. The continuous process can jump infinitely many times over that hyperplane, and those jumps can happen in a relatively short time. Instead, we will look at a strip around 0 of positive width; by a strip we mean the region of space between two hyperplanes orthogonal to l. The process can't travel across the strip infinitely many times, because its speed is finite. Thus we will require our process not to backtrack over the hyperplane Z_{−w} for a suitable w > 0. Since this is not enough to separate the process and prove the inequality (5), our goal is to prove that the probability of the event that the process doesn't backtrack over the hyperplane Z_{−w} is comparable to the probability that the process doesn't backtrack over Z_0. The difficulty here comes from the fact that the process can approach Z_0 with slow speed, which can introduce a lot of dependence in the environment. This can happen especially if the field b has a lot of continuity. We introduce additional assumptions that put more randomness into the definition of X_t, and that randomness helps the process escape from such environments. Thus we will have some assumptions that continuous processes can't satisfy; it turns out that bounded processes won't satisfy some of the requirements either.
We will consider a Poisson point process independent of the environment. We will assume that the balls of radius r centered at the points of the process serve as "teleports," and our process evolves according to (9) until it spends a fixed positive time in some of the teleports. Once that happens, the process will reappear at another location. In some sense, these teleports correspond to the locations where b is infinite.
We will also assume that the vector field b may change discontinuously on the lines of the grid Z^d.
The existence of teleports and discontinuities will help us build tunnels that can take the process in the directions we wish it to travel, as well as traps that can hold the process for a long time. It is quite possible (and believable) that the process would have traps, tunnels, and means of escape even without teleports, but proving such statements turned out to be difficult. Often some other assumptions need to be made, and then one has to deal with tedious work with conditional distributions and some adaptations of Brownian bridges to this deterministic setting.
In order to use the arguments based on subadditivity we need stationarity of the underlying random field. Our assumptions make the grid Z^d special, so we can't hope for general stationarity. We will assume only that P(A) = P(τ_z A) for every event A and every z ∈ Z^d. This will be sufficient for our purposes, because we will assume that after each teleportation the process appears at a point that has the same distribution as the initial point.
The initial position of the process is assumed to be chosen uniformly inside the unit cube [0, 1] d . The random choice of the point is assumed to be independent from the rest of the environment.
For each unit cube Q of the lattice, denote by F Q,0 the σ -algebra generated by the environment in the complement of Q.
We require the following assumptions: (i) There are δ_0 > 0 and a positive real number c such that for each lattice cube Q of edge length 1 and each vector l ∈ R^d we have: (ii) The vector field b is bounded and has finite-range dependence, i.e., there are positive constants L and M such that |b(ω)| ≤ L for all ω ∈ Ω, and b(τ_x ω) is independent of the σ-field generated by the environment outside of the ball B(x, M).
(iii) Fix r ∈ (0, 1/(4√d)) and t_0 > 0 such that t_0 < r/(4L). Fix also c_3 ∈ (0, 1), λ_0 > 0, and a sequence Y_n of iid random variables with values in {φ} ∪ [0, 1]^d. Each Y_n equals φ with probability c_3, or (with probability 1 − c_3) is uniformly distributed in [0, 1]^d. There exists a set of vectors {u_1, …, u_m} ⊆ R^d such that for each l ∈ R^d there is i ∈ {1, 2, …, m} that satisfies dist((B_r + u_i) · l, B_r · l) > 2. Here B_r denotes the ball of radius r. For each i, we consider a Poisson point process of intensity λ_0, and each point x of that process forms a (u_i, t_0)-teleport. This means that if the process X_t spent the entire time interval (t − t_0, t) in the ball B_r(x), then at time t the process X_t will either stay at the same place, if Y_{⌊t/t_0⌋} = φ, or reappear in the lattice cube closest to X_t + u_i at the relative position Y_{⌊t/t_0⌋} within the cube.
Notice that by fixing the time t_0 we make sure that the average speed of X_t remains bounded: although there is teleportation involved, the particle has to wait to be teleported, and the distance it can travel in a jump is bounded.
There can be regions belonging to more than one teleport. If the particle is subject to two teleportations simultaneously, the jump is suppressed. A particle can be subject to more than one teleportation only if it enters the teleports by a jump; entering two teleports simultaneously by continuous motion is a zero-probability event.
The requirement t_0 < r/(4L) guarantees that if the process comes within r/2 of the Poisson point, it will be teleported for sure, because it won't have enough time to escape.
For a fixed vector λ ∈ R^d \ {0} we consider the moment generating function E(exp(λ · X_t)), and we want to prove that there is a convex function Λ such that lim_{t→∞} (1/t) log E(exp(λ · X_t)) = Λ(λ). We recall the definition of the hitting times T_p of the hyperplanes, for p ∈ R^+, from Section 4. In this continuous case we have a result analogous to Lemma 4.1.

Lemma 7.1. The following inequality holds:
lim sup_{t→∞} (1/t) log P(X_t · l ≥ tk) ≤ lim sup_{t→∞} (1/t) log P(T_{tk} ≤ t). In addition, for each ε > 0 we have: lim inf_{t→∞} (1/t) log P(X_t · l ≥ tk) ≥ lim inf_{t→∞} (1/t) log P(T_{t(k+ε)} ≤ t). Proof. The first inequality follows immediately as in the discrete case. For the second one we have to modify the argument a bit. We are not able to construct a loop as we did in the discrete case, the reason being that in some cases the curve X_t has no self-intersections (an example is when b is a gradient of a function). Denote by S ⊆ R^d the set of points each of whose coordinates belongs to {−1, 0, 1}. Denote the points of S by P_0, P_1, P_2, …, P_{3^d−1}, and assume that P_0 coincides with the origin O. Let C_i (0 ≤ i ≤ 3^d − 1) be the cube with center P_i and side length 1. For each integer i ∈ {1, 2, …, 3^d − 1}, consider the event D_i that the field b in C_i points towards C_0, and let D be the intersection of these events with the event D′ that there are no Poisson points in the r-neighborhood of C_0. Denote by F_{C_1,…,C_{3^d−1}} the σ-algebra generated by the environment in the cubes C_1, …, C_{3^d−1}. According to our assumptions, we have P(D | F_{C_1,…,C_{3^d−1}}) > 0. Notice that if the process ever enters the cube C_0, it will stay there forever. We will show that if X_{t_0} ∈ C_0, then for each t ≥ t_0 we have X_t ∈ C_0. It suffices to prove that X_t does not enter C_i for any i ∈ {1, …, 3^d − 1}. Assume the contrary, and let t_1 be a time at which the process crosses into some C_i. Using the fundamental theorem of calculus, this contradicts the direction of b on the event D_i. Therefore, X_t can't enter the interior of any of the cells C_i, proving that the process X_t is trapped in the cell C_0. Now we can finish the proof in a similar way as in the discrete case. Let S_{tk} denote the "shard"-like surface consisting of faces of the grid of size 3 that lies in front of the hyperplane Z_{tk} (here "in front of" means with respect to the direction l). Let T̂_{tk} = T_{S_{tk}}. Denote by Ĉ the translation of the union of the cubes C_i with the following properties: Ĉ contains the point X_{T̂_{tk}} on one of its faces, and it lies on the opposite side of S_{tk} from the origin. Denote by D̂ the event that the environment in Ĉ is as explained before. Denote by Ê the event that Y_{⌊T̂_{tk}/t_0⌋} = φ.
Using conditioning we get a lower bound; denote by R the right-hand side of that inequality. Let F̂ be the event that there exist i ∈ {1, …, m} and t_1 ∈ (t_0⌊T̂_{tk}/t_0⌋, T̂_{tk}] such that X_t spent all of the time interval (t_1 − t_0, t_1) in a u_i-teleport. Consider the first summand on the right-hand side of the previous inequality. On the event F̂ we know that the random variable Y_{⌊T̂_{tk}/t_0⌋} will not be responsible for any further jumps: the process has already spent the required time in a teleport, so the value of Y_{⌊T̂_{tk}/t_0⌋} was already used in making the decision whether there would be a jump or not. We don't care about the outcome, because the process reached the level S_{tk}, but we know for sure that a single value of Y can't be responsible for two decisions about jumps. Hence the event Ĝ is sufficient to ensure that the path X_t enters the trap before having any further jumps. There are only finitely many conditions in its definition, so the probability of Ĝ is strictly positive. We also have that 1(X_t · l ≥ kt) = 1 on the intersection D̂ ∩ Ê ∩ F̂ ∩ {T̂_{tk} ≤ t}. Hence there is a constant c′ such that on the event F̂ we have E[1(D̂) | F_{T̂_{kt}}] ≥ c′. Thus we can bound the first summand from below by c′ · P(T̂_{kt} ≤ t, F̂).
Let us now consider the second summand. Notice that Let $\hat{G}$ be the same event as above. The identity $\mathbf{1}(X_t \cdot l \ge kt) = 1$ holds as before, because on the set $\hat{E} \cap \hat{G}$ the process enters the trap. The unfortunate thing is that we had to modify the past by introducing the event $\hat{E}$. In the same way as in the case of the first summand of (10), we can bound the second expression from (11) from below by $c' \, \mathbb{P}(\hat{T}_{tk} \le t, \hat{E}, \hat{F}^c)$. On the event $\{\hat{T}_{tk} \le t, \hat{F}^c\}$ we know that the process reached the level $S_{tk}$ by time $t$ without spending sufficient time in a teleport to be considered for a jump in which $Y_{\lfloor \hat{T}_{tk}/t_0 \rfloor}$ would play a deciding role. Therefore $\hat{E}$ is independent of $\{\hat{T}_{tk} \le t, \hat{F}^c\}$, hence This allows us to conclude that there is a constant $c'' > 0$ such that Taking the logarithm of both sides of the last inequality, dividing by $t$, and taking the $\liminf$ as $t \to +\infty$, one obtains the following inequality: lim inf For each $\varepsilon > 0$ there exists $t_0$ such that every $t > t_0$ satisfies This completes the proof of the lemma.
Following the approach from the discrete case, our goal is to establish a statement similar to Lemma 4.2.
Lemma 7.2. Let $k$ and $k'$ be two real numbers such that $0 < k' < k$. Then the following two inequalities hold: Proof. The second inequality is obvious since $\{T_{tk} \le t, T_{tk} \le D_1\} \subseteq \{T_{tk} \le t\}$. Our proof of the first inequality of Lemma 4.2 used the fact that the number of crossings of the walk is finite; obviously, we cannot use that fact in the continuous setting. The idea is to break the process between its crossings of the strip between the hyperplanes $Z_{-w}$ and $Z_0$, where $w$ is a fixed real number from the interval $(\frac{1}{2}, 1)$. Define the following stopping times: $G_0 = 0$, $F_0 = \inf\{t : X_t \cdot l \le -w\}$. Having defined $G_i$ and $F_i$ for $i \ge 0$, we inductively define We will need the following lemma. Lemma 7.3. For any two real numbers $k$ and $k'$ satisfying $0 < k' < k$ we have Proof. There is at least one of the vectors from assumption (iii), say $u_1$, such that $\mathrm{dist}\left( (B_r + u_1) \cdot l, B_r \cdot l \right) > 2$. Let $B$ be the event that $b(\tau_z \omega) \cdot l > 0$ for each $z$ in the cells adjacent to the origin, that the origin is at a distance smaller than $\frac{1}{2}r$ from a $(u_1, t_0)$-teleport, and that the origin is at a distance at least $r$ from any other teleport. The conditional probability of this event is bounded below by a constant. By time $t_0$ the process will be away from the origin. Denote by $\hat{X}$ the process defined by $\hat{X}_t = X_{t+t_0}$. Let $\hat{G}_i$ and $\hat{F}_i$ denote the stopping times for the process $\hat{X}$ analogous to the stopping times $G_i$ and $F_i$. Let $\hat{B}$ denote the event that each point of the $(d-1)$-dimensional ball of radius $\max_i |u_i|$ in the hyperplane $Z_{-u_1 \cdot l}$ is at a distance at most $\frac{1}{4}r$ from a $(u_1, t_0)$-teleport. The probability of this event is strictly positive, and the event is independent of the $\sigma$-algebra generated by the environment on the positive $l$-side of the hyperplane $Z_0$. Therefore we have It remains to notice that for sufficiently large $t$ we have $tk' \le (t - t_0)k$, hence This completes the proof of Lemma 7.3.
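The inductive definition of the stopping times can be made concrete in discrete time. The sketch below is a hypothetical helper (not from the paper): it scans a toy trajectory of the projection $X_t \cdot l$ and records the alternating times $G_0, F_0, G_1, F_1, \dots$.

```python
def crossing_times(y, w):
    """Discrete-time illustration of the stopping times:
    G_0 = 0, F_i = first time after G_i with X_t.l <= -w,
    G_{i+1} = first time after F_i with X_t.l >= 0.
    Here y[t] plays the role of the projection X_t.l."""
    times, t, looking_for_F = [("G", 0)], 0, True
    while True:
        if looking_for_F:
            nxt = next((s for s in range(t, len(y)) if y[s] <= -w), None)
            label = "F"
        else:
            nxt = next((s for s in range(t, len(y)) if y[s] >= 0), None)
            label = "G"
        if nxt is None:
            return times          # no further crossing exists
        times.append((label, nxt))
        t, looking_for_F = nxt, not looking_for_F

path = [0, -0.3, -0.8, -0.4, 0.1, -0.9, 0.2]   # toy projection X_t.l
print(crossing_times(path, w=0.75))
# → [('G', 0), ('F', 2), ('G', 4), ('F', 5), ('G', 6)]
```

Each `F`/`G` pair corresponds to one trip of the process over the strip between $Z_0$ and $Z_{-w}$.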
For any real number $u > 0$ we write the event $\{T_u \le t\}$ as the following union: The last union turns out to be finite, because we can prove that if $T_u \le t$, then $T_u \le F_{\lceil tL/w \rceil}$. We first find a lower bound on $F_{\lceil tL/w \rceil}$. The fundamental theorem of calculus implies which together with $|l| = 1$ yields $F_i - G_i \ge \frac{w}{L}$. Using this inequality we obtain Therefore $F_{\lceil tL/w \rceil} \ge t$, and on $\{T_u \le t\}$ we immediately get $T_u \le F_{\lceil tL/w \rceil}$. This implies that Let us prove that each term on the right-hand side of the last inequality can be bounded by the quantity $\mathbb{P}(T_u \le t, T_u \le F_0)$. Denote by $\mathcal{F}_{G_{i+1}, T_u}$ the sigma-algebra generated by the environment contained in the $M$-neighborhood of the process from $G_{i+1}$ to $T_u$. Notice that if $T_u \le t$ and $T_u \ge F_i$, then the process has made at least $i$ trips over the region between the hyperplanes $Z_0$ and $Z_{-w}$. Since $T_u \le F_{i+1}$ and $T_u \le t$, we conclude that the process has crossed the hyperplane $Z_u$ by time $t$, which means that it had to cross the hyperplane $Z_0$ again. Therefore $G_{i+1} \le t$. Let $\hat{X}$ be the process starting at time $G_{i+1}$; more precisely, we define $\hat{X}_t = X_{G_{i+1}+t}$. We use $\hat{F}_i$ and $\hat{G}_i$ to denote the stopping times for $\hat{X}_t$ analogous to $F_i$ and $G_i$.
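The finiteness of the union rests on the following elementary estimate. This is a sketch, under the reading that the constant $L$ bounds the drift, $\sup |b| \le L$, and that $|l| = 1$:

```latex
\Bigl|\tfrac{d}{dt}\,(X_t \cdot l)\Bigr|
  = |b(X_t, \omega) \cdot l| \le L
\quad\Longrightarrow\quad
F_i - G_i \ge \frac{w}{L},
\qquad\text{hence}\qquad
F_{\lceil tL/w \rceil}
  \ge \Bigl\lceil \frac{tL}{w} \Bigr\rceil \cdot \frac{w}{L}
  \ge t .
```

Indeed, between consecutive stopping times the projection $X_t \cdot l$ must traverse a distance of at least $w$ at speed at most $L$, and summing these $\lceil tL/w \rceil$ increments gives the stated bound.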
The last conditional expectation can be bounded above by $1$. Although our process does not possess the Markov property, we can use the trivial bound on the indicator function, namely $\mathbf{1}(T_u \ge F_i) \le 1$. Therefore Setting $u = tk$ and using Lemma 7.3 we obtain the bound $\limsup_{t \to \infty} \frac{1}{t} \log \mathbb{P}(T_{tk} \le t) \le \cdots$ for any two real numbers $k'$ and $k$ such that $0 < k' < k$.
Proof of Theorem 1.1. The proof proceeds in the same way as in the discrete case once we establish inequality (5). Denote by $Q$ the parallelepiped $[0, \lceil 2L \rceil] \times [-5, 5]^{d-1}$. Similarly to the proof of Lemma 7.1, denote by $S_u$ the surface consisting only of faces of the grid of size $\lceil 2L \rceil$ that lies in front of $Z_u$ when viewed from the origin in the direction $l$. Denote by $Q_u$ the appropriate isometric copy of $Q$ placed on the side of $S_u$ opposite to the origin, such that $Q_u$ contains on one of its faces the point at which the process $X_t$ hits $S_u$. Denote by $B_1$ the event that, after reaching the surface $S_u$, the process encounters an environment that takes it through the parallelepiped $Q_u$ in time less than $4L/\delta_0$.
Such an environment can be constructed by requiring that the central cells of $Q_u$ satisfy $b \cdot l > \frac{\delta_0}{2}$. In the outer cells of $Q_u$ the environment acts as a trap and pushes the process towards the middle cells. In this way the parallelepiped acts as a tunnel through which the process must travel. This reduces the portion of the environment that gets exposed to the process, and plays the role of the sequence of steps of size $w$ that we used in proving (5). At the end of the tunnel $Q_u$ we require that the environment has a set of teleports in the direction $u_1$, while before the end there are no teleports overlapping with the tube. The probability of such an environment in $Q_u$ is still positive, and after the passage through the tunnel the process appears at a uniform location within a cube. We may also assume that the furthest face of the parallelepiped $Q_u$ is "almost parallel" to the hyperplane $Z_u$. The issue is that the vector $l$, and consequently the planes $Z_u$, are at an angle with respect to the grid, and we want our last teleports to be decently aligned, so that they do not protrude into the environment after the jump. Let $B_2$ be the event that at no moment within $t_0$ of the time of hitting the surface $S_u$ has the process spent time longer than $t_0$ within a teleport. The precise definition of $B_2$ is the same as the definition of $\hat{F}$ in the proof of Lemma 7.1.
Denote by $\hat{T}_u$ the hitting time of $S_u$. We have the following sequence of inequalities: $\mathbb{P}(T_{u+v} \le p + q, T_{u+v} \le D_1) \ge \mathbb{P}(T_{u+v} \le p + q, T_{u+v} \le D_1, T_u \le p)$, where $\mathcal{F}_{[a,b]}$ is the $\sigma$-algebra determined by the environment outside of the strip between $Z_a$ and $Z_b$. Let us introduce the following notation: $\hat{X}_t = X_{\hat{T}_u + 2L + t}$, $\hat{D}_1$ is the first time $\hat{X}_t$ backtracks over $Z_{X_{\hat{T}_u} \cdot l + 2L}$, and $\hat{T}_v = \inf\{t : \hat{X}_t \cdot l \ge v\}$. Let $r = \frac{2L}{\delta_0}$. We now have $\mathbb{P}(\hat{T}_{u+v} \le p + q, \hat{T}_{u+v} \le \hat{D}_1)$. We now bound each of the two terms on the right-hand side in the same way as in the proof of Lemma 7.1 to get We can now replace $\hat{X}$ with a process in an independent environment, because it is sufficiently far away from $X$, and use the independence to obtain We can now proceed in the same way as in the case of the deterministic walk and establish the existence of the limit of the moment generating function. This completes the proof of Theorem 1.1.
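As in the discrete case, the existence of the limit ultimately comes from superadditivity (Fekete's lemma) applied, up to bounded correction terms, to $t \mapsto \log \mathbb{E}\, e^{\lambda \cdot X_t}$. A toy numerical illustration of Fekete's lemma, with a hypothetical superadditive sequence:

```python
import math

def fekete_limit(a, n_terms=2000):
    """Numerically illustrate Fekete's lemma: for a superadditive
    sequence a(m+n) >= a(m) + a(n), the limit of a(n)/n exists and
    equals sup_n a(n)/n.  Purely illustrative, not part of the proof."""
    vals = [a(n) / n for n in range(1, n_terms + 1)]
    return vals[-1], max(vals)

# a toy superadditive sequence: a(n) = 2n - log(1 + n)
# (superadditive since (1+m)(1+n) >= 1 + m + n)
a = lambda n: 2 * n - math.log(1 + n)
tail, sup = fekete_limit(a)
print(round(tail, 2))   # a(n)/n approaches the limit 2 from below
```

Here `fekete_limit` is only a demonstration device: in the proof the superadditivity is supplied by the chain of inequalities above, and the limit defines the function $\Lambda(\lambda)$ of Theorem 1.2's continuous analogue.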