On Covering Monotonic Paths with Simple Random Walk

In this paper we study the probability that a $d$-dimensional simple random walk (or the first $L$ steps of it) covers every point of a nearest neighbor path connecting 0 and the boundary of an $L_1$ ball. We show that among all such paths, the one that maximizes the covering probability is the monotone increasing one that stays within distance 1 of the diagonal. As a consequence, we obtain an exponentially decaying upper bound on the covering probability of any such path when $d \ge 4$.


Introduction
Cover times of graphs by simple random walk are a well studied subject [8]. However, there is not much literature on the basic question of the covering probabilities of subgraphs. Such questions are useful for geometric studies of random walk traces [9], for entropic calculations such as those appearing in Wulff constructions [1], and for percolation questions such as those arising for random interlacements [2,12,14].
In this paper, we study the probability that a finite subset, especially the trace of a nearest neighbor path in $\mathbb{Z}^d$, is completely covered by the trace of a $d$-dimensional simple random walk.
For any finite subset $A \subset \mathbb{Z}^d$ and a $d$-dimensional simple random walk $\{X_n\}_{n=0}^{\infty}$ starting at 0, we say that $A$ is completely covered by the first $L$ steps of the random walk if
$$A \subseteq \mathrm{Trace}(X_0, X_1, \cdots, X_L) := \{x \in \mathbb{Z}^d : \exists\, 0 \le i \le L,\ X_i = x\}.$$
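To make the definition concrete, here is a minimal simulation sketch (ours, not from the paper; the function names `srw_trace` and `covers` are our own) that draws the first $L$ steps of a $d$-dimensional simple random walk and checks the covering condition above.

```python
import random

def srw_trace(d, L, seed=None):
    # Trace {X_0, ..., X_L} of a d-dimensional simple random walk from 0.
    rng = random.Random(seed)
    x = [0] * d
    trace = {tuple(x)}
    for _ in range(L):
        i = rng.randrange(d)          # choose a coordinate uniformly
        x[i] += rng.choice((-1, 1))   # step +1 or -1 in that coordinate
        trace.add(tuple(x))
    return trace

def covers(A, d, L, seed=None):
    # True iff the finite set A is covered by the first L steps of the walk.
    return set(A) <= srw_trace(d, L, seed)
```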
For simplicity we state our first result for $d = 2$. For an integer $l_0 \ge 0$ and the line of reflection $l : x = y + l_0$, define $\varphi_l : \mathbb{Z}^2 \to \mathbb{Z}^2$ as the reflection around $l$; i.e., for any $(x, y) \in \mathbb{Z}^2$, $\varphi_l(x, y) = (l_0 + y, x - l_0)$.
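For illustration, the reflection $\varphi_l$ can be implemented directly from this formula (a small sketch of ours; note that $\varphi_l$ is an involution that fixes the points of $l$):

```python
def phi_l(p, l0):
    # Reflect the point p = (x, y) across the line l : x = y + l0.
    x, y = p
    return (l0 + y, x - l0)

assert phi_l(phi_l((3, 1), 1), 1) == (3, 1)   # involution
assert phi_l((2, 1), 1) == (2, 1)             # (2, 1) lies on x = y + 1
```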
Suppose two disjoint finite sets $A_0, B_0 \subset \mathbb{Z}^2 \cap \{(x, y) : x \le y + l_0\}$ both stay on the left of $l$. We then have the following theorem, which states that the covering probability cannot become larger when we reflect one of them to the other side of the line:

Theorem 1.1. For any integer $L \ge 0$,
$$P\big(A_0 \cup B_0 \subseteq \mathrm{Trace}(X_0, \cdots, X_L)\big) \ge P\big(A_0 \cup \varphi_l(B_0) \subseteq \mathrm{Trace}(X_0, \cdots, X_L)\big).$$

Remark 1.2. By taking a union over all the $L$'s, one can immediately see that the theorem also holds for $L = \infty$.

Remark 1.3. One might think (as the authors first did) that Theorem 1.1 should follow from repeated use of the reflection principle. Two problems arise when one explores this idea. The first is that reflecting a path does not preserve the hitting order within the sets, which makes it hard to determine the times of reflection. The second is that even if we consider the sets before and after reflection with the same hitting order, we can get a contradiction to the monotonicity of cover probabilities with the specified order. See Figure 1 for an example. Here the numbers associated with each vertex represent a specified hitting order. One may see that, after the reflection, it is now harder for a random walk starting from vertex 1 to reach vertex 2 without first hitting vertices 3, 4, and 5. An anonymous referee suggested using Reimer's inequality to de-correlate excursions around $l$. Here we present a purely combinatorial argument that does not rely on such strong probabilistic tools.

With Theorem 1.1 we can consider the problem of covering a nearest neighbor path in $\mathbb{Z}^d$. For any integer $N \ge 1$, let $\partial B_1(0, N)$ be the boundary of the $L_1$ ball of radius $N$ in $\mathbb{Z}^d$. We say that a nearest neighbor path $\mathcal{P} = (P_0, P_1, \cdots, P_K)$ connects 0 and $\partial B_1(0, N)$ if $P_0 = 0$ and $\inf\{n : \|P_n\|_1 = N\} = K$. And we say that a path $\mathcal{P}$ is covered by the first $L$ steps of $\{X_n\}_{n=0}^{\infty}$ if $\mathrm{Trace}(\mathcal{P}) \subseteq \mathrm{Trace}(X_0, X_1, \cdots, X_L)$.
Then we are able to use Theorem 1.1 to show that the covering probability of any such path can be bounded by that of the diagonal. Let $\overline{\mathcal{P}} = (\mathrm{arc}_1, \mathrm{arc}_2, \cdots)$ be the staircase path spiraling around the $d$-dimensional diagonal, where
$$\mathrm{arc}_1[0 : d-1] = \Big(0,\ e_1,\ e_1 + e_2,\ \cdots,\ \sum_{i=1}^{d-1} e_i\Big) \quad \text{and} \quad \mathrm{arc}_k = (k-1)\sum_{i=1}^{d} e_i + \mathrm{arc}_1.$$
Since each step of $\overline{\mathcal{P}}$ increases the $L_1$ norm by one, its first $N + 1$ vertices form a path connecting 0 and $\partial B_1(0, N)$.

Remark 1.4. It can be useful to note that $\mathrm{arc}_1[0 : d-1]$ forms a nearest neighbor path in $\mathbb{Z}^d$ from $(0, 0, \cdots, 0)$ to $(1, 1, \cdots, 1, 0)$ taking exactly $d - 1$ steps, and that $\mathrm{arc}_k[0 : d-1]$ is $\mathrm{arc}_1[0 : d-1]$ shifted by $(k-1)\sum_{i=1}^{d} e_i$. One may also note that the $\mathrm{arc}_k[0 : d-1]$'s are connected and together form a nearest neighbor spiral around the diagonal.

Theorem 1.5. For all integers $L \ge N \ge 1$, let $\mathcal{P}$ be any nearest neighbor path in $\mathbb{Z}^d$ connecting 0 and $\partial B_1(0, N)$, and let $X_n$, $n \ge 0$, be a $d$-dimensional simple random walk starting at 0. Then
$$P\big(\mathrm{Trace}(\mathcal{P}) \subseteq \mathrm{Trace}(X_0, \cdots, X_L)\big) \le P\big(\mathrm{Trace}(\overline{\mathcal{P}}) \subseteq \mathrm{Trace}(X_0, \cdots, X_L)\big).$$
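As a concrete illustration of the staircase path defined above, the following sketch (ours, 0-indexed) generates the vertices of its first few arcs; for $d = 3$ it produces $(0,0,0), (1,0,0), (1,1,0), (1,1,1), (2,1,1), (2,2,1), \cdots$

```python
def staircase_path(d, n_arcs):
    # Vertices of arc_1, ..., arc_{n_arcs}: arc_k starts at (k-1)(e_1+...+e_d)
    # and then adds e_1, e_2, ..., e_{d-1} one step at a time.
    path = []
    for k in range(n_arcs):
        point = [k] * d
        path.append(tuple(point))
        for i in range(d - 1):
            point[i] += 1
            path.append(tuple(point))
    return path

assert staircase_path(3, 2) == [(0, 0, 0), (1, 0, 0), (1, 1, 0),
                                (1, 1, 1), (2, 1, 1), (2, 2, 1)]
```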
The following main theorem gives an upper bound on the covering probability over all nearest neighbor paths connecting 0 and $\partial B_1(0, N)$.

Theorem 1.6. Let $d \ge 4$ and let $\{X_n\}_{n=0}^{\infty}$ be a $d$-dimensional simple random walk starting at 0. Then there is a $P_d \in (0, 1)$ such that for any nearest neighbor path $\mathcal{P} = (P_0, P_1, \cdots, P_K)$ connecting 0 and $\partial B_1(0, N)$, we always have
$$P\big(\mathrm{Trace}(\mathcal{P}) \subseteq \mathrm{Trace}(\{X_n\}_{n=0}^{\infty})\big) \le P_d^{\lfloor N/d \rfloor}.$$
Here $P_d$ is equal to the probability that $\{X_n\}_{n=0}^{\infty}$ ever returns to the $d$-dimensional diagonal line.
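To get a feel for $P_d$, it can be estimated by Monte Carlo (a hedged sketch of ours, not from the paper; truncating at `n_max` steps biases the estimate of this transient return probability slightly downward):

```python
import random

def estimate_Pd(d, n_max=2_000, trials=2_000, seed=0):
    # Fraction of walks from 0 that revisit the diagonal x_1 = ... = x_d
    # within n_max steps; a lower estimate of P_d for transient d >= 4.
    rng, hits = random.Random(seed), 0
    for _ in range(trials):
        x = [0] * d
        for _ in range(n_max):
            i = rng.randrange(d)
            x[i] += rng.choice((-1, 1))
            if all(c == x[0] for c in x):   # back on the diagonal
                hits += 1
                break
    return hits / trials

# Appendix A shows 2 d P_d -> 1, so estimate_Pd(d) should be near 1/(2d).
```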
Note that the upper bound in Theorem 1.6 is not very sharp, since we only look at returning to the exact diagonal line $\lfloor N/d \rfloor$ times, which accounts for at most $1/d$ of the total points of $\overline{\mathcal{P}}$. For any fixed $d$ we still get exponential decay in $N$, but as $d \to \infty$ the base of this exponential decay, which is bounded below by $\left(\frac{1}{2d}\right)^{1/d}$, tends to one. Fortunately, in Appendix A we are able to show that $\lim_{d \to \infty} 2dP_d = 1$, and then further find an upper bound on the asymptotics of the probability that a $d$-dimensional simple random walk starting from some point of $\mathrm{Trace}(\overline{\mathcal{P}})$ will ever return to $\mathrm{Trace}(\overline{\mathcal{P}})$. Note that the walk now needs to return at least $N$ times in order to cover all the points of $\overline{\mathcal{P}}$. We state this result as an additional theorem, which is stronger than Theorem 1.6.
However, the proof of Theorem 1.7 is much more elaborate and is deferred to the Appendix.

Theorem 1.7.
There is a $C \in (0, \infty)$ such that for any $d \ge 4$, any nearest neighbor path $\mathcal{P} = (P_0, P_1, \cdots, P_K) \subset \mathbb{Z}^d$ connecting 0 and $\partial B_1(0, N)$, and any $d$-dimensional simple random walk $\{X_n\}_{n=0}^{\infty}$ starting at 0, we always have
$$P\big(\mathrm{Trace}(\mathcal{P}) \subseteq \mathrm{Trace}(\{X_n\}_{n=0}^{\infty})\big) \le \left(\frac{C}{d}\right)^{N}.$$
The proof of Theorem 1.7 can be found at the end of Section A.1.

Remark 1.8.
Actually, any $C > 3/2$ will serve as a valid upper bound for sufficiently large $d$. See Remark A.5 in Appendix A for details.
Remark 1.9. Note that we do not present a proof of Theorem 1.7 for $d = 3$. With Theorem 1.5 at hand, it is possible to prove some upper bounds by considering returns to an infinite transient subset of the diagonal. However, this yields non-sharp bounds and requires extra techniques. We consider this case in [13].
Remark 1.10. Note that the probability to cover a space filling curve in $B_1(0, N)$ decays asymptotically slower than $c^{N^d}$. Sznitman [15, Section 2] showed that the probability that a random walk path covers $B_1(0, N)$ completely can be bounded below by $ce^{-cN^{d-1}\log N}$.
A natural generalization of Theorem 1.7 is to try applying the same reflection process of this paper while also taking the repetition of visits into account, rather than just looking at the trace of the path. In other words, consider the probability that the random walk's local time along a certain path is larger than a sequence of given values. Note that the event that the random walk covers a path is equivalent to the event that the random walk's local time at every point of this path is $\ge 1$. However, it turns out that once we consider local times, the diagonal line (with repetitions) no longer maximizes the covering probability. See Section 6 for details.
For the minimizer of covering probabilities over the family of monotone nearest neighbor paths starting at 0, we conjecture that the covering probability is minimized when the path goes straight along a coordinate axis. I.e.,

Conjecture 1.11. For all integers $L \ge N \ge 1$, let $\mathcal{P}$ be any monotone nearest neighbor path in $\mathbb{Z}^d$ with length $N$, and let $X_n$, $n \ge 0$, be a $d$-dimensional simple random walk starting at 0. Then
$$P\big(\mathrm{Trace}(\mathcal{P}) \subseteq \mathrm{Trace}(X_0, \cdots, X_L)\big) \ge P\big(\mathrm{Trace}(\vec{\mathcal{P}}) \subseteq \mathrm{Trace}(X_0, \cdots, X_L)\big),$$
where $\vec{\mathcal{P}} = \big((0, 0, \cdots, 0), (1, 0, \cdots, 0), \cdots, (N-1, 0, \cdots, 0)\big)$.

Remark 1.12. Note that the constants we get in Theorem 1.6 are not sharp. In fact, the upper bound we obtain for the covering of the diagonal path is of order $\left(\frac{1}{2d}\right)^N$. If we use the same argument as in Theorem 1.6 for the straight line, we get a bound of $\left[\frac{1}{2(d-1)}\right]^N$, since a return to the straight line is equivalent to a $(d-1)$-dimensional random walk returning to the origin. Thus the bound we obtain is larger for the path that we conjecture minimizes the cover probability.
The structure of this paper is as follows: in Section 2 we prove a combinatorial inequality, which will later be seen to be equivalent to finding a one-to-one mapping between nearest neighbor trajectories. In Section 3 we use this combinatorial inequality to prove Theorem 1.1. With Theorem 1.1, we construct in Section 4 a finite sequence of paths with non-decreasing covering probabilities to show that the covering probability is maximized by the path that goes along the diagonal; see Theorems 1.5 and 4.1. The proof of Theorem 1.6 is completed in Section 5, while in Section 6 we discuss the two conjectures and show numerical simulations. In Appendix A we prove that $\lim_{d \to \infty} 2dP_d = 1$ and then show that the probability that a simple random walk returns to $\overline{\mathcal{P}}$ also has an upper bound of $O(d^{-1})$, which implies Theorem 1.7. In Appendix B we prove that the monotonicity fails when considering covering probabilities with repetitions.

Combinatorial inequalities
In this section, we discuss a combinatorial inequality, which will later be seen to be equivalent to finding a one-to-one mapping between nearest neighbor trajectories. For any $m \in \mathbb{Z}_+$, consider a collection of arcs, i.e., a "vector" of subsets $\vec{V} = (V_1, \cdots, V_m)$ of a finite ground set $\Omega$, where each $V_k$ is called an arc, and an $m$-dimensional vector $D = (\delta_1, \cdots, \delta_m) \in \{-1, 1\}^m$, which is called a configuration. Then we can introduce the inner product
$$D \cdot \vec{V} = \bigcup_{k=1}^{m} \delta_k V_k, \quad \text{where } \delta_k V_k := \{\delta_k x : x \in V_k\}.$$
Moreover, for any subset $A \subseteq \Omega$, we denote by $-A^c \cup A \subset -\Omega \cup \Omega$ the reflection induced by $A$; i.e., the reflection induced by $A$ keeps $A$ and reflects the rest of $\Omega$ to the negative copy. We say a configuration $D$ of $\vec{V}$ covers the reflection $A$ if $-A^c \cup A \subseteq D \cdot \vec{V}$, and we let $C(\vec{V}, A)$ be the subset of all such configurations.
In the simple random walk covering problem, we wish to prove that the covering probability of a set is higher if it resides above some line than if some subsets of it are reflected below the line. The arcs will stand for a random walk path's excursions around a given line, and $D \cdot \vec{V}$ will specify which excursions are reflected. The next lemma concludes that there are more ways to reflect the random walk excursions so as to cover a set if none of its subsets is reflected.
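The following brute-force sketch (ours) records one natural reading of the definitions above: a configuration $D$ keeps $V_k$ on the positive side when $\delta_k = 1$ and sends it to the mirror copy when $\delta_k = -1$, and $D$ covers the reflection induced by $A$ when the kept arcs cover $A$ and the reflected arcs cover $A^c$. The final check illustrates, on a toy instance, the inequality $|C(\vec{V}, \Omega)| \ge |C(\vec{V}, A)|$ that we read Lemma 2.1 as asserting.

```python
from itertools import combinations, product

def covering_configs(arcs, A, Omega):
    # Configurations D in {-1, 1}^m covering the reflection induced by A:
    # kept arcs must cover A, reflected arcs must cover Omega \ A.
    Ac, out = Omega - A, []
    for D in product((1, -1), repeat=len(arcs)):
        kept = set().union(*[V for s, V in zip(D, arcs) if s == 1])
        refl = set().union(*[V for s, V in zip(D, arcs) if s == -1])
        if A <= kept and Ac <= refl:
            out.append(D)
    return out

def subsets(S):
    S = list(S)
    return (set(c) for r in range(len(S) + 1) for c in combinations(S, r))

Omega = {1, 2, 3}
arcs = [{1, 2}, {2, 3}, {1, 3}]
assert all(len(covering_configs(arcs, Omega, Omega))
           >= len(covering_configs(arcs, A, Omega)) for A in subsets(Omega))
```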
Before proving the lemma we set some notation. For any $n$ and $m = m_0 + 1$, we can separate the last arc $V_{m_0+1}$ from the rest of the arcs and look at the system truncated at $m_0$, i.e., $\vec{V}[1 : m_0] = (V_1, \cdots, V_{m_0})$ and $D[1 : m_0] = (\delta_1, \cdots, \delta_{m_0})$.
We have for any $A$

Noting that

and that the two sets in (2.3) are disjoint,

In order to study the cardinality of $P_m(\vec{V}, A)$, we first show that

where $n_e(\vec{V})$ is the number of empty sets among $V_1, \cdots, V_m$. We argue by induction: suppose the desired inequality is true for all $n < n_0$, and for $n = n_0$, $m \le m_0$. Then for $n = n_0$, $m = m_0 + 1$, by Lemma 2.2 there is a one-to-one mapping between each configuration in $P_m(\vec{V}, A)$ and its first $m_0$ coordinates. Note that

One may also write

Moreover, we consider a new ambient environment $\Omega' = V_{m_0+1}^c$. Within $\Omega'$, one may consider the arcs $V'_k = V_k \cap V_{m_0+1}^c \subseteq \Omega'$ for each $k$, and

Moreover, let $A' = A \cap V_{m_0+1}^c \subset \Omega'$. Then we can similarly define

We claim that (2.6) holds. In other words, in order to be in one of the three disjoint subsets above, we must guarantee that all points of $\Omega' = V_{m_0+1}^c$ under the reflection induced by $A$ are covered by the configuration $D$ of $\vec{V}[1 : m_0]$. To verify (2.6), one can note that for any

(2.7)

In (2.7) the right hand side equals

and the left hand side equals

Noting that for each $k$

which shows that $D$ is also in $C(\vec{V}', A')$ and thus verifies (2.6).

Specifically, when $A = \Omega$, note that for any $D \in C(\vec{V}, \Omega)$,

Combining (2.4)-(2.8) and the induction hypothesis, we have

and thus the proof of Lemma 2.1 is complete.

Proof of Theorem 1.1
With the combinatorial inequality above, we can study the covering probabilities of simple random walks. Let $N_L$ be the set of all nearest neighbor paths starting at 0 of length $L + 1$, and consider two subsets of $N_L$: the set $N_{L,1}$ of paths whose trace covers $A_0 \cup B_0$, and the set $N_{L,2}$ of paths whose trace covers $A_0 \cup \varphi_l(B_0)$. For the simple random walk $\{X_n\}_{n=0}^{\infty}$ starting at 0, it is easy to see that for each $x = (x_0, x_1, \cdots, x_L) \in N_L$,
$$P(X_n = x_n,\ n = 1, 2, \cdots, L) = \left(\frac{1}{2d}\right)^L.$$
Thus, in order to prove Theorem 1.1, it suffices to show that
$$|N_{L,1}| \ge |N_{L,2}|. \quad (3.1)$$
To prove (3.1), we first need to partition $N_L$ into disjoint subsets, each of which serves as an equivalence class under the equivalence relation on $N_L$ described below.
For each $x = (x_0, x_1, \cdots, x_L) \in N_L$, let $T_0 = 0$, $T_1 = \inf\{n : x_n \in l\}$, and $T_n = \inf\{m > T_{n-1} : x_m \in l\}$ for each integer $n \in [2, L]$, the time of the $n$th visit to $l$. Here we use the convention that $\inf\{\emptyset\} = \infty$, and we let $T_{L+1} = \infty$. Then for each $n = 0, 1, \cdots, L$, define the $n$th arc of $x$ as $\mathrm{arc}(x, n) = x[T_n : T_{n+1}]$, and for each $D = (\delta_1, \cdots, \delta_L) \in \{-1, 1\}^L$ define the reflected path $\varphi_{l,D}(x)$ arcwise. In words, we keep the part of the path unchanged until it first visits $l$; then, for the $n$th arc, we keep it unchanged if $\delta_n = 1$ and reflect it around $l$ if $\delta_n = -1$. By definition, it is easy to see that $\varphi_{l,D}(x) \in N_L$, and since $\varphi_{l,D} \circ \varphi_{l,D}(x) \equiv x$, $\varphi_{l,D}$ forms a bijection on $N_L$. Now we can introduce the equivalence relation on $N_L$ mentioned above. For $x, y \in N_L$, we say $x \sim y$ if there exists a $D \in \{-1, 1\}^L$ such that $\varphi_{l,D}(x) = y$ (see Figure 2).
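In code, $\varphi_{l,D}$ reads as follows (a sketch of ours, following the verbal description: the part before the first visit to $l$ is kept, and the $n$th arc after that is reflected iff its entry of $D$ is $-1$):

```python
def phi_lD(path, D, l0):
    # Apply phi_{l,D} to a 2-d nearest neighbor path: cut at the visits
    # to l : x = y + l0 and reflect the n-th arc iff D[n] == -1.
    # D needs one entry per visit of the path to l.
    out, arc = [path[0]], -1          # arc == -1: before the first visit
    for prev, cur in zip(path, path[1:]):
        if prev[0] == prev[1] + l0:   # each visit to l starts a new arc
            arc += 1
        if arc >= 0 and D[arc] == -1:
            x, y = cur
            out.append((l0 + y, x - l0))
        else:
            out.append(cur)
    return out

# Points of l are fixed by the reflection, so the image is again a nearest
# neighbor path, and applying phi_{l,D} twice returns the original path.
```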
Combining (3.2)-(3.4), we have that $\sim$ forms an equivalence relation on $N_L$, where each path in $N_L$ belongs to exactly one equivalence class. Thus the equivalence classes are disjoint from each other, there are finitely many of them, and they form a partition of $N_L$.
We denote these equivalence classes by $C_{L,1}, \cdots, C_{L,J}$, where each of them can be represented by its specific element $x^{k,+}$, $k = 1, \cdots, J$, which is the unique path in each class that always stays on the left of $l$. Then for each $k$, let $n_{k,1} < n_{k,2} < \cdots < n_{k,m_k}$ be all the $n$'s such that $|\mathrm{Trace}(\mathrm{arc}(x^{k,+}, n))| > 1$.
Note that the only case in which $|\mathrm{Trace}(\mathrm{arc}(x^{k,+}, n))| = 0$ is when $T_n = T_{n+1} = \infty$, and the only case in which it equals 1 is when $T_n = L$. Then for any $x \in C_{L,k}$ and any $D_1, D_2$ such that $\varphi_{l,D_1}(x^{k,+}) = \varphi_{l,D_2}(x^{k,+}) = x$, we must have $\delta_{1,n_{k,i}} = \delta_{2,n_{k,i}}$ for all $i$. So we have a well defined onto mapping $f$ between $C_{L,k}$ and $\{-1, 1\}^{m_k}$, where each $x$ such that $\varphi_{l,D}(x^{k,+}) = x$ for some $D$ is mapped to $f(x) = (\delta_{n_{k,1}}, \delta_{n_{k,2}}, \cdots, \delta_{n_{k,m_k}})$.
Moreover, for any two configurations $D_1$ and $D_2$ such that $\delta_{1,n_{k,i}} = \delta_{2,n_{k,i}}$ for all $i$, we must also have $\varphi_{l,D_1}(x^{k,+}) = \varphi_{l,D_2}(x^{k,+})$. The reason is that for all other $n$ we have $|\mathrm{Trace}(\mathrm{arc}(x^{k,+}, n))| \le 1$, which means those arcs are either empty or consist of a single point $x_{T_n}$ lying right on the line $l$, which does not change at all under any possible reflection. Thus we have proved that the mapping $f$ is a bijection between $C_{L,k}$ and $\{-1, 1\}^{m_k}$.
At this point we have the tools we need and can go back to comparing the two covering probabilities. Noting that the equivalence classes in (3.5) form a partition of $N_L$, it suffices to show that for each $k \le J$,

for each class $C_{L,k}$. First, if

then one can immediately see
Now let $n_k = |\Omega_k|$. We can also list all points of $\Omega_k$ as $\omega_1, \cdots, \omega_{n_k}$ and all points of $\varphi_l(\Omega_k)$ as $\omega_{-1}, \cdots, \omega_{-n_k}$, where $\varphi_l(\omega_j) = \omega_{-j}$ for all $j$. Then it is easy to check that

since all other points of $A_0 \cup B_0$ are guaranteed to be visited by $x^{k,+}$. Then by the constructions above we have, for any

Then, taking the intersections, we let $\bar{\Omega}_k$, $\vec{V}_k$ and $\bar{A}_k$ denote the resulting sets.
Since the mapping $f$ is a bijection between $C_{L,k}$ and $\{-1, 1\}^{m_k}$, we may apply Lemma 2.1 to $\bar{\Omega}_k$, $\vec{V}_k$ and $\bar{A}_k$. The proof of this theorem is complete.
Remark 3.1. With exactly the same argument, we also have the same reflection theorem for reflections over a line $y = x + n$, $n \ge 1$, or $x = -y + n$.

Path maximizing covering probability
We first consider the simpler (but essentially important) case $d = 2$. We first outline the idea of the proof as follows:

• To apply Theorem 1.1 to covering the trace of a nearest neighbor path $\mathcal{P} \subset \mathbb{Z}^2$ connecting 0 and $\partial B_1(0, N)$, we can assume without loss of generality that the last point of this path, $P_K \in \partial B_1(0, N)$, is in the first quadrant.

• For each such path with at least one point $(x, y)$ that is not a "neighbor" of the diagonal, i.e. $|x - y| \ge 2$, we can always reflect it as follows: (1) Let $l : x = y + 1$ or $y = x + 1$ be the line of reflection; $l$ divides $\mathbb{Z}^2$ into two parts. (2) Let $A_0$ be the collection of points of our path that are in the same half as the point 0, and let the remaining points of our path be $\varphi_l(B_0)$. Then by Theorem 1.1, one may replace $\varphi_l(B_0)$ with $B_0$ without ever decreasing the probability of covering.

• Then note that, after the reflection, $A_0 \cup B_0$ is the trace of another nearest neighbor path, and we reduce the total difference $\sum_i |x_i - y_i|$ (computed in the sketch following the proof of Lemma 4.2 below) by at least one in each step. After a finite number of steps, we end up with a nearest neighbor path that stays within $\{|x - y| \le 1\}$.

• Finally, among all such paths of distance no more than one from the diagonal, applying Theorem 1.1 with reflection over $x = y$, we can show that the covering probability is maximized when we move all the "one step corners" to the same side of the diagonal, which yields a monotone path that stays within distance one above or below the diagonal. Thus we have the following theorem.
Theorem 4.1. For all integers $L \ge N \ge 1$, let $\mathcal{P}$ be any nearest neighbor path in $\mathbb{Z}^2$ connecting 0 and $\partial B_1(0, N)$, and let $X_n$, $n \ge 0$, be a 2-dimensional simple random walk starting at 0. Then
$$P\big(\mathrm{Trace}(\mathcal{P}) \subseteq \mathrm{Trace}(X_0, \cdots, X_L)\big) \le P\big(\mathrm{Trace}(\overline{\mathcal{P}}) \subseteq \mathrm{Trace}(X_0, \cdots, X_L)\big).$$

Proof. As outlined above, we first show the following lemma:

Lemma 4.2. Let $\mathcal{P} = \big((x_0, y_0), \cdots, (x_K, y_K)\big)$ be any nearest neighbor path in $\mathbb{Z}^2$ connecting 0 and $\partial B_1(0, N)$ with length $K + 1 \ge N + 1$, such that there is an $i \le K$ with $|x_i - y_i| \ge 2$, and such that $x_K \ge 0$, $y_K \ge 0$. Let $X_n$, $n \ge 0$, be a 2-dimensional simple random walk starting at 0. Then there exists a nearest neighbor path $\mathcal{P}_1$ staying within $\{|x - y| \le 1\}$ such that
$$P\big(\mathrm{Trace}(\mathcal{P}) \subseteq \mathrm{Trace}(X_0, \cdots, X_L)\big) \le P\big(\mathrm{Trace}(\mathcal{P}_1) \subseteq \mathrm{Trace}(X_0, \cdots, X_L)\big).$$
Proof. For any path $Q = \big((x_0, y_0), \cdots, (x_K, y_K)\big)$ with length $K + 1$, define its total difference as $D_T(Q) = \sum_{i=0}^{K} |x_i - y_i|$. For each path as in this lemma, we can assume without loss of generality that there is an $i$ such that $x_i - y_i \ge 2$; otherwise, by definition one must have an $i$ such that $y_i - x_i \ge 2$, and applying the reflection over $x = y$ brings us back to the first case. Consider the line of reflection $l : x = y + 1$ (otherwise consider $l : y = x + 1$). It is easy to see that there is at least one point along this path on the right side of $l$. Then, noting that $\mathcal{P}$ is a nearest neighbor path starting at 0 with length $K + 1$, let $C_{K,i}$ be the equivalence class it belongs to under the relation $\sim$, and let $\mathcal{P}' = x^{i,+}$ be the representing element of $C_{K,i}$, in which all arcs are reflected to the left of $l$. Then it is easy to see that

Then note that for any $j$ such that $x_j - y_j \ge 2$,

Thus the new nearest neighbor path $\mathcal{P}'$ is also one connecting 0 and $\partial B_1(0, N)$. Moreover,

If there is still a point $(x_j, y_j)$ in the new path $\mathcal{P}'$ with $|x_j - y_j| \ge 2$, we can repeat the previous process, and the covering probability is non-decreasing. Noting that each time we decrease the total difference by at least 2 while $D_T(\mathcal{P})$ is a finite number, after repeating a finite number of times we must end up with a nearest neighbor path in which no point satisfies $|x - y| \ge 2$. Thus, we find a nearest neighbor path $\mathcal{P}_1$ connecting 0 and $\partial B_1(0, N)$, staying within $\{|x - y| \le 1\}$, with a covering probability at least as high.
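For reference, the total difference is straightforward to compute (a sketch of ours, using the definition above):

```python
def total_difference(path):
    # D_T of a 2-d path: the sum of |x - y| over its vertices; each
    # reflection step in the proof above decreases it by at least 2.
    return sum(abs(x - y) for x, y in path)

# A path pinned to the diagonal band {|x - y| <= 1} of length K + 1 has
# total difference at most K + 1, the minimum possible growth rate.
assert total_difference([(0, 0), (1, 0), (1, 1), (2, 1)]) == 3
```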
With Lemma 4.2 at hand, note that for any nearest neighbor path connecting 0 and $\partial B_1(0, N)$ and staying within $\{|x - y| \le 1\}$, we can always look at the part of it after its last visit to 0, which has a covering probability at least as high. And note that this part has to contain a self-avoiding path from 0 to $\partial B_1(0, N)$.
Thus we have the following lemma, whose proof is elementary and is omitted here:

and that $(j - 1, j)$ or $(j, j - 1) \in \mathrm{Trace}(\mathcal{P}_1)$ for all $j = 1, 2, \cdots, N_0$. This lemma guarantees that such a self-avoiding path has to be monotone as well.
Otherwise, suppose the path contains a decreasing edge, say $(j, j) \to (j, j - 1)$. Then the vertex $(j, j)$ has to be visited more than once, which contradicts the self-avoiding condition. Thus it is sufficient to show that $\overline{\mathcal{P}}$ has the highest covering probability over all monotone nearest neighbor paths $\mathcal{P}_1$ connecting 0 and $\partial B_1(0, N)$ that stay within $\{|x - y| \le 1\}$.
Then we have $\bar{A} \cup \bar{B} = \bar{\Omega}$, so that

And note that
which itself gives a monotone nearest neighbor path connecting 0 and $\partial B_1(0, N)$. So Theorem 1.1 finishes the proof.
For fixed $N$, the inequality in Theorem 4.1 becomes an equality when $L = \infty$, since the 2-dimensional simple random walk is recurrent and both probabilities equal one. However, we can easily generalize the same result to higher dimensions. This will similarly give us Theorem 1.5.
Proof of Theorem 1.5. This theorem can be proved by reflecting only two coordinates in $\mathbb{Z}^d$ while keeping all the others unchanged. For any $l_0 \ge 0$, we look at, without loss of generality, the subspace $l : a_1 = a_2 + l_0$ when $d \ge 3$, and define the reflection $\varphi_l$ over $l$ as follows: for each point $(a_1, \cdots, a_d) \in \mathbb{Z}^d$,
$$\varphi_l(a_1, \cdots, a_d) = (a_2 + l_0,\ a_1 - l_0,\ a_3, \cdots, a_d).$$
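In code, this coordinate-pair reflection reads (our sketch):

```python
def phi_l_d(a, l0):
    # Reflect across the subspace a_1 = a_2 + l0 in Z^d: the first two
    # coordinates are swapped (with shift l0), the rest are unchanged.
    a = list(a)
    a[0], a[1] = a[1] + l0, a[0] - l0
    return tuple(a)

assert phi_l_d((3, 0, 5), 1) == (1, 2, 5)              # (a1, a2) -> (a2+1, a1-1)
assert phi_l_d(phi_l_d((3, 0, 5), 1), 1) == (3, 0, 5)  # involution
```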
Then for all paths in $N_L$ (all nearest neighbor paths starting at 0 of length $L + 1$), we can again define $T_0 = 0$, $T_1 = \inf\{n : x_n \in l\}$, and $T_n = \inf\{m > T_{n-1} : x_m \in l\}$ for each integer $n \in [2, L]$, the time of the $n$th visit to the subspace $l$, and divide $N_L$ into a partition of equivalence classes under $\varphi_{l,D}$ for all $D \in \{-1, 1\}^L$. Then for each pair of disjoint finite sets

For each equivalence class $C_{L,k}$ as above, the exact same argument as in the proof of Theorem 1.1 guarantees that
Then apply (4.6) to any nearest neighbor path $\mathcal{P} = (P_0, P_1, P_2, \cdots, P_K)$ connecting 0 and $\partial B_1(0, N)$, where $K \ge N$. Without loss of generality we can also assume that $P_K \in (\mathbb{Z}_+ \cup \{0\})^d$. Let the subspace of reflection be $l : a_1 = a_2 + 1$, and without loss of generality assume $B_0$ is not empty. Note that $\mathrm{Trace}(\mathcal{P}) = A_0 \cup B_0$ and that, similarly to the proof of Theorem 4.1, we can again let $\mathcal{P}'$ be the representing element of the equivalence class under $\sim$ that contains $\mathcal{P}$, which is another nearest neighbor path connecting 0 and $\partial B_1(0, N)$ in which all the arcs are reflected to the same side of $l$ as 0. Then $\mathrm{Trace}(\mathcal{P}') = A_0 \cup B'_0$. Moreover, define
$$D_T(\mathcal{P}) = \sum_{n} \sum_{i < j \le d} |p_{n,i} - p_{n,j}|$$
to be the total difference of $\mathcal{P}$. For each $n$ such that $P_n = (p_{n,1}, \cdots, p_{n,d}) \in A_0$, the corresponding term is unchanged. Otherwise, we must have $P_n \in B_0$, and there must always be a $P'_n = \varphi_l(P_n) \in B'_0$, which implies that
$$\sum_{i < j \le d} |p'_{n,i} - p'_{n,j}| = |p'_{n,1} - p'_{n,2}| + f_n(p'_{n,1}) + f_n(p'_{n,2}) + \sum_{3 \le i < j \le d} |p_{n,i} - p_{n,j}|,$$
where $f_n(p) := \sum_{j=3}^{d} |p - p_{n,j}|$.
And since $P_n \in B_0$, $p_{n,1} \ge p_{n,2} + 2$, so that for $p'_{n,1} = p_{n,2} + 1$ and $p'_{n,2} = p_{n,1} - 1$ we must have
$$\max\{p'_{n,1}, p'_{n,2}\} < \max\{p_{n,1}, p_{n,2}\}, \quad \min\{p'_{n,1}, p'_{n,2}\} > \min\{p_{n,1}, p_{n,2}\}, \quad (4.8)$$
which implies that $|p'_{n,1} - p'_{n,2}| < |p_{n,1} - p_{n,2}|$. Then, combining (4.8) and the fact that $p'_{n,1} + p'_{n,2} = p_{n,1} + p_{n,2}$ with the convexity of $f_n(p)$, we also have $f_n(p'_{n,1}) + f_n(p'_{n,2}) \le f_n(p_{n,1}) + f_n(p_{n,2})$, which further implies that $D_T(\mathcal{P}) \ge D_T(\mathcal{P}') + 1$. Again, noting that $D_T(\mathcal{P})$ itself is finite, after at most a finite number of iterations we will end up with a path $\mathcal{P}_1$ connecting 0 and $\partial B_1(0, N)$ within the region $R$ of points whose coordinates pairwise differ by at most 1. At the same time, note that we have assumed $P_K \in (\mathbb{Z}_+ \cup \{0\})^d$. So by (4.8) (and its parallel versions for other pairs of coordinates), the end point of $\mathcal{P}_1$ remains in $(\mathbb{Z}_+ \cup \{0\})^d$ and thus has the same $L_1$ norm as $P_K$, which is $N$. Thus, $\mathcal{P}_1$ remains a path connecting 0 and $\partial B_1(0, N)$. Moreover, it is easy to see that for each point $a_0 = (a_{1,0}, a_{2,0}, \cdots, a_{d,0})$ in the region $R$ and each subspace $l : a_i = a_j$, $a'_0 = \varphi_l(a_0)$ must satisfy (4.9). Similarly to the argument in the proof of Theorem 4.1, one may apply the reflection over $a_2 = a_1$ towards 0, which reflects points in $\{a_2 = a_1 + 1\}$ to $\{a_1 = a_2 + 1\}$. Then similar reflections can be applied over $a_3 = a_1, \cdots$, and $a_d = a_1$. We obtain a sequence of paths $\mathcal{P}_{2,i}$, $i = 2, \cdots, d$, in $R$ with covering probabilities that never decrease. Moreover, by the definition of our reflections, for each $n \le K$ and $2 \le j \le d$, let $p_{2,i,n,j}$ be the $j$th coordinate of the $n$th vertex of $\mathcal{P}_{2,i}$. We have that $\{p_{2,i,n,1}\}_{i=2}^{d}$ is nondecreasing while $\{p_{2,i,n,j}\}_{i=2}^{d}$ is nonincreasing, and that $p_{2,j,n,1} \ge p_{2,j,n,j}$ for all $2 \le j \le d$.
Thus for $\mathcal{P}_2 = \mathcal{P}_{2,d}$, we must have
$$p_{2,n,1} \ge \max_{2 \le j \le d} p_{2,n,j} \quad (4.10)$$
for all $n \le K$. Then we reflect $\mathcal{P}_2$ over $a_3 = a_2$, $a_4 = a_2, \cdots$, and $a_d = a_2$, which also gives us a sequence of paths $\mathcal{P}_{3,i}$, $i = 3, \cdots, d$, in $R$ with covering probabilities that never decrease. Letting $\mathcal{P}_3 = \mathcal{P}_{3,d}$, we similarly must have $p_{3,n,2} \ge \max_{3 \le j \le d} p_{3,n,j}$. Moreover, recalling the formulas for reflections within $R$ in (4.9), all reflections over $a_i = a_2$, $i \ge 3$, will not change $\max_{2 \le j \le d} p_{2,n,j}$ for any $n$. Thus (4.10) still holds for $\mathcal{P}_3$. Repeating this process, we obtain a sequence $\mathcal{P}_4, \mathcal{P}_5, \cdots, \mathcal{P}_d$ with covering probabilities that never decrease, where each of them stays within $R$. And finally, for $\mathcal{P}_d$, we must have $p_{d,n,i} \ge \max_{i < j \le d} p_{d,n,j}$ for all $n \le K$ and all $i$.

Proof of Theorem 1.6

With Theorem 1.5, the proof of Theorem 1.6 follows immediately from the fact that the simple random walk on $\mathbb{Z}^d$, $d \ge 4$, returns to the one dimensional line $x_1 = x_2 = \cdots = x_d$ with probability less than 1. Note that for any nearest neighbor path $\mathcal{P} = (P_0, P_1, \cdots, P_N)$ connecting 0 and $\partial B_1(0, N)$ and a $d$-dimensional simple random walk $\{X_n\}_{n=0}^{\infty}$, letting

be the points of $\overline{\mathcal{P}}$ on the diagonal, we always have, by Theorem 1.5,

Moreover, let $\{\tau_n\}_{n=1}^{\infty}$ be the sequence of stopping times of all visits to the diagonal line. To bound the probability on the right hand side of (5.1), define a new Markov process

Note that we can also write $\tau_n = \inf\{m > \tau_{n-1} : Y_m = 0\}$, and that $Y_n$ itself is a $(d-1)$-dimensional random walk with generator

And thus the proof of Theorem 1.6 is complete.

Discussions
In this section we discuss the conjectures and show numerical simulations.

Covering probabilities with repetitions
In the proof of Theorem 1.5, note that each time we apply Theorem 1.1 and get a new path $\mathcal{P}'$ with a covering probability at least as high, we always have

This, together with the fact that $A_0$ is disjoint from both $B_0$ and $B'_0$, implies that $|\mathrm{Trace}(\mathcal{P})| = |A_0| + |B_0| \ge |A_0| + |B'_0| = |\mathrm{Trace}(\mathcal{P}')|$.
In words, although the length of the path remains the same after reflection, the size of its trace may decrease. In fact, for any simple path connecting 0 and $\partial B_1(0, N)$, at the end of our sequence of reflections we will always end up with a (generally non-simple) path whose trace is of size $N + 1$.
One natural approach towards a sharper upper bound is to take the repetitions of visits in a non-simple path into consideration. For any path $\mathcal{P} = (P_0, P_1, \cdots, P_K)$ starting at 0, which may not be simple, and any point $P \in \mathrm{Trace}(\mathcal{P})$, we can define the first visit to $P$ as $T_{1,P} = \inf\{n : P_n = P\}$ and $T_{k,P} = \inf\{n > T_{k-1,P} : P_n = P\}$ as the $k$th visit, with the convention $\inf\{\emptyset\} = \infty$. Then we can define the repetition of $P \in \mathrm{Trace}(\mathcal{P})$ in the path $\mathcal{P}$ as
$$n_{\mathcal{P},P} = \sup\{k : T_{k,P} < \infty\} \quad (6.1)$$
and denote the collection of all such repetitions by $N_{\mathcal{P}} = \{n_{\mathcal{P},P} : P \in \mathrm{Trace}(\mathcal{P})\}$. It is easy to see that $n_{\mathcal{P},P} \equiv 1$ for all $P \in \mathrm{Trace}(\mathcal{P})$ when $\mathcal{P}$ is a simple path, and that $\sum_{P \in \mathrm{Trace}(\mathcal{P})} n_{\mathcal{P},P} = K + 1$.
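The repetitions $n_{\mathcal{P},P}$ are simply the visit counts of the path, as the following sketch (ours) computes:

```python
from collections import Counter

def repetitions(path):
    # n_{P, P'} for every P' in Trace(P): how often the path visits it.
    return Counter(map(tuple, path))

path = [(0, 0), (1, 0), (1, 1), (1, 0)]   # non-simple: revisits (1, 0)
reps = repetitions(path)
assert reps[(1, 0)] == 2
assert sum(reps.values()) == len(path)    # always equals K + 1
```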
For a $d$-dimensional simple random walk $\{X_n\}_{n=0}^{\infty}$ starting at 0 and any point $P \in \mathbb{Z}^d$, we can again define the stopping times $\tau_{0,P} = 0$, $\tau_{1,P} = \inf\{n : X_n = P\}$ and $\tau_{n,P} = \inf\{m > \tau_{n-1,P} : X_m = P\}$, and let $\xi(L, P) = \sup\{k : \tau_{k,P} \le L\}$ be the number of visits to $P$ up to time $L$. Then we have

Definition 6.1. For each nearest neighbor path $\mathcal{P}$ and $d$-dimensional simple random walk $\{X_n\}_{n=0}^{\infty}$, we say that $\{X_n\}_{n=0}^{L}$ covers $\mathcal{P}$ up to its repetitions if $\xi(L, P) \ge n_{\mathcal{P},P}$ for all $P \in \mathrm{Trace}(\mathcal{P})$.
And we denote this event by $\mathrm{Trace}(\mathcal{P}) \otimes N_{\mathcal{P}} \subseteq \{X_n\}_{n=0}^{L}$.
Our hope was that, for any nearest neighbor path $\mathcal{P}$ and subspace of reflection $l : x_i = x_j + l_0$, the probability of a simple random walk $\{X_n\}_{n=0}^{L}$ starting at 0 covering $\mathcal{P}$ up to its repetitions might be bounded above by that of covering the path $\mathcal{P}'$ up to its repetitions, where $\mathcal{P}'$ is the representing element of the equivalence class in $N_K$ containing $\mathcal{P}$ under the reflection $\varphi_l$. In words, $\mathcal{P}'$ is the path we get by reflecting all the arcs of $\mathcal{P}$ to the same side as 0.
Note that although $\mathcal{P}'$ may not be simple and the size of its trace could decrease, this will at the same time increase the repetitions at those points which are symmetric to the disappeared ones. In fact, under Definition 6.1, the total number of points our random walk needs to (re-)visit is always $\sum_{P \in \mathrm{Trace}(\mathcal{P}')} n_{\mathcal{P}',P} = K + 1$.
So if our previous guess were true, then we would be able to follow the same process as in Sections 3 and 4 and end with the same path along the diagonal, but this time with a higher probability of being covered up to its repetitions. Unfortunately, here we present a counterexample and numerical simulations showing that Theorems 4.1 and 1.5 no longer hold for certain non-monotone paths. The idea behind constructing those examples can be seen in the following preliminary model: let $l$ be the line of reflection and suppose we have one point $x$ on the same side of $l$ as 0. Suppose there is an equivalence class $C_{L,k}$ whose representative element $x^{k,+}$ has $2n$ arcs, each visiting $x$ once. We then compare the probability of covering $\{x, \varphi_l(x)\} \otimes (n, n)$ with that of its reflection $\{x\} \otimes 2n$. For the first one, we only need to choose $n$ of the $2n$ arcs of $x^{k,+}$ and reflect them to the other side while keeping the rest unchanged, so we have $\binom{2n}{n}$ configurations. However, for the second covering probability, which one might hope to be higher, the only configuration that can give us the covering up to this repetition is $x^{k,+}$ itself. Thus, at least in this equivalence class, the number of configurations covering $\{x, \varphi_l(x)\} \otimes (n, n)$ is larger than that of configurations covering $\{x\} \otimes 2n$.
With this idea in mind, we give the following counterexample on actual 3-dimensional paths, which shows precisely and rigorously that the covering probability is not increased after reflection. Let $\mathcal{P}'$ be the representative element of the equivalence class containing $\mathcal{P}$ under reflection over $l : x_2 = x_1$, and let $\{X_n\}_{n=0}^{\infty}$ be a simple random walk starting at 0. Moreover, we use the notation $A = \mathbb{Z}^3 \setminus \{y, z, w\}$ and define the stopping times $\tau_a = \inf\{n \ge 1 : X_n = a\}$ for all $a \in \mathbb{Z}^3$, and $\tau_A = \inf\{n \ge 1 : X_n \notin A\}$. Thus we have

Proposition 6.2. For the paths $\mathcal{P}$ and $\mathcal{P}'$ defined above,

The proof of Proposition 6.2 is basically a standard application of Green's functions for finite subsets, so we leave the detailed calculations to Appendix B. For anyone who believes in the law of large numbers, we recommend the following numerical simulation, which shows the empirical probabilities (which almost exactly agree with Proposition 6.2) of covering both paths, with half a million independent 3-dimensional simple random walks run up to $L = 40000$.
For a finite length $\{X_n\}_{n=0}^{L}$ with $L < \infty$, although it is harder to calculate the exact covering probabilities theoretically, the following simulations with $L = 4000$, $400$ and $40$ show that the inequality in Proposition 6.2 remains robust for fairly small $L$ (see Figure 3).
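The simulations above can be reproduced with a harness of the following shape (a sketch of ours; the specific paths of Proposition 6.2, built from the points $y, z, w$, are not restated here, and any short 3-dimensional path can be plugged in):

```python
import random
from collections import Counter

def covers_with_repetitions(path, L, rng):
    # Does the walk's local time reach n_{P, P'} at every P' (Def. 6.1)?
    need = Counter(map(tuple, path))
    x, got = [0, 0, 0], Counter({(0, 0, 0): 1})
    for _ in range(L):
        i = rng.randrange(3)
        x[i] += rng.choice((-1, 1))
        got[tuple(x)] += 1
    return all(got[p] >= k for p, k in need.items())

def estimate(path, L=40_000, trials=1_000, seed=0):
    rng = random.Random(seed)
    return sum(covers_with_repetitions(path, L, rng)
               for _ in range(trials)) / trials
```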

Monotonic path minimizing covering probability
In Conjecture 1.11, we conjecture that, when restricting to monotone paths, the covering probability is minimized when the path goes straight along some axis.
The intuition is that, while all monotone paths connecting 0 and $\partial B_1(0, N)$ have the same $L_1$ length, the $L_2$ distance is maximized along the straight line, which makes it the most difficult to cover. This conjecture is supported for small $N$. In the following example, we have $d = 3$ and $N = 3$. By the symmetry of simple random walk, one can easily see that for each monotone path of length $N + 1 = 4$ starting at 0, the covering probability must equal that of one of the following five:

1. $(0,0,0), (1,0,0), (2,0,0), (3,0,0)$;
2. $(0,0,0), (1,0,0), (2,0,0), (2,1,0)$;
3. $(0,0,0), (1,0,0), (1,1,0), (2,1,0)$;
4. $(0,0,0), (1,0,0), (1,1,0), (1,2,0)$;
5. $(0,0,0), (1,0,0), (1,1,0), (1,1,1)$.
The following simulation (see Figure 4) shows that when $L = 400$, the covering probability of path 1 is the smallest of them all. It should be easy to use the same calculation as in Proposition 6.2 to show the rigorous result when $L = \infty$.
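A sketch of this experiment (ours): estimating and comparing the covering probabilities of, e.g., path 1 (the straight path) and path 5 (the staircase) at $L = 400$.

```python
import random

def covering_prob(path, L=400, trials=100_000, seed=0):
    # Monte Carlo estimate of P(Trace(path) <= Trace(X_0, ..., X_L)), d = 3.
    rng, target, hits = random.Random(seed), set(map(tuple, path)), 0
    for _ in range(trials):
        x, seen = [0, 0, 0], {(0, 0, 0)}
        for _ in range(L):
            i = rng.randrange(3)
            x[i] += rng.choice((-1, 1))
            seen.add(tuple(x))
        hits += target <= seen
    return hits / trials

straight  = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)]   # path 1
staircase = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]   # path 5
# Conjecture 1.11 predicts covering_prob(straight) <= covering_prob(staircase).
```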
A.1 Asymptotics of returning probabilities

In this appendix, we find the asymptotic behavior of the returning probability (to the diagonal line and to the path $\overline{\mathcal{P}}$) of a $d$-dimensional simple random walk as $d \to \infty$. For the asymptotics of the return probability to the origin, the result is stated in [10]. To be precise, for a $d$-dimensional simple random walk $\{X_{d,n}\}_{n=0}^{\infty}$ starting at 0 and any $x \in \mathbb{Z}^d$, define the stopping time $\tau_{d,x} = \inf\{n \ge 1 : X_{d,n} = x\}$. Then the returning probability is defined by $p_d = P(\tau_{d,0} < \infty)$. In [10] it is stated that $\lim_{d \to \infty} 2dp_d = 1$. However, we believe the proof in [10] is not completely rigorous. A rigorous proof of the asymptotic above can be found in [6], and then independently in [4]. Moreover, using the same method, one may also show the higher order asymptotics of $p_d$, which are stated in [3].
In this appendix, we apply a similar method to non-simple random walks. In particular, for the specific $(d-1)$-dimensional walk defined by $\hat{X}^{i}_{d-1,n} = X^{i}_{d,n} - X^{i+1}_{d,n}$, $1 \le i \le d - 1$, where $X^{i}_{d,n}$ is the $i$th coordinate of $X_{d,n}$, we can show the same asymptotic for $\hat{X}_{d-1,n}$, which also gives us the asymptotic of the probability that a $d$-dimensional simple random walk ever returns to the diagonal line. To make the statement precise, consider the diagonal line in $\mathbb{Z}^d$,
$$l_d = \{(n, n, \cdots, n) \in \mathbb{Z}^d : n \in \mathbb{Z}\} \subset \mathbb{Z}^d,$$
and let $P_d = P(\tau_{d,l_d} < \infty)$ be the returning probability to $l_d$, where $\tau_{d,l_d} = \inf\{n \ge 1 : X_{d,n} \in l_d\}$. We then have:

Theorem A.1. $\lim_{d \to \infty} 2dP_d = 1$.
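The correspondence between $X_{d,n}$ and $\hat{X}_{d-1,n}$ can be sanity-checked numerically (a sketch of ours, assuming the consecutive-difference definition $\hat{X}^{i}_{d-1,n} = X^{i}_{d,n} - X^{i+1}_{d,n}$ given above):

```python
import random

def project(x):
    # hat{X}_{d-1}: consecutive coordinate differences of X_d.
    return tuple(x[i] - x[i + 1] for i in range(len(x) - 1))

def returns_to_diagonal(d, n_max, rng):
    # X_d lies on l_d exactly when its projection vanishes.
    x = [0] * d
    for _ in range(n_max):
        i = rng.randrange(d)
        x[i] += rng.choice((-1, 1))
        if project(x) == (0,) * (d - 1):
            return True
    return False

rng = random.Random(0)
est = sum(returns_to_diagonal(4, 2_000, rng) for _ in range(2_000)) / 2_000
# est is a (slightly truncated) estimate of P_4; Theorem A.1 says P_d ~ 1/(2d).
```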
With Theorem A.1, we further look at the probability that a $d$-dimensional simple random walk starting from some point of $\mathrm{Trace}(\overline{\mathcal{P}})$ will ever return to $\mathrm{Trace}(\overline{\mathcal{P}})$. Note that each point of $\mathrm{Trace}(\overline{\mathcal{P}})$ either lies on the diagonal, or there must be some $1 \le k < d$ and $0 \le n \le N/d$ locating it within $\mathrm{arc}_{n+1}$. Thus, when looking at the projected point $\hat{x}$, we must have either $\hat{x} = 0$ or $\hat{x} = e_{d-1,i}$ for some $i = 1, 2, \cdots, d-1$. In this appendix, we will use the notation $e_{d-1,0} = 0$ and let $D_{d-1} = \{e_{d-1,i}\}_{i=0}^{d-1}$. One can immediately see that when a simple random walk $X_{d,n}$ starting from some point of $\mathrm{Trace}(\overline{\mathcal{P}})$ returns to $\mathrm{Trace}(\overline{\mathcal{P}})$, the corresponding non-simple random walk $\hat{X}_{d-1,n}$ starting from $D_{d-1}$ returns to $D_{d-1}$. Thus, for any simple random walk $X_{d,n}$ starting at 0, define the stopping times $T_{d,0} = 0$, $T_{d,1} = \inf\{n \ge 1 : X_{d,n} \in \mathrm{Trace}(\overline{\mathcal{P}})\}$, and $T_{d,k} = \inf\{n > T_{d,k-1} : X_{d,n} \in \mathrm{Trace}(\overline{\mathcal{P}})\}$ for all $k \ge 2$, with the convention $\inf\{\emptyset\} = \infty$. For $\hat{X}_{d-1,n}$ also starting at 0, and any $0 \le i, j \le d - 1$, define the stopping time

Then it is easy to see that for any $k \ge 0$

With a basically similar but more complicated technique as in the proof of Theorem A.1, we have:

Theorem A.2. There is a $C < \infty$ such that for all $d \ge 4$, the probability that a $d$-dimensional simple random walk starting from any point of $\mathrm{Trace}(\overline{\mathcal{P}})$ ever returns to $\mathrm{Trace}(\overline{\mathcal{P}})$ is at most $C/d$.

Combining Theorem A.2 with the fact that the walk must return to $\mathrm{Trace}(\overline{\mathcal{P}})$ at least $N$ times in order to cover it, the proof of Theorem 1.7 is complete.

A.2 Useful facts from calculus
In this section, we list some very basic but useful facts from calculus that we are going to use later in the proof.

A.3 Returning probability to the diagonal line
In this section we prove Theorem A.1. Recall that $X_{d,n} \in l_d$ if and only if $X^{1}_{d,n} = X^{2}_{d,n} = \cdots = X^{d}_{d,n}$, which in turn is equivalent to $\hat{X}_{d-1,n} = 0$. For the new process $\hat{X}_{d-1,n}$, one can easily check that it also forms a $(d-1)$-dimensional random walk with transition probability $P(\hat{X}_{d-1,1} = \pm e_{d-1,1}) = \frac{1}{2d}$, so that $\hat{X}_{d-1,n}$ also forms a finite range symmetric random walk. Moreover, the characteristic function of the increment of $\hat{X}_{d-1,n}$ is given by

And we also have that $\hat{\tau}$

is the Green's function for $\hat{X}_{d-1,n}$, i.e.,

Then we only need to show that

Moreover, using exactly the same embedded random walk argument as in Lemma 1 of [11] on $X_{d,n}$ and $\tau_{d,l_d}$, one can immediately obtain $P_{d+1} \le P_d$, which is also equivalent to $\hat{G}_d(0) \le \hat{G}_{d-1}(0)$. So in order to show (A.8), we can without loss of generality concentrate on even $d$'s.
Thus one can see that the first three integration terms in (A.9) satisfy the following:

And we have (A.10). It then only remains to show that, for sufficiently large even $d$, (A.11) holds. To show (A.11), we rewrite the integral above as the expectation of some function of a sequence of i.i.d. random variables. Letting $\hat{X}_1, \hat{X}_2, \cdots, \hat{X}_{d-1}$ be i.i.d. uniform random variables on $[-\pi, \pi]$, we can define

Then according to our construction and the definition of $\hat{E}_d$, we have
So we have, for sufficiently large even $d = 2n$,

Moreover, note that for $d = 2n$ we have

Applying Cramér's Theorem to $\hat{Y}_{1,d-1}$ and $\hat{Y}_{2,d-1}$, we see that there are some $u, U \in (0, \infty)$ (in fact one can use $u = 1/16$ and $U = 2$) such that

Lastly, for $\max_{\omega \in B^{c}_{d-1}}$

Proof. Let $d_0$ be a positive integer such that $\sqrt{c}\, d_0^2 > 1$. Then for any $d \ge d_0$ and any $(x_1, \cdots, x_{d-1}) \in D_1(d)$, by the definition of $D_1(d)$ and the fact that $x_1 \in [-\pi, \pi]$, we must have

which implies that

Thus we must have $|x_{k-1} - x_k| \le 3\pi/2$, which gives that

And now we have a contradiction.

A.4 Proof of Theorem A.2
With the asymptotics of the return probabilities above, we can without loss of generality again concentrate on even $d$'s. Then for each $i$, one can immediately have

Thus, in order to prove Theorem A.2, it is sufficient to show that for all sufficiently large even $d$'s, there is a $C < \infty$ such that for any $0 \le i \le d - 1$, (A.28) holds. Thus, we will concentrate on controlling

Using the same technique as in the proof of Theorem A.1, and noting that

and

we call $\hat{E}$ the tail term. For any $0 \le i \ne j \le d - 1$, let $d(i, j)$ be their distance mod $d$, i.e.,

Proof. By symmetry we can, without loss of generality, assume that $j > i$. Recalling that

where we use the convention that $\theta_0 = \theta_d = 0$, for each term in the summation above it is easy to see that there are some nonnegative integers $k_0, \cdots, k_{d-1}$ with

Thus, it is sufficient to show that for any nonnegative integers $k_0, \cdots, k_{d-1}$ with

(A.32)

First, if $i = 0$ then we have $j > k$ and $d - j > k$. Thus we can separate the product in (A.32) as

If $k_{j-1} + k_j$ is an even number, integrating over $\theta_j$ and applying (6) gives us (A.33). If $k_{j-1} + k_j$ is odd, without loss of generality we can assume $k_{j-1}$ is odd. Noting that by the pigeonhole principle we must have at least one of the $k_h$'s equal to zero, which is even, let $h_0 = \sup_{h \le j-1}\{k_h \text{ is even}\}$. Then $h_0 \in [0, j-2]$, where we use the standard conventions $\sup\{\emptyset\} = -\infty$ and $\inf\{\emptyset\} = \infty$. By definition $k_{h_0+1}$ is odd, and thus

Note that $k_{h_0} + k_{h_0+1}$ is odd, so we integrate over $\theta_{h_0+1}$, and (6) again gives us (A.33). Symmetrically, if $k_j$ is odd, then we can look at $h_1 = \inf_{h \ge j}\{k_h \text{ is even}\}$ and have $h_1 \in [j+1, d-1]$. This in turn implies that $k_{h_1} + k_{h_1-1}$ is odd, so we integrate over $\theta_{h_1}$ and use (6). We use the same argument in the following discussions.
Similarly, if $i > 0$, with $d(i, j) > k$ implying $j - i > k$ as well as $d + i - j > k$, we can also have
And again note that

So if either $k_{i-1} + k_i$ or $k_{j-1} + k_j$ is an even number, (6) again gives us (A.33). Now suppose both $k_{i-1} + k_i$ and $k_{j-1} + k_j$ are odd. If either $k_i$ or $k_{j-1}$ is odd, we can without loss of generality assume the odd one is $k_{j-1}$. Note that

Then again we have that $k_{h_0} + k_{h_0+1}$ is odd, so we can integrate over $\theta_{h_0+1}$, and (6) again gives us (A.33).
Otherwise, we must have that both $k_{i-1}$ and $k_j$ are odd numbers. Again note that

At least one of the $k_h$'s above must be 0; say, again without loss of generality, it is $k_{h_0}$. Then $k_{h_0} + k_{h_0+1}$ is odd, so we can once again integrate over $\theta_{h_0+1}$ and use (6) to obtain (A.33). Combining all the possible situations together, the proof of this lemma is complete.
With Lemma A.4, one can immediately see that for any $0 \le i \le d - 1$ and any $j$ such that $d(i, j) \ge 6$,
Then, recalling that in the proof of Theorem A.1 we let $\hat{X}_1, \hat{X}_2, \cdots, \hat{X}_{d-1}$ be i.i.d. uniform random variables on $[-\pi, \pi]$, and

we define $\hat{Z}$ by

Then again we have, for any $i, j$,

Then, recalling $\bar{B}$,

we can similarly have

Thus we have shown that every term in this finite summation is either 0 or $O(d^{-1})$. Taking $C = 28$, the proof of Theorem 1.7 is complete.
Remark A.5. It is clear that the upper bound $C = 28$ we find here is not sharp, since we only want the right order and are actually using very weak upper bounds for those 55 terms in the summation. In fact, any $C > 3/2$ will be a valid upper bound for sufficiently large $d$. Among the 55 terms in the summation, one can easily see that the term $j = i$, $p = 2$ and the two terms with $d(i, j) = 1$, $p = 1$ are the only ones $\sim d^{-1}$, and each of them is $1/(2d) + o(d^{-1})$. All the other terms are either 0 or $o(d^{-1})$. The calculation is trivial calculus but very tedious, especially for someone who is reading (or writing) this not-too-short paper.

B
In this appendix we prove that the monotonicity fails when considering covering probability with repetitions.