Minimal dispersion on the cube and the torus

We improve some upper bounds for the minimal dispersion on the cube and on the torus. Our new ingredient is an improvement of a probabilistic lemma used to obtain upper bounds for the dispersion in several previous works. The new lemma combines a random choice of points in the cube with a non-random one, which leads to better upper bounds for the minimal dispersion.


Introduction
For a given set of points P ⊂ Q^d := [0, 1]^d, its dispersion is defined as the supremum of the volumes of axis-parallel boxes in Q^d that do not intersect P. The minimal dispersion on the cube is then defined as the infimum of the dispersions over all subsets P ⊂ Q^d of cardinality n. The dispersion on the torus is defined similarly. This notion goes back to [15], where a notion from [10] was modified. It is often more convenient to work with its inverse function, which, given a positive ε, measures the smallest positive integer N = N(ε, d) such that there exists a configuration X of N points in [0, 1]^d with the property that every axis-parallel box of volume exceeding ε contains at least one of these points. We refer to [12] and the references therein for the history of the question.
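These definitions are easy to experiment with numerically. The following sketch (in Python; the function names are ours and not from the paper) estimates the dispersion of a finite point set from below by sampling random axis-parallel boxes and keeping the largest volume among those that miss P:

```python
import random

def box_volume(lo, hi):
    """Volume of the axis-parallel box with corner points lo and hi."""
    v = 1.0
    for l, h in zip(lo, hi):
        v *= h - l
    return v

def estimate_dispersion(points, d, trials=20000, rng=None):
    """Monte-Carlo lower bound on disp(P): sample random axis-parallel
    boxes in [0,1]^d and return the largest volume among those whose
    interior contains no point of P."""
    rng = rng or random.Random()
    best = 0.0
    for _ in range(trials):
        lo = [rng.random() for _ in range(d)]
        hi = [l + rng.random() * (1.0 - l) for l in lo]
        if box_volume(lo, hi) <= best:
            continue  # cannot improve the current record
        empty = not any(
            all(lo[i] < p[i] < hi[i] for i in range(d)) for p in points
        )
        if empty:
            best = box_volume(lo, hi)
    return best
```

For example, for the single point (1/2, ..., 1/2) in dimension one the estimate approaches 1/2 from below as the number of trials grows, in line with the observation below that N(ε, d) = 1 for ε ≥ 1/2.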
In this note we improve some upper bounds for the minimal dispersion on the cube and on the torus, and for its inverse function.
Several recent proofs of upper bounds on N use the following scheme. In the first step, one approximates the set of axis-parallel boxes in Q^d by a finite set N with the property that every axis-parallel box of volume at least ε contains at least one element of N. In the second step, one constructs a set P such that each element of N intersects P, in which case we say that P is a piercing set for N. To ensure that P pierces N, the points of P are chosen at random, according to the uniform distribution on the cube, and a union bound is then applied to verify that such a random P intersects every element of N with high probability.
Our main new ingredient is an improvement of the second step, in which the random choice of points is followed by a deterministic phase. More precisely, our new probabilistic Lemma 3.3 has two phases. In the first phase we choose a (smaller) set Q of random points and estimate how many sets in our approximation N have empty intersection with Q (previous proofs insisted that every set in N intersect Q). In the second phase we take care of the "empty" sets in N (those that do not intersect Q) by choosing a representative point in each such set. The set Q_0 of representative points together with Q forms a piercing set for N.
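As an illustration, the two-phase scheme can be sketched for a finite family of boxes as follows (Python; the specific choice of M and the use of box centres as representative points are assumptions of this sketch, not taken from the paper):

```python
import math
import random

def pierced(box, pts):
    """True if some point of pts lies in the closed box, where the box
    is given as a list of (lo, hi) pairs, one per coordinate."""
    return any(
        all(lo <= p[i] <= hi for i, (lo, hi) in enumerate(box)) for p in pts
    )

def two_phase_piercing(boxes, delta, rng):
    """Phase 1: draw M uniform random points Q in [0,1]^d.
    Phase 2: add one representative point (here, the centre) for each
    box that Q missed.  Every box is assumed to have volume >= delta."""
    d = len(boxes[0])
    n = len(boxes)
    # A choice of M mimicking the optimisation in the proof (assumption).
    M = max(1, math.ceil(math.log(max(delta * n, 2.0)) / delta))
    Q = [[rng.random() for _ in range(d)] for _ in range(M)]
    Q0 = [
        [(lo + hi) / 2 for lo, hi in box]
        for box in boxes
        if not pierced(box, Q)
    ]
    return Q + Q0
```

By construction the returned set pierces every box; the point of Lemma 3.3 is that the expected number of representative points added in the second phase is small, so the total size beats the purely random bound.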
The new Lemma 3.3 then leads to better upper bounds on the minimal dispersion, see (4). However, in the second phase we lose randomness, so our new bound does not hold for a random choice of points. Moreover, it is known that our bound cannot hold in the random setting: see for instance [8], where the authors provide lower bounds on the inverse of the minimal dispersion for a random set of points (formula (2) below).
We would also like to mention that the idea of choosing a random set of points in a first phase and then completing it by a deterministic choice (depending on the realization of the random set) in the second phase is not new and has been used in the literature, see e.g. [14] or [1, Chapter 3].

Notation, preliminaries and main result
Throughout the paper the following notation is used. For a positive integer d define the unit cube Q^d = [0, 1]^d. We use | · | to denote both the volume of a subset of R^d and the cardinality of a finite set. The letters C, C_0, C_1, ..., c, c_0, c_1, ... always denote absolute positive constants (independent of all other parameters and sets), whose values may change from line to line.
Define the set of all axis-parallel boxes in Q^d by

R_d = { ∏_{i=1}^d [a_i, b_i] : 0 ≤ a_i < b_i ≤ 1 for all i }.

For a set P ⊂ Q^d define its dispersion by

disp(P) = sup { |B| : B ∈ R_d, B ∩ P = ∅ }.

The minimal dispersion disp* is the following function of the positive integers n, d:

disp*(n, d) = inf { disp(P) : P ⊂ Q^d, |P| = n }.

Finally, we define the inverse of the minimal dispersion as

N(ε, d) = min { n : disp*(n, d) ≤ ε }.

Most of the results will be stated in terms of the inverse of the minimal dispersion. We will also deal with the dispersion on the torus, which is defined similarly: the boxes are taken from the family R̃_d of periodic boxes ∏_{i=1}^d I(a_i, b_i), where I(a, b) = [a, b] if a ≤ b and I(a, b) = [0, b] ∪ [a, 1] if a > b; the corresponding quantities are denoted by disp̃(P), disp̃*(n, d) and Ñ(ε, d).

The behavior of the dispersion has been intensively studied during the last decade, and it turns out that it behaves differently in different regimes. When ε is extremely small with respect to d, namely ε < d^{−d²}, the best known bound was obtained in [4]; it improved the bound C_d/ε, with C_d exponential in d, obtained in the previous works [15, 5, 2]. Furthermore, the authors of [4] have also shown a corresponding bound for ε ≤ (4d. On the other hand, for relatively large ε, namely for ε > (ln² d · ln ln d)/d, the best upper bound was obtained in [11], improving the previous works [17] and [20]. Very recently it was shown in [18] that this bound is sharp (up to a logarithmic factor) whenever 1/4 ≥ ε ≥ 1/(4√d). We would like to mention here the bound from [2], which holds for all ε < 1/4 and was the first lower bound showing that the dispersion grows with the dimension. We also note that for large ε, namely ε ∈ (1/4, 1/2), the upper bound does not grow with the dimension; see [13] (and also [17]). Clearly, for ε ≥ 1/2 one has N(ε, d) = 1, by taking the single point (1/2, ..., 1/2). However, for ε neither very large nor very small with respect to d the picture is different. The best known bound in this regime, (1), was obtained in [12], improving previous results from [3, 16, 11]. The proof in [12] shows that a random choice of independent points uniformly distributed in the cube works. Moreover, it was shown in [8] that, using such a random choice of points, one
cannot expect anything better than the lower bound (2); that is, the bound (1) is sharp, up to a double-logarithmic factor, for a random choice of points. In this note we improve (1) by eliminating the ln(1/ε) summand. Of course, as (2) shows, this improvement cannot be achieved with a random choice of points. Before formulating our main result we briefly discuss the dispersion on the torus. In [19] a lower bound valid for all ε ∈ (0, 1) was obtained. It is interesting to note that, contrary to the non-periodic case, this lower bound is at least linear in d and always grows with d, even for ε > 1/2. The best known upper bound, (3), was obtained in [12], improving the previous bounds from [11] and [16]; here, a random choice of points was also used. In this note we improve (3) to C d (ln ln(e/ε) + ln(2d))/ε (which is always better); however, the choice of points is, once again, not random.
The main result of this paper is the following. Combining the bounds of Theorem 2.1 with the previously known bounds gives the current state of the art for the inverse minimal dispersion on the cube, which is summarized in Figure 1.

A new probabilistic lemma
For ε > 0 consider the sets of all boxes from R_d (resp., R̃_d) of volume at least ε, i.e.,

B_d(ε) = { B ∈ R_d : |B| ≥ ε } and B̃_d(ε) = { B ∈ R̃_d : |B| ≥ ε }.

Notice that these collections are infinite; we use the usual approach and approximate them by finite collections. Following [12], we define a δ-approximation (see Definition 3.1), which is a slight modification of the notion introduced in [11]. Essentially the same notion was recently considered in a similar context by M. Gnewuch [7]. In several works on the minimal dispersion a variant of the following probabilistic lemma was a key ingredient (Theorem 1 in [16], Lemma 2.3 with Remark 2.4 in [11], Lemma 2.2 in [12]).

Lemma 3.2. Let d ≥ 1 and ε, δ ∈ (0, 1). Let N be a δ-approximation for B_d(ε), or a δ-approximation for B̃_d(ε). Then there exists a piercing set for N of cardinality at most ⌊(3 ln |N|)/δ⌋. Moreover, the random choice of independent points (with respect to the uniform distribution on Q^d) gives the result with probability at least 1 − 1/|N|.
Our next lemma improves the bounds of Lemma 3.2. We would like to emphasize that our proof has two phases: a first, random phase, similar to the original proof of Lemma 3.2, followed by a second, non-random phase. Notice that, unlike the bounds of Lemma 3.2, the bounds in Lemma 3.3 do not hold for a random choice of independent points in Q^d, in view of (2).

Proof. Let M be a positive integer, to be specified later. Consider a collection X = {x_1, . . ., x_M} of points chosen independently and uniformly at random from Q^d.
For each A ∈ N consider the "bad" event B_A = {A ∩ X = ∅}. By the assumption on the volume of the sets in N and by the independence of the x_i's, for every A ∈ N we have

P(B_A) = (1 − |A|)^M ≤ (1 − δ)^M ≤ e^{−δM}.

Next, let b = b(X) be the random variable counting the number of bad events, i.e., b counts the number of sets in N that do not intersect X. Clearly,

b = Σ_{A ∈ N} χ_{B_A},

where χ_E denotes the indicator of the event E. Then

E b = Σ_{A ∈ N} P(B_A) ≤ |N| e^{−δM}.

Therefore there is a realization Q = {x_1, . . ., x_M} of X for which the family N_0 of sets in N that do not intersect Q satisfies |N_0| ≤ |N| e^{−δM}. Finally, let Q_0 be a minimal collection of points which has non-empty intersection with every set in N_0 (we always have |Q_0| ≤ |N_0|), and define P = Q ∪ Q_0. Then, by construction, A ∩ P ≠ ∅ for all A ∈ N, and

|P| ≤ M + |N| e^{−δM}.

Choosing M = ⌈(ln(δ|N|))/δ⌉ gives |P| ≤ (ln(δ|N|) + 2)/δ, which completes the proof.
Remark. Note that in order to have a fully random choice of points, i.e., in order to show that A ∩ X ≠ ∅ for all A ∈ N, one needs to work with the event b = 0. It is easy to see that for M = ⌊(3 ln |N|)/δ⌋ one has

P(b ≥ 1) ≤ E b ≤ |N| (1 − δ)^M < 1,

so that with positive probability every A ∈ N intersects X. This proves Lemma 3.2, which was used in [16, 11, 12].
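The computation behind the remark is easy to check numerically; the following sketch (the helper name is ours, not from the paper) evaluates the union bound |N|(1 − δ)^M for M = ⌊(3 ln |N|)/δ⌋:

```python
import math

def union_bound(n_sets, delta):
    """Upper bound |N|(1 - delta)^M on P(b >= 1) when M =
    floor(3 ln|N| / delta) points are drawn uniformly at random."""
    M = math.floor(3 * math.log(n_sets) / delta)
    return n_sets * (1.0 - delta) ** M
```

For |N| ≥ 2 the value stays below 1 (it is roughly |N|^{−2}), so a fully random choice of M points pierces N with positive probability, which is the content of Lemma 3.2.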
Completing the proof of Theorem 2.1

To complete the proof we use the following result from [12] (Propositions 3.2 and 3.3). Lemma 3.3 then implies the result.

Definition 3.1 (δ-approximation of B_d(ε) and B̃_d(ε)). For 0 < δ ≤ ε ≤ 1 we say that a collection N ⊆ R_d is a δ-approximation for B_d(ε) if for every B ∈ B_d(ε) there is B_0 ∈ N such that B_0 ⊆ B and |B_0| ≥ δ. We define a δ-approximation for B̃_d(ε) in a similar way.
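The defining property can be tested mechanically on finite examples. The sketch below (Python; the names and the sample-based check are our own illustration, not part of the paper) verifies, for each box B in a given sample, that the candidate net contains some B_0 ⊆ B with |B_0| ≥ δ:

```python
def volume(box):
    """Volume of a box given as a list of (lo, hi) pairs."""
    v = 1.0
    for lo, hi in box:
        v *= hi - lo
    return v

def contains(outer, inner):
    """True if the box `inner` is a subset of the box `outer`."""
    return all(
        lo <= lo0 and hi0 <= hi
        for (lo0, hi0), (lo, hi) in zip(inner, outer)
    )

def is_delta_approx_on(sample, net, delta):
    """Check the defining property of Definition 3.1 on a finite sample
    of boxes (each assumed to have volume >= eps): every sampled B must
    contain some B0 from `net` with B0 ⊆ B and |B0| >= delta."""
    return all(
        any(contains(b, b0) and volume(b0) >= delta for b0 in net)
        for b in sample
    )
```

For instance, in dimension one the four quarter-intervals form a (1/4)-approximation for the intervals of length at least 1/2, since every such interval fully contains one of them.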