The Total Acquisition Number of Random Geometric Graphs

Let $G$ be a graph in which each vertex initially has weight 1. In each step, the weight from a vertex $u$ to a neighbouring vertex $v$ can be moved, provided that the weight on $v$ is at least as large as the weight on $u$. The total acquisition number of $G$, denoted by $a_t(G)$, is the minimum cardinality of the set of vertices with positive weight at the end of the process. In this paper, we investigate random geometric graphs $G(n,r)$ with $n$ vertices distributed u.a.r. in $[0,\sqrt{n}]^2$ and two vertices being adjacent if and only if their distance is at most $r$. We show that asymptotically almost surely $a_t(G(n,r)) = \Theta( n / (r \lg r)^2)$ for the whole range of $r=r_n \ge 1$ such that $r \lg r \le \sqrt{n}$. By monotonicity, asymptotically almost surely $a_t(G(n,r)) = \Theta(n)$ if $r<1$, and $a_t(G(n,r)) = \Theta(1)$ if $r \lg r>\sqrt{n}$.


Introduction
Gossiping and broadcasting are two well-studied problems involving information dissemination in a group of individuals connected by a communication network [8]. In the gossip problem, each member has a unique piece of information which she would like to pass to everyone else. In the broadcast problem, there is a single piece of information (starting at one member) which must be passed to every other member of the network. These problems have received attention from mathematicians as well as computer scientists due to their applications in distributed computing [3]. Gossip and broadcast are respectively known as "all-to-all" and "one-to-all" communication problems. In this paper, we consider the problem of acquisition, which is a type of "all-to-one" problem. Suppose each vertex of a graph begins with a weight of 1 (this can be thought of as the piece of information starting at that vertex). A total acquisition move is a transfer of all the weight from a vertex $u$ onto a neighbouring vertex $v$, provided that immediately prior to the move, the weight on $v$ is at least the weight on $u$. Suppose a number of total acquisition moves are made until no such moves remain. Such a maximal sequence of moves is referred to as an acquisition protocol, and the set of vertices that retain positive weight after an acquisition protocol is called a residual set. Note that any residual set is necessarily an independent set. Given a graph $G$, we are interested in the minimum possible size of a residual set and refer to this number as the total acquisition number of $G$, denoted $a_t(G)$. The restriction to total acquisition moves can be motivated by the so-called "smaller to larger" rule in disjoint-set data structures. For example, in the UNION-FIND data structure with linked lists, when taking a union, the smaller list should always be appended to the larger list. This heuristic improves the amortized performance over sequences of union operations.
The third author is supported in part by NSERC and Ryerson University.

Example: The weight of a vertex can at most double with every total acquisition move, and so a vertex with degree $d$ can carry at most weight $2^d$. (We will later use this fact in Observation 2.1.) For a cycle $C_{4k}$ (for some $k \in \mathbb{N}$), an acquisition protocol that leaves a residual set consisting of every fourth vertex is the best we can do; see Figure 1. Therefore, $a_t(C_{4k}) = k$. The parameter $a_t(G)$ was introduced by Lampert and Slater [10] and subsequently studied in [14,11]. In [10], it was shown that $a_t(G) \le (n+1)/3$ for any connected graph $G$ on $n$ vertices and that this bound is tight. Slater and Wang [14], via a reduction to the three-dimensional matching problem, showed that it is NP-complete to determine whether $a_t(G) = 1$ for general graphs $G$. In LeSaulnier et al. [11], various upper bounds on the acquisition number of trees were shown in terms of the diameter and the number of vertices, $n$. They also showed that $a_t(G) \le 32 \log n \log\log n$ (here $\log n$ denotes the natural logarithm, but throughout the paper we mostly use $\lg n$, the binary logarithm) for all graphs of diameter 2, and conjectured that the true bound is constant. For work on game variations of the parameter and on variations where acquisition moves need not transfer the full weight of a vertex, see [16,13,15].
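The every-fourth-vertex protocol on $C_{4k}$ from the example above can be verified with a short simulation. This is an illustrative sketch of our own (the `move` helper and the block ordering are not from the paper):

```python
# Sketch (our own illustration): verify the every-fourth-vertex protocol on C_{4k}.
# Vertices of the cycle C_n are 0, ..., n-1; vertex i is adjacent to (i +/- 1) mod n.

def move(weights, u, v):
    """Total acquisition move: shift all weight from u onto a neighbour v.
    Legal only if weights[v] >= weights[u] > 0."""
    assert weights[v] >= weights[u] > 0
    weights[v] += weights[u]
    weights[u] = 0

def cycle_protocol(k):
    """On C_{4k}, merge each block of four consecutive weights onto one vertex."""
    w = [1] * (4 * k)
    for b in range(k):                   # block: vertices 4b, 4b+1, 4b+2, 4b+3
        a, s, t, d = 4 * b, 4 * b + 1, 4 * b + 2, 4 * b + 3
        move(w, a, s)                    # s now carries weight 2
        move(w, d, t)                    # t now carries weight 2
        move(w, t, s)                    # s now carries weight 4
    return w

w = cycle_protocol(3)                    # C_12
print([v for v, wt in enumerate(w) if wt > 0], sum(w))   # prints: [1, 5, 9] 12
```

Each surviving vertex carries weight 4, and the residual set $\{1, 5, 9\}$ is independent in $C_{12}$, matching $a_t(C_{4k}) = k$.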
Randomness often plays a part in the study of information dissemination problems, usually in the form of a random network or a randomized protocol; see e.g. [4,5,6]. The total acquisition number of the Erdős-Rényi-Gilbert random graph $G(n, p)$, in which potential edges among $n$ vertices are added independently with probability $p$, was recently studied in [2]. In particular, LeSaulnier et al. [11] asked for the minimum value of $p = p_n$ such that $a_t(G(n, p)) = 1$ asymptotically almost surely (see below for a formal definition). In [2] it was proved that $p = \lg n / n \approx 1.4427 \log n / n$ is a sharp threshold for this property. Moreover, it was also proved that almost all trees $T$ satisfy $a_t(T) = \Theta(n)$, confirming a conjecture of West. Another way randomness can come into the picture is when the initial weights are generated at random. This direction, in particular the case where vertex weights are initially assigned according to independent Poisson distributions of intensity 1, was recently considered in [7].
In this note we consider the random geometric graph $G(\mathcal{X}_n, r_n)$, where (i) $\mathcal{X}_n$ is a set of $n$ points located independently and uniformly at random in $[0, \sqrt{n}]^2$, (ii) $(r_n)_{n \ge 1}$ is a sequence of positive real numbers, and (iii) for $\mathcal{X} \subseteq \mathbb{R}^2$ and $r > 0$, the graph $G(\mathcal{X}, r)$ is defined to have vertex set $\mathcal{X}$, with two vertices connected by an edge if and only if their spatial locations are at Euclidean distance at most $r$ from each other. As is typical in random graph theory, we shall consider only asymptotic properties of $G(\mathcal{X}_n, r_n)$ as $n \to \infty$. We will therefore write $r = r_n$ and, identifying vertices with their spatial locations, define $G(n, r)$ as the graph with vertex set $[n] = \{1, 2, \ldots, n\}$ corresponding to $n$ locations chosen independently and uniformly at random in $[0, \sqrt{n}]^2$, with a pair of vertices joined by an edge whenever they are within Euclidean distance $r$. For more details see, for example, the monograph [12]. Finally, we say that an event in a probability space holds asymptotically almost surely (a.a.s.) if its probability tends to one as $n$ goes to infinity.
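The model $G(n, r)$ is easy to sample directly; the following is a minimal sketch of our own (not from the paper), using a quadratic-time distance check:

```python
# Sketch (our own illustration, not from the paper): sample the model G(n, r).
import random

def random_geometric_graph(n, r, seed=0):
    """n points i.u.r. in [0, sqrt(n)]^2; an edge joins points at distance <= r."""
    rng = random.Random(seed)
    side = n ** 0.5
    pts = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = pts[i][0] - pts[j][0], pts[i][1] - pts[j][1]
            if dx * dx + dy * dy <= r * r:
                adj[i].add(j)
                adj[j].add(i)
    return pts, adj

pts, adj = random_geometric_graph(500, 2.0)
avg_deg = sum(len(s) for s in adj.values()) / 500
# Away from the boundary, the expected degree is about pi * r^2 (~12.6 here).
print(avg_deg)
```

Since the intensity of points is 1, the expected number of neighbours of a vertex far from the boundary is the area $\pi r^2$ of its disc.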
We are going to show the following result.

Theorem 1.1. Let $r = r_n$ be any positive real number. Then, a.a.s. $a_t(G(n, r)) = \Theta(f_n)$, where
\[
f_n = \begin{cases}
n & \text{if } r < 1,\\
n/(r \lg r)^2 & \text{if } r \ge 1 \text{ and } r \lg r \le \sqrt{n},\\
1 & \text{if } r \lg r > \sqrt{n}.
\end{cases}
\]

Lower Bound
Let us start with the following simple but useful observation.
Observation 2.1. If $v \in V$ is to acquire weight $w$ (at any time during the process of moving weights around), then $\deg(v)$, the degree of $v$, is at least $\lg w$. Moreover, all vertices that contributed to the weight of $w$ (at this point of the process) are at graph distance at most $\lg w$ from $v$.
Proof. Note that during each total acquisition move, when weight is shifted onto $v$ from some neighbouring vertex, the weight of $v$ can at most double. Thus, in addition to the 1 it starts with, $v$ can only ever acquire at most $1 + 2 + \cdots + 2^{\deg(v)-1} = 2^{\deg(v)} - 1$, and so $v$ can acquire at most weight $2^{\deg(v)}$. To see the second part, suppose that some vertex $u_0$ moved the initial weight of 1 it started with to $v$ through the path $(u_0, u_1, \ldots, u_{k-1}, u_k = v)$. It is easy to see that after $u_{i-1}$ transfers its weight onto $u_i$, $u_i$ has weight at least $2^i$. So if $u_0$ contributed to the weight of $w$, then $u_0$ must be at graph distance at most $\lg w$ from $v$. The proof of the observation is finished.
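The doubling argument in the proof above can be summarized in one line:

```latex
% Each of the at most deg(v) moves onto v at most doubles v's weight, so the
% total weight v can ever hold satisfies
\[
  w \;\le\; 1 + \bigl(1 + 2 + \cdots + 2^{\deg(v)-1}\bigr)
    \;=\; 1 + \bigl(2^{\deg(v)} - 1\bigr)
    \;=\; 2^{\deg(v)},
  \qquad\text{equivalently}\qquad
  \deg(v) \;\ge\; \lg w .
\]
```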
We will also use the following consequence of Chernoff's bound (see, for example, [9] and [1]).

Lemma 2.2.
(i) If $X$ is a binomial random variable with expectation $\mu$, and $0 < \delta < 1$, then
\[
\Pr\big(X \le (1-\delta)\mu\big) \le \exp\left(-\frac{\delta^2\mu}{2}\right),
\]
and if $\delta > 0$,
\[
\Pr\big(X \ge (1+\delta)\mu\big) \le \exp\left(-\frac{\delta^2\mu}{2+\delta}\right).
\]
(ii) If $X$ is a Poisson random variable with expectation $\mu$, and $0 < \varepsilon < 1$, then
\[
\Pr\big(X \le (1-\varepsilon)\mu\big) \le \exp\left(-\frac{\varepsilon^2\mu}{2}\right),
\]
and if $\varepsilon > 0$,
\[
\Pr\big(X \ge (1+\varepsilon)\mu\big) \le \exp\left(-\frac{\varepsilon^2\mu}{2+\varepsilon}\right).
\]
In particular, for $X$ being a Poisson or a binomial random variable with expectation $\mu$ and for $0 < \varepsilon < 1$, we have
\[
\Pr\big(|X - \mu| \ge \varepsilon\mu\big) \le 2\exp\left(-\frac{\varepsilon^2\mu}{3}\right).
\]

Now we are ready to prove the lower bound. First we concentrate on dense graphs, for which, in fact, we show a stronger result: a.a.s. no vertex can acquire large weight.

Theorem 2.3. Suppose that $r = r_n \ge c\sqrt{\lg n}/\lg\lg n$ for some sufficiently large $c \in \mathbb{R}$, and consider any acquisition protocol on $G(n, r)$. Then, a.a.s. each vertex in the residual set acquires $O((r\lg r)^2)$ weight. As a result, a.a.s. $a_t(G(n, r)) = \Omega\!\left(\frac{n}{(r\lg r)^2}\right)$.
Proof. Let $\ell = 2\lg r + 2\lg\lg r + \lg(8\pi)$. For a contradiction, suppose that at some point of the process some vertex $v$ acquires weight $w \ge 2^{\ell} = 8\pi(r\lg r)^2$. Since a single total acquisition move, which transfers all the weight from some neighbour of $v$ onto $v$, increases the weight on $v$ by a factor of at most 2, we may assume that $w < 2^{\ell+1}$. It follows from Observation 2.1 that all vertices contributing to the weight of $w$ are at graph distance at most $\ell + 1$ from $v$ (and so at Euclidean distance at most $(\ell+1)r$). The desired contradiction will be obtained by showing that a.a.s. no vertex has at least $2^{\ell}$ vertices (including the vertex itself) at Euclidean distance at most $(\ell+1)r$.
The remaining part is a simple consequence of Chernoff's bound and the union bound over all vertices. For a given vertex $v$, the number of vertices at Euclidean distance at most $(\ell+1)r$ is a random variable $Y$ that is stochastically bounded from above by the random variable $X \sim \mathrm{Bin}(n-1, \pi(\ell+1)^2 r^2/n)$ with $\mathbb{E}[X] \sim \pi\ell^2 r^2 \sim 4\pi(r\lg r)^2$. (Note that $Y = X$ if $v$ is at distance at least $(\ell+1)r$ from the boundary; otherwise, $Y \le X$.) It follows from Chernoff's bound that
\[
\Pr\big(Y \ge 2^{\ell}\big) \le \Pr\big(X \ge (2+o(1))\,\mathbb{E}[X]\big) \le \exp\big(-\Omega((r\lg r)^2)\big) = o(1/n),
\]
provided that $c$ is large enough. The conclusion follows from the union bound over all $n$ vertices of $G(n, r)$.
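The Chernoff-type tail bounds used here can be sanity-checked numerically against the exact binomial tail; the following sketch (our own, with arbitrarily chosen parameters) compares the exact upper tail with the two-sided bound $2\exp(-\varepsilon^2\mu/3)$:

```python
from math import comb, exp

def binom_upper_tail(n, p, t):
    """Exact Pr(X >= t) for X ~ Bin(n, p), by summing the probability mass."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(t, n + 1))

n, p, eps = 1000, 0.1, 0.5
mu = n * p                                # expectation = 100
tail = binom_upper_tail(n, p, int((1 + eps) * mu))
bound = 2 * exp(-eps ** 2 * mu / 3)
print(tail < bound)                       # the Chernoff-type bound dominates
```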
In order to simplify the proof of the theorem for sparser graphs we will make use of a technique known as de-Poissonization, which has many applications in geometric probability (see [12] for a detailed account of the subject). Here we only sketch it.
Consider the following related model of a random geometric graph. Let $V = V'$, where $V'$ is the set of points of a homogeneous Poisson point process of intensity 1 in $[0, \sqrt{n}]^2$. In other words, $V'$ consists of $N$ points in the square $[0, \sqrt{n}]^2$ chosen independently and uniformly at random, where $N$ is a Poisson random variable of mean $n$. Exactly as we did for the model $G(n, r)$, again identifying vertices with their spatial locations, we connect $u, v \in V'$ by an edge if the Euclidean distance between them is at most $r$. We denote this new model by $\mathcal{P}(n, r)$.
The main advantage of defining $V'$ as a Poisson point process comes from the following two properties: the number of points of $V'$ that lie in any region $A \subseteq [0, \sqrt{n}]^2$ of area $a$ has a Poisson distribution with mean $a$, and the numbers of points of $V'$ in disjoint regions of $[0, \sqrt{n}]^2$ are independent. Moreover, by conditioning $\mathcal{P}(n, r)$ upon the event $N = n$, we recover the original distribution of $G(n, r)$. Therefore, since $\Pr(N = n) = \Theta(1/\sqrt{n})$, any event holding in $\mathcal{P}(n, r)$ with probability at least $1 - o(n^{-1/2})$ holds a.a.s. in $G(n, r)$ as well. Now, let us come back to our problem. For sparser graphs we cannot guarantee that a.a.s. no vertex acquires large weight, but a lower bound of the same order holds.
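The factor $\Pr(N = n) = \Theta(1/\sqrt{n})$ is just Stirling's formula applied to the Poisson probability mass at its mean; a quick numerical check (our own illustration):

```python
from math import exp, lgamma, log, pi, sqrt

def poisson_pmf_at_mean(n):
    """Pr(N = n) for N ~ Poisson(n), i.e. e^{-n} n^n / n!, via log-gamma."""
    return exp(n * log(n) - n - lgamma(n + 1))

# By Stirling's formula, Pr(N = n) ~ 1 / sqrt(2*pi*n), so the ratio tends to 1.
for n in (10**2, 10**3, 10**4):
    print(poisson_pmf_at_mean(n) * sqrt(2 * pi * n))
```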
Theorem 2.4. Suppose that $r = r_n \ge c$ for some sufficiently large $c \in \mathbb{R}$. Then, a.a.s. $a_t(G(n, r)) = \Omega\!\left(\frac{n}{(r \lg r)^2}\right)$.
Proof. Since Theorem 2.3 applies to dense graphs, we may assume here that $r = O(\sqrt{\lg n}/\lg\lg n)$. Tessellate $[0, \sqrt{n}]^2$ into squares, each one of side length $(20 + o(1))r \lg r$. Consider the unit circle centered at the center of each square and call it the center circle. We say that a given square is dangerous if the corresponding center circle contains at least one vertex and the total number of vertices contained in the square is less than $1200(r \lg r)^2$. Consider any acquisition protocol. First, let us show that at least one vertex from each dangerous square must belong to the residual set. Let $u_0$ be a vertex inside the corresponding center circle. For a contradiction, suppose that the square has no vertex in the residual set. In particular, it means that $u_0$ moved the initial weight of 1 it started with onto some vertex outside the square through some path $(u_0, u_1, \ldots, u_k)$. Note that the Euclidean distance between $u_0$ and the border of the square (and so also $u_k$) is at least $(20 + o(1))r \lg r/2 - 1 \ge 9 r \lg r$, provided that $c$ is large enough, and so $k \ge 9 \lg r$.
Consider the vertex $u_\ell$ on this path, where $\ell = \lfloor 4\lg r \rfloor \ge 3\lg r$, provided $c$ is large enough; see Figure 2. Right after $u_{\ell-1}$ transferred all the weight onto $u_\ell$, $u_\ell$ had weight at least $2^{\ell} \ge r^3 > 1200(r\lg r)^2$, provided $c$ is large enough. As argued in the proof of the previous theorem, at some point of the process $u_\ell$ must have acquired weight $w$ satisfying $2^{\ell} \le w < 2^{\ell+1}$. Observation 2.1 implies that all vertices contributing to the weight of $w$ are at Euclidean distance at most $(\ell+1)r$ from $u_\ell$, and so inside the square

Figure 2. Residual sets contain at least one vertex from each dangerous square.
(as always, provided $c$ is large enough). However, dangerous squares contain fewer than $1200(r\lg r)^2$ vertices, and so we get a contradiction. The desired claim holds.

Showing that a.a.s. a positive fraction of the squares is dangerous is straightforward. In $\mathcal{P}(n, r)$, the probability that the center circle contains no vertex is $\exp(-\pi) \le 1/3$. On the other hand, the number of vertices falling into the square is a Poisson random variable $X$ with expectation $\mu \sim 400(r\lg r)^2$. By Chernoff's bound applied with $\varepsilon = e - 1$,
\[
\Pr\big(X \ge 1200(r\lg r)^2\big) \le \Pr\big(X \ge (1+\varepsilon)\mu\big) \le \exp\left(-\frac{\varepsilon^2\mu}{2+\varepsilon}\right) \le \exp(-\mu/2).
\]
Hence, we get
\[
\Pr(\text{a given square is dangerous}) \ge 1 - \exp(-\pi) - \exp(-\mu/2) \ge 1/3,
\]
provided $c$ is large enough. Hence the expected number of dangerous squares is at least $(1/3)(1/400 + o(1))\,n/(r\lg r)^2 \gg \lg n \to \infty$. Since the squares are disjoint, these events are independent, and so by Chernoff's bound, with probability at least $1 - o(n^{-1/2})$, the number of dangerous squares in $\mathcal{P}(n, r)$ is at least $(1/2500)\,n/(r\lg r)^2$. By the de-Poissonization argument mentioned before this proof, the number of dangerous squares in $G(n, r)$ is a.a.s. also at least $(1/2500)\,n/(r\lg r)^2$, and the proof of the theorem is finished.
The only range of $r = r_n$ not covered by the two theorems is when $r < c$ for $c$ as in Theorem 2.4. However, in that case a.a.s. there are $\Omega(n)$ isolated vertices, which clearly remain in the residual set. Moreover, if $r$ is such that $r \lg r > \sqrt{n}$, then the trivial lower bound $\Omega(1)$ applies. Hence, the lower bound in Theorem 1.1 holds for the whole range of $r$.

Upper Bound
As in the previous section, let us start with a simple, deterministic observation that turns out to be useful in showing an upper bound. Before we state it, let us define a family of rooted trees as follows. Let $\widehat{T}_0$ be a rooted tree consisting of a single vertex $v$ (the root of $\widehat{T}_0$). For $i \in \mathbb{N}$, we define $\widehat{T}_i$ recursively: the root $v$ of $\widehat{T}_i$ has $i$ children that are the roots of the trees $\widehat{T}_0, \widehat{T}_1, \ldots, \widehat{T}_{i-1}$; see Figure 3.
Clearly, $\widehat{T}_i$ has $2^i$ vertices and depth $i$. Moreover, it is straightforward to see that all vertices of $\widehat{T}_i$ can move their initial weight of 1 to the root $v$ (in particular, $a_t(\widehat{T}_i) = 1$): indeed, this clearly holds for $i = 0$, so suppose that it holds inductively up to $i - 1$. Then, since all children of the root of $\widehat{T}_i$ can send all their accumulated weight to the root of $\widehat{T}_i$ (starting from the smallest subtree), this also holds for $i$. This, in particular, shows that Observation 2.1 is tight.

Figure 3. The tree $\widehat{T}_i$.
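The recursion, the vertex count $2^i$, and the root-gathering protocol are easy to check with a few lines. This is a sketch of our own; representing a node by the list of its subtrees is an implementation choice, not from the paper:

```python
def build_T(i):
    """The tree T_i: its root has children that are roots of T_0, ..., T_{i-1}."""
    return [build_T(j) for j in range(i)]     # a node = list of its subtrees

def size(t):
    return 1 + sum(size(c) for c in t)

def depth(t):
    return 0 if not t else 1 + max(depth(c) for c in t)

def acquire(t):
    """Weight the root collects when subtrees are gathered smallest first."""
    w = 1
    for child in t:                           # children ordered T_0, ..., T_{i-1}
        cw = acquire(child)
        assert w >= cw                        # so the acquisition move is legal
        w += cw
    return w

t5 = build_T(5)
print(size(t5), depth(t5), acquire(t5))       # prints: 32 5 32
```

The assertion inside `acquire` confirms the key point of the induction: processing subtrees from smallest to largest keeps every move legal, so the root ends with all $2^i$ units of weight.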
As shown in the previous section, the main bottleneck that prevents us from moving a large weight to some vertex in $G(n, r)$ is that there are simply not enough vertices in the Euclidean neighbourhood of a vertex. If we want to match the lower bound, then the rooted trees induced by the acquisition protocol must be as deep as possible in order to access vertices that are, in the Euclidean sense, far away from the corresponding roots. It turns out that the trees $\widehat{T}_i$ from the family we just introduced are efficient from that perspective. However, we cannot guarantee that the vertex set of $G(n, r)$ can be partitioned in such a way that each set has some tree from the family as a spanning subgraph. Fortunately, it is easy to "trim" $\widehat{T}_i$ to get a tree on $n < 2^i$ vertices that can still shift all of its initial weight to the root.
Observation 3.1. For any $d \in \mathbb{N} \cup \{0\}$ and $n \le 2^d$, $\widehat{T}_d$ contains a rooted sub-tree $T$ on $n$ vertices such that $a_t(T) = 1$. Moreover, the number of vertices at distance $\ell$ ($0 \le \ell \le d$) from the root of $T$ is at most $\binom{d}{\ell}$.

Proof. In order to obtain the desired tree $T$ on $n$ vertices, we trim $\widehat{T}_d$ by cutting some of its branches (from largest to smallest, level by level). We may assume that $n \ge 2$; otherwise, the statement trivially holds.
Since we will be trimming the tree recursively, let us concentrate on $v$, the root of $\widehat{T}_d$, and the $d$ branches attached to it. Our goal is to get a tree rooted at $v$ that has $n \ge 2$ vertices. Let $k_0$ be the largest integer $k$ such that $2^{k+1} \le n$; that is, $k_0 = \lfloor \lg n \rfloor - 1$ (note that $k_0 \ge 0$ as $n \ge 2$, and that $k_0 \le d - 1$ as $n \le 2^d$). We leave the branches inducing the trees $\widehat{T}_0, \widehat{T}_1, \ldots, \widehat{T}_{k_0}$ untouched; together with $v$ they contain $2^{k_0+1}$ vertices. We trim the branches inducing the trees $\widehat{T}_{k_0+2}, \widehat{T}_{k_0+3}, \ldots, \widehat{T}_d$ completely (note that possibly $k_0 = d-1$, in which case we trim nothing). Finally, we would like to carefully trim the branch inducing the tree $\widehat{T}_{k_0+1}$ so that the number of vertices it contains is precisely $n - 2^{k_0+1}$. If $n - 2^{k_0+1}$ is equal to 0 or 1, then we trim the whole branch or leave just the root of this branch, respectively. Otherwise, we recursively trim the branch as above. It is straightforward to see that all vertices of $T$ can move their initial weight of 1 to the root of $T$ which, in particular, implies that $a_t(T) = 1$, thus proving the first part.
In order to show the second part, it is enough to prove the desired property for $\widehat{T}_d$ (since $T$ is a sub-tree of $\widehat{T}_d$). We prove it by (strong) induction on $d$; clearly, the statement holds for $d = 0$ and $\ell = 0$. Let $d_0 \in \mathbb{N}$ and suppose inductively that the property holds for all $d$ such that $0 \le d \le d_0 - 1$. The claim clearly holds for $\ell = 0$. We count the vertices at distance $\ell$ (for any $1 \le \ell \le d_0$) from the root $v$ by considering vertices at distance $\ell - 1$ from each child of $v$. By the recursive construction of $\widehat{T}_{d_0}$, we get that the number of vertices at distance $\ell$ from $v$ is at most $\sum_{j=\ell-1}^{d_0-1} \binom{j}{\ell-1} = \binom{d_0}{\ell}$ (this equality is well known and can be easily proven by induction). The proof of the observation is finished.
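The binomial level counts can also be checked directly: $\widehat{T}_d$ is exactly the well-known binomial tree, whose level $\ell$ has $\binom{d}{\ell}$ vertices. Another illustrative sketch of ours (self-contained, so it rebuilds the tree):

```python
from math import comb

def build_T(d):
    """The tree T_d: its root has children that are roots of T_0, ..., T_{d-1}."""
    return [build_T(j) for j in range(d)]

def level_counts(t, d):
    """How many vertices of T_d sit at each distance 0..d from the root."""
    counts = [0] * (d + 1)
    def walk(node, dist):
        counts[dist] += 1
        for c in node:
            walk(c, dist + 1)
    walk(t, 0)
    return counts

d = 6
print(level_counts(build_T(d), d))            # prints: [1, 6, 15, 20, 15, 6, 1]
print([comb(d, l) for l in range(d + 1)])     # the same binomial coefficients
```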
Before we are ready to state the next result, we need to introduce a few definitions. Let $c, \varepsilon \in (0, 1)$ be any constants, arbitrarily small. Suppose that we are given a function $r = r_n$ such that $r \lg r \le \sqrt{n}$ and $r \ge C$ for some large constant $C = C(c, \varepsilon)$ that will be determined soon. Let $k = \lceil \sqrt{n}/(c r \lg r) \rceil$ and tessellate $[0, \sqrt{n}]^2$ into $k^2$ large squares, each one of side length $x r \lg r$, where $x = \sqrt{n}/(k r \lg r)$. Clearly, $c/2 \le x \le c$ (the lower bound follows as $c r \lg r \le \sqrt{n}$) and $x \sim c$, provided $r \lg r = o(\sqrt{n})$. Now, let $\ell = 20\lceil x r \lg r/(20 c r) \rceil = 20\lceil x \lg r/(20c) \rceil$ and tessellate each large square into $\ell^2$ small squares, each one of side length $y r$, where $y = x \lg r/\ell$; see Figure 4. Clearly, $c/2 \le y \le c$ (the lower bound follows assuming that $C$ is large enough, which we may) and $y \sim c$, provided $r = r_n \to \infty$ as $n \to \infty$. We say that a small square is good if the number of vertices it contains is between $(1-\varepsilon)(yr)^2$ and $(1+\varepsilon)(yr)^2$; otherwise, it is bad. Moreover, we say that a large square is good if all small squares it contains are good and the following properties hold (otherwise, it is bad): (a) no vertex lies on the border of the large square nor on its two diagonals, (b) no two vertices lie on any line parallel to the base of the large square, (c) no two vertices lie on any line passing through the center of the large square. Now we are ready to state the following crucial result.

Theorem 3.2.
For any pair c, ε ∈ (0, 1) of constants, there exists a constant C = C(c, ε) such that the following two properties hold a.a.s. for G(n, r).
(i) All large squares are good, provided that $r \ge C\sqrt{\log n}$. (ii) The number of large squares that are bad is at most $n/(r^2 \lg^5 r)$, provided that $r \ge C$.
Proof. Properties (a)-(c) on the distribution of the vertices hold with probability 1 for all large squares. Hence, we need to concentrate on showing that small squares are good.
For part (i), consider any small square in $G(n, r)$. The number of vertices in such a square is a binomial random variable $X \sim \mathrm{Bin}(n, (yr)^2/n)$ with $\mathbb{E}[X] = (yr)^2$. It follows immediately from Chernoff's bound that the probability of the square being bad can be estimated as follows:
\[
\Pr\big(|X - \mathbb{E}[X]| \ge \varepsilon\,\mathbb{E}[X]\big) \le 2\exp\left(-\frac{\varepsilon^2 (yr)^2}{3}\right) \le 2\exp\left(-\frac{(1+o(1))\,\varepsilon^2 c^2 C^2 \log n}{3}\right) = o(1/n),
\]
provided that $C \ge \sqrt{6}/(c\varepsilon)$. Hence, since there are in total $O(n)$ small squares appearing in large squares, the expected number of bad small squares is $o(1)$, and the conclusion follows from the first moment method.
For part (ii), consider any small square in $\mathcal{P}(n, r)$. As before, let $X \sim \mathrm{Po}((yr)^2)$ be the random variable counting the number of vertices in the square. By Chernoff's bound, for the probability of the square being bad we have
\[
\Pr\big(|X - (yr)^2| \ge \varepsilon (yr)^2\big) \le 2\exp\left(-\frac{\varepsilon^2 (yr)^2}{3}\right) \le 2\exp\left(-\frac{\varepsilon^2 c^2 r^2}{12}\right).
\]
By a union bound, a given large square is bad with probability at most
\[
\ell^2 \cdot 2\exp\left(-\frac{\varepsilon^2 c^2 r^2}{12}\right) \le 8\lg^2 r\,\exp\left(-\frac{\varepsilon^2 c^2 r^2}{12}\right) \le \frac{1}{\lg^4 r};
\]
both inequalities hold provided $C$ is large enough. (Note that $\ell \le 20\lceil \lg r/20 \rceil \le 2\lg r$, provided $r \ge C$.) Now, the number of large squares that are bad can be stochastically bounded from above by the random variable $Y \sim \mathrm{Bin}(k^2, 1/\lg^4 r)$. By part (i), we may assume that, say, $r = O(\log n)$ and so, in particular, $r \lg r = o(\sqrt{n})$. Note that
\[
\mathbb{E}[Y] = \frac{k^2}{\lg^4 r} = \frac{(1+o(1))\,n}{x^2 (r\lg r)^2 \lg^4 r} \le \frac{n}{2 r^2 \lg^5 r},
\]
provided $C$ is large enough. On the other hand, note that, say, $\mathbb{E}[Y] = \Omega(n/\log^3 n)$. Hence, it follows immediately from Chernoff's bound that
\[
\Pr\left(Y \ge \frac{n}{r^2 \lg^5 r}\right) \le \Pr\big(Y \ge 2\,\mathbb{E}[Y]\big) \le \exp\big(-\Omega(\mathbb{E}[Y])\big) = \exp\big(-\Omega(n/\log^3 n)\big) = o(n^{-1/2}).
\]
By the de-Poissonization argument explained above, the desired property holds for $G(n, r)$ and the proof is finished.
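For concreteness, the tessellation parameters $k$, $x$, $\ell$, $y$ used throughout this section can be computed mechanically. A sketch of our own; the sample values of $n$, $r$, $c$ are arbitrary assumptions chosen so that $r \lg r \le \sqrt{n}$ and $r$ is large:

```python
from math import ceil, log2, sqrt

def tessellation(n, r, c):
    """Two-level tessellation of [0, sqrt(n)]^2: k^2 large squares of side
    x*r*lg(r), each cut into ell^2 small squares of side y*r."""
    k = ceil(sqrt(n) / (c * r * log2(r)))     # large squares per side
    x = sqrt(n) / (k * r * log2(r))           # so that k * x * r * lg r = sqrt(n)
    ell = 20 * ceil(x * log2(r) / (20 * c))   # small squares per large-square side
    y = x * log2(r) / ell                     # small-square side = y * r
    return k, x, ell, y

k, x, ell, y = tessellation(10**16, 2.0**22, 0.5)
print(k, ell, 0.25 <= x <= 0.5, 0.25 <= y <= 0.5)   # prints: 3 20 True True
```

As claimed in the text, both $x$ and $y$ land in $[c/2, c]$, and $\ell$ is divisible by 20.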
The next deterministic result shows that there exists an acquisition protocol that pushes the weight from all vertices of each good large square onto a single vertex.

Theorem 3.3. Fix $c = 1/10000$, $\varepsilon = 1/100$, $n \in \mathbb{N}$, and radius $r = r_n \ge C$ for some large enough constant $C \in \mathbb{R}$. Consider any distribution of vertices that makes a given large square $S$ good (with respect to $c$, $\varepsilon$, $r$, and $n$). Finally, let $G$ be the geometric graph induced by the vertices from $S$ with radius $r$. Then, $a_t(G) = 1$.
Before we prove this theorem, let us state the following corollary that follows immediately from Theorems 3.3 and 3.2.
Corollary 3.4. Suppose that $r = r_n$ is such that $r \lg r \le \sqrt{n}$ and $r \ge C$ for some sufficiently large $C \in \mathbb{R}$. Then, a.a.s. $a_t(G(n, r)) = O\!\left(\frac{n}{(r \lg r)^2}\right)$.
Proof. Let $c, \varepsilon$ be fixed as in Theorem 3.3 and let $C = C(c, \varepsilon)$ be the constant implied by Theorem 3.2. If $r \ge C\sqrt{\log n}$, then Theorem 3.2(i) implies that a.a.s. all large squares are good, and so by Theorem 3.3, a.a.s. $a_t(G(n, r)) \le k^2 = O\!\left(\frac{n}{(r\lg r)^2}\right)$.
On the other hand, if $r \ge C$, then Theorem 3.2(ii) implies that a.a.s. at most $n/(r^2 \lg^5 r)$ large squares are bad. Clearly, each bad large square can be tessellated into $O(\lg^2 r)$ squares of side length $r/\sqrt{2}$; any two vertices in such a square are adjacent, and so the graph $G$ induced by the vertices of any bad large square satisfies $a_t(G) = O(\lg^2 r)$. This time we get that a.a.s.
\[
a_t(G(n, r)) \le k^2 + \frac{n}{r^2 \lg^5 r} \cdot O(\lg^2 r) = O\!\left(\frac{n}{(r \lg r)^2}\right),
\]
and the proof of the corollary is finished.
The only ranges of r = r n not covered by Corollary 3.4 are when r < C for C as in the corollary or when r lg r > √ n. For the first case there is nothing to prove as the bound O(n) trivially holds. The latter case follows immediately by monotonicity of a t (G). Hence, it remains to prove Theorem 3.3.
Proof of Theorem 3.3. Split $S$ into four triangles using the two diagonals of $S$. (Note that by property (a) of the distribution of the vertices, no vertex lies on the border of any triangle.) By symmetry, we may concentrate on the bottom triangle: the base of the triangle has length $\ell(yr)$ and the height is $\ell(yr)/2$. Since $\ell$ is divisible by 2, the center of the large square is the corner of four small squares. Clearly, the number of small squares that are completely inside the triangle is $\ell^2/4 - \ell/2$ (the total area of the triangle is $\ell^2/4$ small squares, and there are $\ell$ small squares only partially contained in this area, contributing a total area of $\ell/2$); on the other hand, $\ell^2/4 + \ell/2$ of them cover the triangle. Hence, since all small squares are good, the number of vertices $z$ that lie in the triangle is at most
\[
z_+ := (1 + 2\varepsilon)(x r \lg r)^2/4,
\]
provided that $C$ is large enough. Similarly, we get that $z \ge z_- := (1 - 2\varepsilon)(x r \lg r)^2/4$. Let $d$ be the smallest integer such that $2^d \ge z$. Since $z_- \le z \le z_+$, it follows that $d = \lg z + O(1) = 2\lg r + 2\lg\lg r + O(1)$. Observation 3.1 implies that there exists a rooted sub-tree $T$ of $\widehat{T}_d$ on $z$ vertices with $a_t(T) = 1$. Our goal is to show that $T$ can be embedded on the set of vertices that belong to the triangle, with the root being the vertex closest (in Euclidean distance) to the apex of the triangle. If this can be done, then one can merge all the accumulated weights from the four triangles partitioning $S$ into one of them and finish the proof: indeed, as the Euclidean distance from the closest vertex to the apex of the triangle is at most $\sqrt{5}yr \le \sqrt{5}cr \le r/2$, the four roots induce a clique; see Figure 5.
Figure 5. On the left: there is a vertex in the triangle at distance at most $\sqrt{5}yr$ from the apex. On the right: in each triangle, we attempt to embed a tree that includes all vertices in the triangle. The four roots induce a clique, and so if such trees can be embedded, all weights in the square can be pushed onto a single vertex.
We divide the triangle into $\ell/20$ strips by introducing auxiliary lines $A_i$ ($i \in \{0, 1, \ldots, \ell/20\}$; recall that $\ell$ is divisible by 20), all of them parallel to the base of the triangle. $A_0$ is the line that passes through the apex of the triangle, $A_1$ is at distance $10yr$ from $A_0$, etc.; $A_{\ell/20}$ coincides with the base of the triangle. Note that there are exactly 10 strips of small squares between any two consecutive auxiliary lines $A_{j-1}$ and $A_j$. Any two points $a_1, a_2$ on the base of the triangle and a line $L$ parallel to the base induce an auxiliary region, a trapezoid with vertices $a_1$, $a_2$ and two vertices on $L$: the intersection of the line between the apex of the triangle and $a_1$ with $L$, and the intersection of the line between the apex of the triangle and $a_2$ with $L$, respectively. In particular, the triangle itself is a (degenerate) auxiliary region, induced by the two endpoints of the base of the triangle and $A_0$.
We will now give a recursive algorithm for embedding the tree $T$ on all $z$ vertices of the triangle. As already mentioned, we pick the vertex closest in Euclidean distance to the apex of the triangle and assign it to the root of $T$. Let $L_0$ be any line parallel to the base separating the vertex assigned to the root from the other vertices, which are not yet assigned to any vertex of $T$. (Note that by our assumption on the distribution of the vertices, no two vertices lie on any line parallel to the base.) This is the typical situation that we have to deal with, in a recursive fashion. Suppose thus that we are given a line $L_{i-1}$ parallel to the base such that the vertices above $L_{i-1}$ are already assigned to vertices in $T$, and the vertices below $L_{i-1}$ that belong to the auxiliary region $Q$ we currently deal with are not yet assigned to vertices in $T$. We will always keep the property that $Q$ contains exactly the number of vertices we need to assign to some part of the tree $T$; these vertices induce a family of rooted trees in $T$, with roots that are at graph distance $i$ from the root of $T$. Denote by $Q_i$ and $R_i$ the number of vertices that belong to $Q$ and, respectively, to the part of $Q$ above $A_i$; see Figure 6.

Figure 6. The number of vertices in the shaded region is $R_3$; the number of vertices in the trapezoid determined by $L_2$, the base of the triangle, and the two blue sides of the triangle associated with $Q$ is $Q_3$.
Let $a_1$ and $a_2$ be the two corners of $Q$ that belong to the base of the triangle. Let $b_1$ and $b_2$ be the intersection points of $A_i$ with the line going through the apex and $a_1$, and with the line going through the apex and $a_2$, respectively; see Figure 7. If the Euclidean distance between $b_1$ and $b_2$ is more than $r/3$, then we split $Q$ into two auxiliary regions (the first one induced by $b_1$ and some point $b$ on $A_i$, the other one induced by $b$ and $b_2$; in both cases the auxiliary line $L_{i-1}$ is used), where $b$ is chosen in such a way that the $Q_i$ vertices are partitioned into two families of rooted trees in $T$ as evenly as possible. Observe that it is possible to split $Q$ in such a way that both auxiliary sub-regions contain at least $Q_i/4$ vertices; indeed, one can order the family of rooted trees according to their sizes and then notice that adding one rooted tree to one of the auxiliary sub-regions obtained after splitting can increase the total number of vertices there by a multiplicative factor of at most 2. (Note that by property (c) of the distribution of the vertices, we can perform a split so that no vertex belongs to the border of any resulting auxiliary region.) We stop the algorithm prematurely if the Euclidean distance between $b_1$ and $b$ (or between $b$ and $b_2$) is less than $r/20$ or more than $r/3$ (Error 1 is reported). If everything goes well, we deal with each auxiliary region recursively (we update $Q_i$ and $R_i$, and all lines defining the auxiliary region). Now, we want to assign all roots from the family of rooted trees (recall that they are at level $i$ of $T$) to vertices of $Q$ above $A_i$. If there are more than $R_i$ vertices on level $i$ in $T$, then we stop the algorithm prematurely (Error 2 is reported). In fact, we typically only need to embed a small portion of the vertices of level $i$, but we nevertheless stop prematurely if $R_i$ is smaller than the total number of vertices at level $i$ in the tree.
Otherwise, we first assign all roots of the family of rooted trees we deal with. Then, we order the trees rooted at them according to their sizes (in non-decreasing order), and keep adding whole rooted trees, as long as the total number of vertices added is at most $R_i$ (see Figure 8). By the same argument as before, we are guaranteed that at least $R_i/2$ vertices are assigned to the corresponding vertices of $T$. Clearly, if $i = \ell/20$, we are able to fit all rooted trees, and so all $R_i$ (which is equal to $Q_i$ in this case) vertices are dealt with. Otherwise, that is, as long as $i < \ell/20$, we introduce any line $L_i$, parallel to the base, that separates the vertices of $Q$ that are assigned (above the line) from those that are still not assigned to any vertex in $T$ (below the line). (As usual, by property (b) of the distribution of the vertices, we can do it so that no vertex lies on $L_i$.) We continue recursively with the new auxiliary region below $L_i$ and the new family of rooted trees consisting of all the branches that are not assigned to any vertices; see Figure 8. Note that the line $L_i$ depends on $Q$, and different auxiliary regions corresponding to embedding vertices of $T$ of the same level might have different lines $L_i$. We will show below that these lines are all close to $A_i$.

Figure 7. If the Euclidean distance between $b_1$ and $b_2$ is more than $r/3$, we split the region into two regions.

Figure 8. We assign the rest of the vertices in $Q$ by embedding entire branches of the tree, as long as the number of vertices assigned is at most $R_i$ (in grey). The remaining branches become roots for the next iteration.
Finally, if at some point of this process two vertices in Q are assigned to two adjacent vertices in T that are at Euclidean distance more than r, then we clearly have to stop the algorithm prematurely (Error 3 is reported).
It remains to argue that we never stop the algorithm prematurely as this implies that T is embedded on the vertices inside the triangle. Let us deal first with Error 2, then with Error 3, leaving Error 1 for the end.
Error 2: level $i$ in $T$ contains more vertices than are available in $Q$ above $A_i$ (that is, more than $R_i$). First, let us observe that for $i \in \{1, 2, \ldots, 50\}$, the auxiliary line $A_i$ intersects the triangle so that the Euclidean distance between the two points where $A_i$ meets the sides of the triangle under consideration is $(20yr)i \le (20cr)i = ri/500 \le r/3$. Hence, splitting of auxiliary regions cannot happen during the first 50 rounds. On the other hand, for $i \in \{1, 2, \ldots, 50\}$ we have $(20yr)i \ge (10cr)i = ri/1000$. Let us then concentrate on any $i \in \{51, 52, \ldots, \ell/20\}$. We show, inductively, that when dealing with the line $A_i$, the two corresponding points $b_1$ and $b_2$ are at distance at least $r/20$. The claim is true for $A_{50}$ as argued above. Suppose then that the claim holds for $A_{i-1}$ for some $i \in \{51, 52, \ldots, \ell/20\}$. If $Q$ is split into two auxiliary regions, then the claim holds for $A_i$ unless Error 1 is reported. On the other hand, if no splitting is performed, then the Euclidean distance between the two corresponding points can only increase, and so the claim clearly holds for $A_i$. This implies, in particular, that $Q$ contains at least one small square, and thus $R_i \ge (1-\varepsilon)(yr)^2 \ge (1-\varepsilon)(cr/2)^2 \ge r^2 10^{-9}$. On the other hand, since $i \le \ell/20 \le \lg r/10$, we get from Observation 3.1 that the number of vertices on level $i$ in $T$ is at most
\[
\binom{d}{i} \le \binom{2\lg r + 2\lg\lg r + O(1)}{\lg r/10} \le \binom{3\lg r}{\lg r/10} \le (30e)^{\lg r/10} \le 2^{7\lg r/10} < r^2 10^{-9},
\]
provided that $C$ is large enough. Hence, this error never occurs.
Error 3 (two vertices assigned to adjacent vertices in $T$ are at distance more than $r$): It follows from the definition of $L_i$ that for any $i \in \{1, 2, \ldots, \ell/20\}$, $L_i$ lies above the auxiliary line $A_i$ ($L_0$ is exceptional and lies slightly below $A_0$). We are going to argue that $L_i$ is relatively close to $A_i$.
Claim: For any $i \in \{4, 5, \ldots, \ell/20\}$, $L_i$ lies below the auxiliary line $A_{i-4}$.
We will be done once the claim is proved, as it implies that we never connect by an edge vertices that are at Euclidean distance more than $50(yr) + r/3 \le 50cr + r/3 < r$. Indeed, vertices that need to be connected by an edge must lie in the part of $Q$ between $L_{i-1}$ and $A_i$; the Euclidean distance between $L_{i-1}$ and $A_i$ is at most $50(yr)$, and the length of the intersection of $A_i$ with $Q$ is at most $r/3$; see Figure 9.
Proof of the Claim: For a contradiction, suppose that there exists $i$ such that $L_i$ lies above $A_{i-4}$, and consider the smallest such $i$. Hence, $L_{i-1}$ lies below $A_{i-5}$. Let $Q_1$ be the part of $Q$ that lies between $L_{i-1}$ and $L_i$, and recall that $Q_1$ contains at least $R_i/2$ vertices. Similarly, let $Q_2$ be the part between $L_{i-1}$ and $A_i$, and recall that $Q_2$ contains precisely $R_i$ vertices. The fact that the area of $Q_1$ is at most one fifth of that of $Q_2$, while it contains at least half of the vertices, will lead us to the desired contradiction. Recall that the length of the intersection of $A_{i-4}$ with the triangle is $s \ge r/20 - 80(yr) \ge r/20 - r/125 \ge r/25$. Hence, the number of small squares covering $Q_1$ is at most $10(u+2)$, where $u = \lceil s/(yr) \rceil \ge 400$. The number of vertices in $Q_1$ is then at most $10(u+2)(1+\varepsilon)(yr)^2$, and so $R_i \le 20(u+2)(1+\varepsilon)(yr)^2$. On the other hand, the number of small squares that are completely contained in $Q_2 \setminus Q_1$ is at least $40(u-2)$, and so $R_i$ is at least $40(u-2)(1-\varepsilon)(yr)^2$. The contradiction follows, since $20(u+2)(1+\varepsilon) < 40(u-2)(1-\varepsilon)$ for any $u \ge 400$.
Figure 9. All vertices that need to be connected by an edge must be in the part of $Q$ between $L_{i-1}$ and $A_i$.
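The final inequality $20(u+2)(1+\varepsilon) < 40(u-2)(1-\varepsilon)$ for $u \ge 400$ is easy to confirm numerically; the sketch below fixes $\varepsilon = 0.01$ as an arbitrary small value (the function name is ours).

```python
def claim_contradiction(u: int, eps: float = 0.01) -> bool:
    """Compare the upper bound on R_i (from Q_1, which holds at least half
    the vertices) with the lower bound on R_i (from Q_2 \\ Q_1); the common
    factor (yr)^2 cancels."""
    upper = 20 * (u + 2) * (1 + eps)
    lower = 40 * (u - 2) * (1 - eps)
    return upper < lower

# Holds for every u >= 400 (in fact well before, for small eps).
print(all(claim_contradiction(u) for u in range(400, 10_000)))
```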
Error 1 (the Euclidean distance between $b_1$ and $b$, or between $b$ and $b_2$, is either less than $r/20$ or more than $r/3$): Suppose that we partition $Q$, containing $Q_i$ vertices, into $Q_1$ and $Q_2$, where $Q_1$ is the part of $Q$ induced by $b_1$ and $b$, all the way down to $A_{\ell/20}$. Recall that $Q_1$ contains at least $Q_i/4$ vertices, and that the Euclidean distance between $b_1$ and $b_2$ is more than $r/3$ (since we performed splitting).

Figure 10. On the left: definitions of the points and regions used in Error 1. On the right: illustration of the squares in the case when Error 1 occurs because the Euclidean distance between $b_1$ and $b$ is less than $r/20$.
Suppose that Error 1 occurs because the Euclidean distance between $b_1$ and $b$ is less than $r/20$. (Exactly the same argument applies to the case when the Euclidean distance between $b$ and $b_2$ is less than $r/20$.) Let $d_1$, $d$, and $d_2$ be the three points of intersection of the line $L_{i-1}$ with the lines going from the apex of the triangle through $b_1$, $b$, and $b_2$, respectively; see Figure 10. Note that the Euclidean distance between $d_1$ and $d$ is less than $r/20$ and, since by the claim $L_{i-1}$ lies below $A_{i-5}$, the Euclidean distance between $d_1$ and $d_2$, denoted by $s$, satisfies $s > r/3 - 100(yr) \ge 3r/10$. As the corresponding triangles are similar, the length of the intersection of each horizontal line between $L_{i-1}$ and $A_{\ell/20}$ with $Q_1$ is at most a factor $(r/20)/(r/3) = 3/20$ of the total length of the intersection of that line with the triangle. Hence, the area of $Q_1$ is at most a factor $3/20$ of the area of $Q$, which will be denoted by $A$.
Arguing as in the previous error, the area of the small squares completely contained in $Q$ is at least $A \cdot \frac{u-2}{u} \cdot \frac{10}{11} \ge 0.9A$, where $u = \lceil s/(yr) \rceil \ge \lceil (3r/10)/(yr) \rceil \ge 3000$. Indeed, since $L_{i-1}$ might cross small squares, the first row of small squares that intersects $Q$ might be completely lost, giving the additional factor of $10/11$ (note that there are at least 10 complete rows between $A_{i-1}$ and $A_i$). It follows that $Q_i \ge 0.9A(1-\varepsilon) > 0.89A$. On the other hand, $Q_1$ contains at least $Q_i/4 > 0.22A$ vertices, although its area is at most $(3/20)A = 0.15A$; since each small square contains at most $(1+\varepsilon)(yr)^2$ vertices, this yields the desired contradiction for $\varepsilon$ small enough.
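The two numerical steps here, $\frac{u-2}{u} \cdot \frac{10}{11} \ge 0.9$ for $u \ge 3000$ and $0.9(1-\varepsilon) > 0.89$ for small $\varepsilon$, can also be confirmed directly; in the sketch below (the function name is ours) $\varepsilon = 0.01$ is an arbitrary sufficiently small choice.

```python
def area_fraction(u: int) -> float:
    """Fraction of the area of Q covered by complete small squares:
    a factor (u-2)/u for the columns and 10/11 for the possibly lost
    first row crossed by L_{i-1}."""
    return (u - 2) / u * 10 / 11

eps = 0.01  # any sufficiently small eps works here
# area_fraction is increasing in u, so checking u = 3000 covers all u >= 3000.
print(area_fraction(3000) >= 0.9 and 0.9 * (1 - eps) > 0.89)
```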
Finally, let us note that Error 1 cannot occur because the Euclidean distance between $b_1$ and $b$ is larger than $r/3$ (provided that the distances between $b_1$ and $b$, as well as between $b$ and $b_2$, are at least $r/20$). Since we consider the smallest $i$ for which such an error occurs, the length of the intersection of $A_i$ with $Q$ is at most $r/3 + 20(yr)$, and so the Euclidean distance between $b_1$ and $b$ is at most $r/3 + 20(yr) - r/20 < r/3$. The same argument shows that the Euclidean distance between $b$ and $b_2$ cannot be larger than $r/3$.

Concluding remarks
The proof of the lower bound can easily be generalized to show that, for any fixed dimension $d$ and sufficiently large radius $r$, $a_t(G(n,r)) = \Omega(n/(r \lg r)^d)$. For $d = 1$, it is also easy to get the matching upper bound $a_t(G(n,r)) = O(n/(r \lg r))$. It is natural to conjecture that for $d \ge 3$ the proof of the upper bound can also be adapted to show $a_t(G(n,r)) = O(n/(r \lg r)^d)$, but in order not to make the paper too technical, we opted not to pursue this approach further.
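For readers who wish to experiment, any maximal sequence of total acquisition moves gives an upper bound on $a_t(G(n,r))$. The sketch below (function and variable names are ours, and the greedy rule is only a heuristic, not the optimal protocol of the paper) samples $G(n,r)$ with points u.a.r. in $[0,\sqrt{n}]^2$ and runs greedy moves, lightest vertices first, until none remains; the residual set it returns is necessarily independent.

```python
import math
import random

def greedy_acquisition(n, r, seed=0):
    """Sample G(n, r) and run a greedy total acquisition protocol until no
    move remains. Returns the points, adjacency lists, and final weights."""
    rng = random.Random(seed)
    side = math.sqrt(n)
    pts = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]

    # Two vertices are adjacent iff their Euclidean distance is at most r.
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = pts[i][0] - pts[j][0], pts[i][1] - pts[j][1]
            if dx * dx + dy * dy <= r * r:
                adj[i].append(j)
                adj[j].append(i)

    w = [1] * n
    moved = True
    while moved:
        moved = False
        # Process light vertices first; a legal move sends all of u's weight
        # to a neighbour v with w[v] >= w[u] > 0 ("smaller to larger").
        for u in sorted(range(n), key=lambda v: w[v]):
            if w[u] == 0:
                continue
            v = max((x for x in adj[u] if w[x] >= w[u]),
                    key=lambda x: w[x], default=None)
            if v is not None:
                w[v] += w[u]
                w[u] = 0
                moved = True
    return pts, adj, w

pts, adj, w = greedy_acquisition(200, 2.0, seed=1)
residual = [v for v in range(200) if w[v] > 0]
print(len(residual))  # size of the residual set found by this greedy protocol
```

Each move zeroes one vertex of positive weight, so the protocol terminates after at most $n-1$ moves; when it stops, no two adjacent vertices both retain positive weight, since otherwise the lighter one could still move.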