Connection intervals in multi-scale infrastructure-augmented dynamic networks

Abstract. We consider a hybrid spatial communication system in which mobile nodes can connect to static sinks via a bounded number of intermediate relaying hops. We describe the distribution of the connection intervals of a typical mobile node, i.e., the intervals of uninterrupted connection to the family of sinks. This is achieved in a limit of many hops, sparse sinks, and growing time horizons. We identify three regimes reflecting different degrees of sink density: (1) a regime of dense sinks, in which the limit is deterministic and given as an expectation with respect to percolation clusters, (2) a regime of sparse sinks, in which the limit depends on a random number of reachable sinks, and (3) an intermediate critical regime.


Introduction
Starting with the landmark paper [5] by Gilbert in the early 1960s, stochastic geometry has been employed to model and analyze spatial communication systems in which the network nodes directly exchange data with other nodes in their vicinity. In the absence of any refined information about the spatial locations of nodes, the null model is that they are scattered entirely at random in space, i.e., that they form a homogeneous Poisson point process, where the single scalar parameter represents the expected number of vertices per unit volume. Concerning the communication structure, the simplest model is the Gilbert graph, where connections are represented by links between any pair of nodes within a certain maximal distance.
Basic questions about the connectivity of such peer-to-peer networks provide a key motivation for fruitful research in the realm of continuum percolation. At the center of this field stands the percolation phase transition: if the intensity of network participants is sufficiently high, then a positive proportion of all nodes forms a giant communicating cluster [11].
However, only a small set of use cases, such as sensor networks or disaster-rescue ad-hoc networks, relies on peer-to-peer networks in their purest sense. In the bulk of applications, peer-to-peer communication appears as an extension of more traditional cellular networks, forming a variety of hybrid systems [7]. Such systems have the potential to mitigate many of the problems of the pure systems, such as delay, jitter, routing, or operational control.
Another essential aspect is mobility. The vast majority of the available literature on stochastic models for spatial communication networks investigates static systems. However, already in the landmark paper [6], the impact of mobility on the capacity of communication networks was evaluated in an information-theoretic context. These findings have inspired subsequent studies of spatial random networks with mobility, and we refer the reader to Díaz et al. [2] and Döring et al. [3] for an overview of this area.
When designing and evaluating hybrid communication networks, arguably the most basic network characteristic is the total connection time, i.e., the overall time that a typical network node is connected to some infrastructure. The key achievement of our earlier work [7] is to describe the asymptotic behavior of this quantity over long time horizons, many hops, and sparse infrastructure nodes. However, in real communication networks, maximizing the total connection time does not necessarily lead to networks offering an acceptable quality of service. Indeed, if the connection times are highly fragmented over the entire time horizon, then it is not possible to offer the typical node a large coherent block of uninterrupted service, and the system faces a substantial overhead caused by the cost of frequently reestablishing lost connections to the typical node. Therefore, in the present work, we move beyond the total connection time and provide more refined descriptions of the connection intervals, i.e., the time intervals during which the typical mobile node is guaranteed uninterrupted connection. In particular, this distribution can be used to answer questions of the following form.
1. What is the proportion of time that a typical mobile node is guaranteed uninterrupted communication of at least a given duration?
2. What is the number of reconnections of a typical mobile node in the hybrid system?
We stress that also conceptually, on the level of mathematical proofs, we encounter additional analytical hurdles when proceeding from the total connection time in Hirsch et al. [7] to the connection intervals in the present work. First, we need to deal with more complex correlations between two different time instances. Whereas in Hirsch et al. [7] it was sufficient, for controlling the correlations, to understand the configuration of the network at two specific time instances, this is no longer true. Indeed, the connection intervals can extend over substantial amounts of time, and we need information on the network during the entire time interval. Second, the connection intervals are a continuous quantity, which requires us to be aware of the network connectivity at uncountably many time instances. Hence, to deal with this problem, we need to introduce additional arguments to bound the error incurred by discretizing time.
The rest of the manuscript is organized as follows. In Section 2, we introduce the hybrid model precisely and state Theorem 2 as the main result on the weak convergence of the connection-interval measure under three different coupled limits. We also present a simulation study to illustrate how the results can be applied to the design of wireless networks. In Section 3, we outline the proof of Theorem 2 and establish a number of supporting results that feature several approximations. Finally, Section 4 contains the detailed proofs of the supporting results.

System model
In this manuscript, we study an infrastructure-enhanced model for a wireless communication network in R^d, observed over a time horizon [0, T]. More precisely, the infrastructure nodes, or sinks, form a homogeneous Poisson point process Y = {Y_j}_{j≥1} with some intensity λ_S > 0. Furthermore, at time zero, the mobile nodes X(0) = {X_i(0)}_{i≥1} form a homogeneous Poisson point process with intensity λ > 0. For the mobility, we assume that nodes sequentially choose independent random waypoints according to some isotropic probability measure κ(dv) and jump directly to them after exponentially distributed waiting times. This defines a mobility model for all times t ∈ R, and X(t) = {X_i(t)}_{i≥1} is a homogeneous Poisson point process with the same intensity for all t ∈ R. We assume that the trace of the coordinate-covariance matrix associated with random vectors from κ(dv) equals d. This normalization will later ensure convergence to a standard Brownian motion.
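The waypoint dynamics described above can be sketched in a few lines. The following is a minimal illustration (not the authors' code), assuming d = 2 and, as one concrete choice of the isotropic measure κ(dv), jumps of fixed length in a uniformly random direction; with jump length √d, the covariance-trace normalization from the text holds.

```python
import numpy as np

def simulate_mobility(t_max, d=2, rate=1.0, jump_len=None, rng=None):
    """Piecewise-constant trajectory of one node: wait an Exp(rate) time,
    then jump by an isotropic vector of fixed length.  With jump_len = sqrt(d),
    the trace of the jump covariance matrix equals d (the normalization in the
    text).  Returns the jump times and the position after each jump."""
    rng = np.random.default_rng() if rng is None else rng
    jump_len = np.sqrt(d) if jump_len is None else jump_len
    times, positions = [0.0], [np.zeros(d)]
    t = rng.exponential(1.0 / rate)
    while t <= t_max:
        v = rng.normal(size=d)
        v *= jump_len / np.linalg.norm(v)   # uniform direction, fixed length
        times.append(t)
        positions.append(positions[-1] + v)
        t += rng.exponential(1.0 / rate)
    return np.array(times), np.array(positions)

def position_at(times, positions, t):
    """X_0(t): the position after the last jump at or before time t."""
    return positions[np.searchsorted(times, t, side="right") - 1]
```

The trajectory is piecewise constant between jumps, matching the "jump directly to the waypoint" dynamics rather than continuous travel.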
We work with a simple distance-based connection model, where nodes and sinks communicate directly if they are closer than a certain fixed communication radius. By the scaling properties of the Poisson point process, we may henceforth fix this distance to be equal to 1. Moreover, the nodes can also communicate among themselves at this distance and may forward data over several relaying hops. Figure 1 (left) illustrates such a network without any sinks. Thus, a k-hop connection with a sink can be established even if the latter is outside the range of direct connectivity.
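For a finite configuration, deciding whether a node reaches a sink in at most k hops amounts to a breadth-first search in the Gilbert graph. The sketch below is our illustration of this connection rule (not part of the model definition), using the unit communication radius from the text.

```python
import numpy as np
from collections import deque

def khop_connected(nodes, source, sink, k, radius=1.0):
    """True if `source` reaches `sink` in at most k hops, where consecutive
    hops (over relays drawn from `nodes`) must span at most `radius`.
    Breadth-first search over the Gilbert graph on {source} + nodes + {sink}."""
    pts = np.vstack([source, nodes, sink])
    n = len(pts)
    dist2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    adj = dist2 <= radius ** 2            # adjacency of the Gilbert graph
    hops = {0: 0}                         # index 0 = source, n - 1 = sink
    queue = deque([0])
    while queue:
        u = queue.popleft()
        if hops[u] >= k:                  # cannot extend the path further
            continue
        for v in np.nonzero(adj[u])[0]:
            if v not in hops:
                hops[v] = hops[u] + 1
                if v == n - 1:
                    return True
                queue.append(v)
    return False
```

A direct source-sink link counts as one hop, so k = 1 recovers the purely infrastructure-based model without relaying.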

Connection intervals
In this work, we go beyond the setting of empirical k-hop connection times as studied in Hirsch et al. [7] and investigate the connection intervals. More precisely, we let I(t, S) denote the length of the connected component of a set S ⊆ R containing a time point t ∈ R, with I(t, S) := 0 if t ∉ S. We will be interested in the specific case where S represents the set of all times at which a typical moving node X_0(·), i.e., an additional mobile node started at the origin and moving independently via the same mobility scheme, is k-hop connected to some sink. Hence, if {U_i}_{i≥1} are iid exponentially distributed waiting times and {V_i}_{i≥1} are iid jump vectors distributed according to κ(dv), then we can express the position of the typical node at time t as X_0(t) = X_0(0) + Σ_{i: Σ_{j≤i} U_j ≤ t} V_i. Note that we may add this extra point due to the Slivnyak theorem, and after applying mobility to this point as well as to the other points, it remains typical at every instant. More precisely, N^k_j denotes the set of all times when the typical node connects to the sink Y_j in at most k hops using the nodes in X. Then, N^k := ∪_{j≥1} N^k_j is the set of times when it connects to some sink. In particular, this definition allows for hand-overs between different sinks. Writing δ for the Dirac measure, the central objective of this work is the k-hop connection-interval measure τ_T(dℓ, dt) := T^{-1} ∫_0^T δ_{(I(s, N^k), s/T)}(dℓ, dt) ds, which is a random probability measure on [0, ∞) × [0, 1]. Here, we equip the space of probability measures on [0, ∞) × [0, 1] with the Borel sigma-algebra of weak convergence. To see that the connection-interval measure is a well-defined random measure, we note that for any measurable A ⊆ (0, ∞) and B ⊆ [0, 1], we can represent τ_T(A, B) as T^{-1} Σ_{i∈Z} |I_i ∩ TB| 1{|I_i| ∈ A}, where {I_i}_{i∈Z} is the sequence of connection intervals defined by N^k. In order to highlight the richness of information encoded in the connection-interval measure, we now describe four key network statistics derived from it.
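To make these definitions concrete: if N^k ∩ [0, T] is stored as a list of disjoint intervals, then I(t, ·) and a grid approximation of τ_T can be computed as below. This is a hypothetical helper for illustration only, not part of the model.

```python
def interval_length_at(t, intervals):
    """I(t, S) for S a union of disjoint closed intervals [(a, b), ...]:
    length of the component of S containing t, or 0 if t lies in no interval."""
    for a, b in intervals:
        if a <= t <= b:
            return b - a
    return 0.0

def tau_atoms(intervals, T, n_grid=10_000):
    """Grid approximation of the connection-interval measure tau_T:
    atoms (I(s, S), s/T), each with mass 1/n_grid, for s on a grid in [0, T]."""
    return [(interval_length_at((i + 0.5) * T / n_grid, intervals),
             (i + 0.5) / n_grid) for i in range(n_grid)]
```

The atoms at ℓ = 0 correspond to disconnected times, so τ_T is indeed a probability measure on [0, ∞) × [0, 1].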
Example 1 (Network statistics).
1. f_1(ℓ, t) := 1{ℓ > 0}: Then, τ_T(f_1) is the time-averaged total connection time of the typical node. Hence, we recover the setting from Hirsch et al. [7].
2. f_2(ℓ, t) := ℓ: As elucidated in Section 1, connection times cannot be used effectively if they are highly fragmented over the time horizon. One approach is to use f(ℓ, t) = 1{ℓ ≥ t_min}, i.e., to discard connection times contained in intervals shorter than a minimum duration t_min. However, this hard threshold might cause undesirable threshold phenomena tied to the specific choice of t_min. Hence, it may be more desirable to rely on a soft weighting of the form f_2(ℓ, t) = ℓ, where connection times in longer intervals receive a higher weight. Then, the network characteristic τ_T(f_2) takes into account that a network should not only offer a high amount of connectivity on average but also guarantee connections that are uninterrupted for a substantial time.
3. f_3(ℓ, t) := 1{ℓ > 0}/ℓ: Introducing the ℓ in the denominator turns τ_T(f_3) into a weighted version of the connection-interval length measure. Indeed, since s ↦ I(s, N^k) is constant on each uninterrupted connection interval, any such interval that is entirely contained in [0, T] contributes a value of 1 to Tτ_T(f_3). Note that the first and last connection intervals intersecting [0, T] are possibly counted only partially. Hence, Tτ_T(f_3) can be interpreted as the number of reconnections that are needed within the time horizon. For network operators, this characteristic is of high interest since each reconnection requires additional resources.
4. f_4(ℓ, t) := 1{ℓ > 0} f(Tt): Then, Tτ_T(f_4) = ∫_{[0,T] ∩ N^k} f(s) ds can be used to analyze fine properties of a uniformly chosen time within the set of connection times.
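The statistics of Example 1 are one-line functionals of the measure. The sketch below (our illustration, with f_1, f_2, f_3 as in the example and the same interval representation of N^k as above) evaluates τ_T(f) on a grid.

```python
def tau_T_of_f(f, intervals, T, n_grid=10_000):
    """Grid approximation of tau_T(f) = (1/T) * int_0^T f(I(s, S), s/T) ds,
    where S is given as a union of disjoint closed intervals [(a, b), ...]."""
    def I(t):  # length of the component of S containing t (0 outside S)
        return next((b - a for a, b in intervals if a <= t <= b), 0.0)
    return sum(f(I((i + 0.5) * T / n_grid), (i + 0.5) / n_grid)
               for i in range(n_grid)) / n_grid

f1 = lambda l, t: float(l > 0)               # fraction of connected time
f2 = lambda l, t: l                          # soft length weighting
f3 = lambda l, t: 1.0 / l if l > 0 else 0.0  # T * tau_T(f3) ~ #reconnections
```

For two connection intervals fully inside [0, T], T·τ_T(f_3) evaluates to 2, one unit per reconnection, as described in item 3.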

Asymptotic k-hop connection-interval measure
We analyze the connection measure τ_T asymptotically over a growing time horizon T → ∞ when, simultaneously, the admissible number of hops k and the sink intensity λ_S scale with T. More precisely, λ_c denotes the critical intensity for percolation of the node process. For λ > λ_c we write C_∞ = C_∞^λ for the unique infinite connected component in a Poisson point process with intensity λ. We will rely on a key finding from continuum first-passage percolation, namely that above λ_c, the number of hops needed to travel inside the infinite connected component of nodes grows linearly in the Euclidean distance; see Yao et al. [14]. More precisely, there exists a stretch factor μ > 0 such that almost surely T(x, y)/|x − y| → μ as |x − y| → ∞, where T(x, y) denotes the smallest number of hops that are needed to connect q(x) and q(y), the points in C_∞ that are closest to x and y in Euclidean distance. Henceforth, we assume that k → ∞ and λ_S → 0 such that, for some fixed n_S > 0,

λ_S |B_{k/μ}(o)| → n_S, (1)

where B_r(x) denotes the Euclidean ball of radius r > 0 centered at x ∈ R^d. Note that (1) may be interpreted as the expected number of sinks that are within k-hop range of a typical node at the origin. Indeed, since the stretch factor converts the k-hop distance into Euclidean distance, any such sink is contained in the ball B_{k/μ}(o). Thus, the expected number of sinks is λ_S |B_{k/μ}(o)|. To work out the averaging potential of mobility cleanly, we focus on the long-time asymptotics, i.e., limit statements for τ_T(dℓ, dt) as T tends to infinity. As a first step, one could let k and λ_S be fixed and develop an ergodicity argument to show that the long-time average coincides with the expected value of the connection interval at time point 0. However, this expected value would still depend sensitively on both parameters k and λ_S in a complex manner, and therefore it is difficult for engineers to extract tangible insights from such a statement.
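The quantity in the scaling (1) is easy to evaluate numerically. The helper below (our illustration) computes λ_S·|B_{k/μ}(o)| for given parameters, using the volume of the d-dimensional unit ball.

```python
import math

def expected_in_range_sinks(lam_S, k, mu, d=2):
    """n_S = lam_S * |B_{k/mu}(o)|: the expected number of sinks within k-hop
    range, once the stretch factor mu converts hops into Euclidean distance
    (the interpretation of the scaling (1) given in the text)."""
    unit_ball = math.pi ** (d / 2) / math.gamma(d / 2 + 1)  # |B_1(o)| in R^d
    return lam_S * unit_ball * (k / mu) ** d
```

In particular, doubling the admissible number of hops multiplies the expected number of in-range sinks by 2^d.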
In contrast, for many applications such as file transfer or messaging, the number of admissible hops can be very large, corresponding to a setting where k → ∞. For such systems, it is important to understand how they behave over a long time horizon, i.e., as T → ∞. This motivates the mathematical investigation of limiting regimes where both k and T tend to infinity, which is the topic of our studies. We stress that real systems never have an infinite number of admissible hops, nor are they considered over an infinite time horizon. However, our results are still valuable for engineers since they provide rigorous approximations for the behavior of networks where the admissible hops and the time horizon are both finite but very large.
More precisely, we need to take into account the sink densities in relation to the considered time horizon and investigate scalings of the form (2), for some parameter α > 0 governing the sink density. In particular, this scaling, together with the scaling (1), encodes a joint limit in which k tends to infinity together with T, leading to limiting objects that can be described in terms of percolation clusters.
In Theorem 2, we identify the connection measure after the scaling. To that end, let N^∞ := {t : X_0(t) ↭_t ∞} be the set of all times when the typical node is part of the infinite component of nodes. Similarly, N_o^∞ := {t : o ↭_t ∞} denotes the set of all times when the static origin is part of the infinite component of nodes. Further, we write Y'(A) for the number of points of a point process Y' in the measurable set A ⊆ R^d.

Theorem 2 (Asymptotic weighted k-hop connection-interval measure). Let λ > λ_c and assume the multi-scale regime encoded by (1) and (2).
Dense sinks. If α < d/2, then, as T → ∞, τ_T converges to the deterministic limit determined by I_o(N) := I(0, N_o^{1,∞} ∪ ... ∪ N_o^{N,∞}), with N an independent Poisson random variable with intensity n_S and (N_o^{j,∞})_{j≥1} iid copies of N_o^∞.

Sparse sinks. If α > d/2, then, as T → ∞, τ_T converges to a random limit, with all definitions as in the dense case.
Critical density. If α = d/2, then, as T → ∞, τ_T converges to a limit expressed in terms of n'_S := (n_S/|B_1(o)|)^{1/d}, a unit-intensity homogeneous Poisson point process Y', and a standard Brownian motion W_t.
We note that convergence in distribution of random measures is defined by the convergence of integrals with respect to bounded continuous test functions. Hence, the statistics described in items (2) and (3) of Example 1 should be truncated at some large value M > 0. In practice, when relying on such statistics as metrics for network performance, the truncation at large M is of little concern.
We conclude this section by expounding on how the asymptotic results presented in Theorem 2 can be applied in the design and analysis of wireless networks. To that end, we concentrate on the dense case and present a simulation study describing the dependence of E[f(I_o(N))] on the expected number of in-range sinks n_S for the three characteristics f_1, f_2, f_3 discussed in Example 1. Before giving the precise parameters, we clarify the purpose of the simulation study. Computing network characteristics τ_T(f) directly via Monte Carlo simulation would be highly time-consuming, both in terms of implementation and run-time costs. Indeed, to implement the model, we need to simulate one mobile and one static point process, and we need to track the precise intervals during which the typical moving node connects to one of the infrastructure nodes. This can be highly time-consuming, especially if the network is large and the simulation runs over a long time horizon. With our simulation study, we illustrate how our main result reduces these simulation efforts dramatically, since we no longer need to simulate the infrastructure and only need to consider a fixed time horizon.
More precisely, the network nodes form a homogeneous Poisson point process with intensity λ = 150 in a 5 × 5 sampling window, thus giving rise to a communication network with an expected number of 3,750 nodes; the communication radius is set to 0.1. Although not needed for the computation of E[f(I_o(N))], for completeness we note that the critical intensity for percolation is λ_c ≈ 143.7 [12] and that the stretch factor is μ ≈ 8.1 (own simulations). Observe that on the right-hand side in Theorem 2 the parameters λ_S, k, T are sent to the limit and therefore do not appear in the simulation. Figure 1 (left) shows a realization of this system.
Each node jumps according to a sequence of rate-1 exponential waiting times, and the jump locations are selected uniformly at random at distance 0.005. Figure 1 (right) illustrates the changes of the network topology under these dynamics. Now, we generate 1,000 realizations of this model and evaluate the quantity E[f(I_o(N))] appearing in Theorem 2 for the three different choices of the test function. Figure 2 illustrates the Monte Carlo estimates for E[f_i(I_o(N))] as a function of E[N] = n_S, the expected number of sinks in range.
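The structure of such a Monte Carlo estimate of E[f(I_o(N))] can be sketched as follows. This is a simplified stand-in, not the authors' code: the percolation-time sets N_o^{j,∞} are replaced by toy alternating-renewal on/off sets, N is drawn as Poisson(n_S), and I_o(N) is taken as the length of the component of the union containing time 0.

```python
import numpy as np

def component_length_at_zero(intervals):
    """Length of the connected component of a union of intervals containing 0."""
    merged = []
    for a, b in sorted(intervals):
        if merged and a <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], b)   # overlap: extend component
        else:
            merged.append([a, b])
    return next((b - a for a, b in merged if a <= 0.0 <= b), 0.0)

def onoff_set(window, p_on=0.6, mean_len=1.0, rng=None):
    """Toy stand-in for one copy of N_o^infty: alternating exponential
    on/off sojourns on [-window, window]."""
    rng = np.random.default_rng() if rng is None else rng
    t, on, out = -window, rng.random() < p_on, []
    while t < window:
        length = rng.exponential(mean_len)
        if on:
            out.append((t, min(t + length, window)))
        t, on = t + length, not on
    return out

def estimate(f, n_S, n_samples=1000, window=50.0, seed=0):
    """Monte Carlo estimate of E[f(I_o(N))]: draw N ~ Poisson(n_S) independent
    on/off sets and evaluate f at the component of their union containing 0."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_samples):
        ivs = [iv for _ in range(rng.poisson(n_S))
               for iv in onoff_set(window, rng=rng)]
        vals.append(f(component_length_at_zero(ivs)))
    return float(np.mean(vals))
```

Note how the infrastructure and the long time horizon disappear from the simulation: only n_S and a fixed observation window around time 0 remain, which is exactly the reduction promised by Theorem 2.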
Both f_1 and f_2 lead to increasing functions of n_S approaching a saturation level as n_S becomes large. We note that with respect to f_1, the system is already close to a saturation value of approximately 0.6 for n_S ≥ 2. That is, the typical node is connected approximately 60% of the time. We note that this saturation value corresponds to the percolation probability (see, for example, Meester and Roy [11]) at the considered intensity: if the typical node cannot form a connection to other devices outside a local neighborhood, then increasing the sink density does not help to improve the total connectivity.
Concerning f_2, increasing n_S may improve the system quality substantially, even beyond n_S = 2. We hypothesize that this is due to the following effect: although beyond n_S = 2 adding further sinks may not improve the total connection time substantially, a few additional sinks may still suffice to merge several smaller connection intervals into a very long one. This boosts the weighted connection time massively.
Although the function f_3(ℓ, t) is neither increasing nor decreasing in ℓ, Figure 2 still illustrates that the characteristic increases rapidly for small values of n_S before reaching a plateau at around 0.075 for n_S ≥ 2. Loosely speaking, this means that if the typical node performs a jump on average once every second, then we see 0.075 reconnections per second. Our interpretation is as follows: for low n_S, the number of reconnections is small simply because the typical device is in any event disconnected for most of the time. Then, as we increase n_S, the number of chances for the typical device to connect surges rapidly. Conceptually, there is also a tendency in the opposite direction, since an increase of n_S eliminates some reconnections by merging two smaller intervals into a larger one. However, the plot suggests that the effect coming from the addition of new intervals is much stronger.
As a global conclusion of the considered simulation study, we may say that increasing n_S past a very high value does not lead to further connectivity gains. However, we stress that the impact of increasing the number of admissible hops k is a bit more subtle: then, also N^k becomes larger, so that this measure may boost connectivity even in situations where increasing n_S does not help.

Outline
The goal of Theorem 2 is to prove that the random measure τ_T converges in distribution to a suitable random measure Λ on [0, ∞) × [0, 1]. As a preliminary step, we reduce this task to proving this convergence when integrating with respect to product-form test functions. More precisely, we consider integrals of the form τ_T(gh) for continuous test functions g : [0, ∞) → [0, 1] and h : [0, 1] → [0, 1]. The following auxiliary result then shows that it suffices to prove that the random variables τ_T(gh) converge to Λ(gh) in distribution as T → ∞.

Lemma 3 (Product-form test functions). Let Λ be a random probability measure on [0, ∞) × [0, 1] such that τ_T(gh) converges in distribution to Λ(gh) for every continuous g : [0, ∞) → [0, 1] and h : [0, 1] → [0, 1]. Then, the random measure τ_T converges weakly in distribution to the random measure Λ.
We present the proof of Lemma 3 at the end of this section. The next preparatory step is to show that we may replace the original expression g(I(tT, N^k)) by a finite-range approximation. More precisely, this involves three steps. First, we truncate long connection intervals. That is, we replace I(tT, N^k) by I(tT, N^k) ∧ M for a large truncation level M > 0. Second, we introduce δ-discretizations of I(tT, N^k), where we only check connectivity at discretized time points. For this, we define I_{δ,M}(t, S), relying on the finitely many discrete time points J_{δ,M}(t) := t + {−⌈M/δ⌉δ, ..., −δ, 0, δ, ..., ⌈M/δ⌉δ} around t. Third, we replace the actual k-hop connection event by the event of percolation beyond bounded neighborhoods. To that end, we denote by Q_L(x) a box with side length 2L > 0 centered at x ∈ R^d, and we write C_L(t) for the family of all locations percolating beyond an L-neighborhood via nodes in X(t).
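In code, the grid J_{δ,M}(t) and one plausible reading of the discretized, truncated interval length look as follows; this is our illustration of the definition, with the hypothetical predicate `connected` standing in for the percolation events checked at the grid points.

```python
import math

def J(t, delta, M):
    """Discrete time points t + {-m*delta, ..., -delta, 0, delta, ..., m*delta}
    with m = ceil(M / delta)."""
    m = math.ceil(M / delta)
    return [t + j * delta for j in range(-m, m + 1)]

def I_discrete(t, delta, M, connected):
    """delta-discretized, M-truncated interval length at t: delta times the
    number of consecutive grid steps around t at which `connected` holds,
    scanning left and right from t and capping the result at M."""
    if not connected(t):
        return 0.0
    m = math.ceil(M / delta)
    right = 0
    while right < m and connected(t + (right + 1) * delta):
        right += 1
    left = 0
    while left < m and connected(t - (left + 1) * delta):
        left += 1
    return min((left + right) * delta, M)
```

Only finitely many time points are inspected, which is exactly what makes the discretized quantity amenable to the moment computations that follow.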
First, in Section 4.1, we show that passing to the approximations incurs a negligible error in the sense of Proposition 4 below. To make this precise, we let Y(t, k) := Y(t, k, X_0) := B_{k/μ}(X_0(t)) ∩ Y denote the family of all relevant sinks at time t. Then, we write s ∈ N^L(Y(t, k)) if and only if X_0(s) ∈ C_L(s) and Y_j ∈ C_L(s) for some sink Y_j ∈ Y(t, k). Moreover, for L, M, δ > 0, we set I_{k,L,δ,M}(tT) := I_{δ,M}(tT, N^L(Y(tT, k))).

Proposition 4 (Finite-range approximation of connection intervals). Let t ≤ 1. Then, the error of replacing I(tT, N^k) ∧ M by I_{k,L,δ,M}(tT) vanishes in the limit.

After integrating with respect to the functions g and h and inserting the approximations, the first key step is to prove the following finite-range variant of Theorem 2.
Proposition 5 (Finite-range variant of Theorem 2).

Dense sinks. If α < d/2, then, for all L, M, δ > 0, as T → ∞, the finite-range analog of the convergence in Theorem 2 holds, where N is an independent Poisson random variable with intensity n_S and the C_L^{(j)} are iid copies of C_L.

Sparse sinks. If α > d/2, then, for all L, M, δ > 0, as T → ∞, the corresponding convergence holds with all definitions as in the dense case.
Critical density. If α = d/2, then, for all L, M, δ > 0, as T → ∞, the corresponding convergence holds, where n'_S := (n_S/|B_1(o)|)^{1/d}, Y' is a unit-intensity homogeneous Poisson point process, and W_t is a standard Brownian motion.
We now explain how to deduce Theorem 2 from Propositions 4 and 5.
Proof of Theorem 2. We explain how to argue for α > d/2, noting that the other two cases are similar. Let F : [0, 1] → [0, 1] be Lipschitz with Lipschitz constant 1. We want to show that E[F(τ_T(gh))] converges to E[F(Λ(gh))] as T → ∞. To that end, we consider a decomposition into three contributions, where in the last inequality we used that the model is time-stationary. By Proposition 5, the first expression on the right-hand side tends to 0 as T → ∞. Since g is continuous, Proposition 4 allows us to choose M, L, δ > 0 such that the second expression becomes arbitrarily small as T → ∞. Finally, a similar argument applies to the third contribution, thereby concluding the proof. □

The key step in the proof of Proposition 5 is the conditional second-moment method. Hence, we need to control conditional expectations and covariances of expressions like g(I_{k,L,δ,M}(tT)), t ≤ 1.

Proposition 6 (Asymptotic conditional decorrelation). Let 0 < t < 1. Then, under the scalings (1) and (2), the conditional covariances vanish asymptotically.

To describe concisely the asymptotic conditional expectation given X_0 and Y, we need a more explicit representation of I_{k,L,δ,M}. By definition, I_{k,L,δ,M}(tT) is determined by certain finite-range percolation events of the typical node X_0 and of the sinks in range Y(tT, k) at the times J_{δ,M}(tT). To make this dependence explicit, we will also write I_{k,L,δ,M}(tT) as a function of these events; the asymptotic conditional expectation is then identified in Proposition 7. We prove Propositions 6 and 7 in Section 4.2. After that, we elucidate in Sections 4.3-4.5 below how to complete the proof of Proposition 5 in the different regimes for the parameter α. We conclude this section with the proof of Lemma 3.
Proof of Lemma 3. We follow the proof of Theorem 4.11 in Kallenberg [8]. First, we show tightness of the random probability measures τ_T. To that end, we note that by assumption, the random variable τ_T(gh) converges to Λ(gh) in distribution for every g, h as above. Thus, also lim_{T→∞} E[τ_T(gh)] = E[Λ(gh)] for every g, h as above. Since the underlying space [0, ∞) × [0, 1] is a product space, we deduce from Theorem 4.1 in Kallenberg [8] that the expectation measure E[τ_T] converges weakly to E[Λ]. Thus, by Prohorov's theorem (see Theorem 4.2 in Kallenberg [8]), lim_{M→∞} sup_{T≥1} E[τ_T([M, ∞) × [0, 1])] = 0. By Theorem 4.10 in Kallenberg [8], this yields tightness of the random measures {τ_T}_{T≥1}. Now, we conclude as in the proof of Theorem 4.11 in Kallenberg [8]. □

Proofs
In Section 4.1, we establish the finite-range approximation of the connection intervals from Proposition 4. In Section 4.2, we establish the asymptotic conditional expectation and covariance statements of Propositions 6 and 7.
Finally, we show how to approximate the k-hop connection event by percolation outside finite boxes.

Lemma 10 (Finite-range percolation). Let M, δ > 0. Then, under (1) and (2), the k-hop connection events entering I_{δ,M} may be replaced by finite-range percolation events with an asymptotically vanishing error (11).

In the rest of this section, we prove Lemmas 8-10. We start with Lemma 10, since it follows from a short argument based on the shape theorem for continuum first-passage percolation. The latter allows us to replace the k-hop connection event by suitable percolation events.
Proof of Lemma 10. First, note that lim_{k→∞} P(Y(t, k) = Y(0, k) for all t ∈ [−M, M]) = 1, since we are confined to a finite time window. Thus, we may work with the set N*(X_0, Y(0, k)), where t ∈ N*(X_0, Y(0, k)) if X_0(t) ∈ C_∞(t) and Y_j ∈ C_∞(t) for some sink Y_j ∈ Y(t, k). Indeed, since we consider only finitely many time points in I_{δ,M}, for the statement (11) to hold, it suffices to check that lim_{k→∞} P({tT ∈ N^k} Δ {tT ∈ N*(X_0, Y(0, k))}) = 0 for any fixed t ∈ [0, 1]. This is precisely the shape theorem for continuum first-passage percolation in the form of Lemma 15 in Hirsch et al. [7], where, under the same scalings, it is shown that asymptotically, being k-hop connected is equivalent to being part of the infinite cluster and finding a reachable sink that is also part of the infinite cluster. Now, by the uniqueness of the connected component, X_0(t) ∈ C_∞(t) if and only if X_0(t) ∈ C_L(t) for every L ≥ 1, and similarly Y_j ∈ C_∞(t) if and only if Y_j ∈ C_L(t) for every L ≥ 1. Hence, we may replace N*(X_0, Y(0, k)) by N^L(Y(0, k)), thereby concluding the proof. □
Next, we prove Lemma 8. As a pivotal observation, we note that, since the nodes perform random walks, the node movement is diffuse in the sense that it is highly unlikely that after a time of order T, the typical node is contained in a set of bounded diameter.
Lemma 11 (Diffuseness of node locations). Let t, L > 0. Then, under (1) and (2), the probability that X_0(tT) is contained in any fixed box of side length L tends to 0 as T → ∞.

Proof. Fix ε > 0 and note that we may assume L > 1. First, by the central limit theorem, X_0(tT)/√T converges in distribution to a Gaussian vector Z. Then, for K ≥ 1, partition the box of side length K into subboxes of side length L and bound the probability of each subbox. □

The key observation for the tightness assertion in Lemma 8 is that a large connection interval means that the typical node needs to be in the unbounded connected component at many temporally distant time steps.
Proof of Lemma 8. We show tightness of the connection interval I^+(0, N^k) := I(0, N^k ∩ [0, ∞)) for positive connection times. The arguments for negative connection times are symmetric.
Let ε > 0. Then, by discretization, the tail probability of I^+(0, N^k) can be bounded via the connection events at the times nt_0, n ≤ n_0, for any n_0, t_0 ≥ 1. Moreover, if there are no sinks in an L_0-neighborhood around X_0(nt_0), i.e., if Y ∩ Q_{L_0}(X_0(nt_0)) = ∅, then a connection at time nt_0 requires that it is possible to percolate beyond that neighborhood, i.e., X_0(nt_0) ∈ C_{L_0}(nt_0). Thus, it suffices to produce t_0 ≥ 1 such that the probability of percolating beyond the L_0-neighborhood simultaneously at all these times is small. To that end, we let X^{−,t} denote the family of nodes that are contained in the L_0-neighborhood of the typical node at time nt but not at any other discretized time. We also introduce C_L^−(t) in the same way as C_L(t), with the only difference that percolation relies exclusively on nodes in X^{−,t}. In particular, by the independence property of the Poisson point process, the corresponding percolation events are independent when conditioned on X_0. Now, the difference on the left-hand side is bounded above by an expression which tends to 0 as t_0 → ∞ by Lemma 11. □

The main idea in the proof of Lemma 9 is that, for small δ, only very few nodes move within the time interval [iδ, (i + 1)δ], and therefore we do not need to rely on them when establishing k-hop connections.
In order to make this more precise, note that, by the thinning theorem, the nodes that are moving in an interval of the form [i'δ', (i'+1)δ'] form a Poisson point process X^{i',δ'} with intensity (1 − e^{−δ'})λ; see Theorem 5.8 in Last and Penrose [9]. In particular, for sufficiently small δ', this process is still in the supercritical phase, and we let C_{δ',∞}(i'δ') denote the associated unique unbounded connected component for continuum percolation. We write E^{glob}_{k,δ'} for the event that all pairs of nodes X_j ∈ C_{δ',∞}(i'δ') ∩ B_{k^{1/(2d)}}(o) and X_{j'} ∈ C_{δ',∞}(i'δ') ∩ B_{(1−δ')k/μ}(o) are connected in at most k − √k hops for every i' ∈ Z with |i'| ≤ M/δ'. Recall that B_r(x) denotes the Euclidean ball of radius r > 0 centered at x ∈ R^d and that o denotes the origin. The first step in the proof of Lemma 9 is to show that these global connections occur with high probability, provided that δ' is sufficiently small.
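The thinning step is easy to check empirically. The snippet below (our illustration, not part of the proof) counts, in a region of given area, the nodes whose Exp(1) clock rings within a window of length δ', and compares the sample mean with the predicted intensity (1 − e^{−δ'})λ.

```python
import numpy as np

def moving_node_count(lam, delta, area, rng):
    """One sample: the number of nodes of a Poisson process with intensity lam
    (on a region of the given area) that jump within a time window of length
    delta, i.e., whose Exp(1) waiting time falls below delta.  By the thinning
    theorem, these nodes form a Poisson process with intensity
    (1 - exp(-delta)) * lam."""
    n = rng.poisson(lam * area)
    return int((rng.exponential(1.0, size=n) < delta).sum())
```

Here the probability that a rate-1 exponential clock rings within a window of length δ' is 1 − e^{−δ'}, which is the thinning probability used in the text.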
In order to complete the global paths encoded in the event E^{glob}_{k,δ'} to k-hop connections, we still require short local paths leading up to the unbounded connected components. To that end, we will need to refine the above δ'-discretization and will rely on variants of N^k that are locally determined. More precisely, for L > 0 we write t ∈ N^{L,δ'} if and only if X_0 connects at time t to some node of C_{δ',∞}(⌊t/δ'⌋δ') ∩ Q_L(X_0(t)) in at most L^{2d} hops. Then, an essential step is to show that these local paths connect to the global path with high probability. To that end, we couple L to the size of a finer discretization by putting L_δ := δ^{−1/(2d)}. Now, we define the event that both X_0(iδ) and X_0((i + 1)δ) percolate beyond an L_δ-neighborhood.

Proof of Lemma 9. The task is to show that the probability of having k-hop connections at iδ and (i + 1)δ but not within the entire interval [iδ, (i + 1)δ] vanishes; this is identity (12). Note that if iδ ∈ N^k, then X_0(iδ) ∈ C_{L_δ}(iδ) unless some sink lies in Q_{L_δ}(X_0(iδ)). However, since the sink intensity tends to 0 as k → ∞, so does the probability of the latter event. Moreover, by Lemma 13, the remaining contribution is asymptotically negligible. Noting that a similar argument applies when replacing X_0(iδ) by one of the sinks in B_{k/μ}(X_0(iδ)) thus concludes the proof of identity (12). □

Finally, we prove Lemmas 12 and 13.
Proof of Lemma 12. The key ingredient in the proof is the continuity of the stretch factor with respect to the intensity λ of the underlying Poisson point process. While in first-passage percolation on the lattice, results in this vein are classical and hold under very general conditions [1], the question in the continuum only requires a small adaptation. We only show that lim sup_{λ'↑λ} μ_{λ'} ≤ μ_λ, since this will be sufficient for the proof of Lemma 12. Once this assertion is established, we invoke the shape theorem for continuum first-passage percolation in the form of Lemma 15 in Hirsch et al. [7] (see our explanations below (11)) in order to deduce that lim_{k→∞} P(E^{glob}_{k,δ'}) = 1 for sufficiently small δ'.
To that end, set $\ell_{\lambda,n} := n^{-1}\,\mathbb{E}[T_\lambda(q_\lambda(o), q_\lambda(n e_1))]$, where $q_\lambda(x)$ is the point of $\mathcal{C}^\lambda_\infty$ at smallest Euclidean distance to $x \in \mathbb{R}^d$ and $T_\lambda(x, y)$ denotes the graph distance between points $x, y \in \mathcal{C}^\lambda_\infty$. Then, $T_{\lambda'}(q_{\lambda'}(o), q_{\lambda'}(n e_1))$ converges almost surely to $T_\lambda(q_\lambda(o), q_\lambda(n e_1))$ as $\lambda' \uparrow \lambda$. Hence, fixing some supercritical $\lambda_0$, it suffices to show that the path lengths $T_{\lambda'}(q_{\lambda'}(o), q_{\lambda'}(n e_1))$ are uniformly integrable in $\lambda' \in [\lambda_0, \lambda]$. To achieve this goal, we note that, when we realize the Poisson–Gilbert graph for both $\lambda_0$ and $\lambda'$ on the same probability space, using the usual coupling via iid thinnings, then, by the triangle inequality and the fact that fewer nodes lead to more steps, the path length for intensity $\lambda'$ is bounded by a sum of path lengths in the $\lambda_0$-graph between points close to $o$ and $n e_1$, respectively. Hence, by stationarity, it suffices to establish the uniform integrability of $T_{\lambda_0}(q_{\lambda_0}(o), q_{\lambda'}(o))$ for $\lambda' \in [\lambda_0, \lambda]$. For $M > 0$, if the box $Q_M$ is $M$-good, then $q_{\lambda_0}(o)$ and $q_{\lambda'}(o)$ are connected by a path in $Q_M$, so that $T_{\lambda_0}(q_{\lambda_0}(o), q_{\lambda'}(o)) \le X^\lambda(Q_M)$, the number of nodes in $Q_M$. Thus, for any $a > 0$, $P(T_{\lambda_0}(q_{\lambda_0}(o), q_{\lambda'}(o)) > a) \le P(X^\lambda(Q_{a^{1/(2d)}}) > a) + P(Q_{a^{1/(2d)}}\text{ is not }a^{1/(2d)}\text{-good})$. Now, we conclude from the quantitative uniqueness in the form of Penrose and Pisztora ([13], Theorem 2) that $Q_M$ is $M$-good with high probability. In words, [13, Theorem 2] states that, except on an event of exponentially small probability, there is a unique volume-order connected component in the supercritical Gilbert graph.
Hence, the right-hand side becomes arbitrarily small for large $a$, thereby concluding the proof of the uniform integrability. □
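The monotonicity used in this coupling argument — an i.i.d. thinning realizes the sparser Poisson–Gilbert graph inside the denser one, so hop counts can only grow — can be illustrated numerically. The following sketch uses hypothetical parameters (intensities, window, and connection radius chosen arbitrarily; this is not the authors' simulation code) and checks that graph distances after thinning are never shorter than in the full graph.

```python
import math
import random
from collections import deque

random.seed(3)

def sample_poisson(mean):
    # Knuth's method for a Poisson random variate.
    l, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

def gilbert_graph(points, r):
    # Connect any two points at Euclidean distance at most r.
    adj = {i: [] for i in range(len(points))}
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= r:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def graph_dist(adj, src, dst):
    # Breadth-first search; returns None if dst is unreachable from src.
    dist = {src: 0}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        if v == dst:
            return dist[v]
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return None

# Dense process with intensity lam on [0, 10]^2, thinned to lam0 < lam.
lam, lam0, side, r = 3.0, 2.0, 10.0, 1.0
n = sample_poisson(lam * side * side)
pts = [(random.uniform(0, side), random.uniform(0, side)) for _ in range(n)]
keep = [i for i in range(n) if random.random() < lam0 / lam]  # iid thinning

adj_full = gilbert_graph(pts, r)
adj_thin_local = gilbert_graph([pts[i] for i in keep], r)
# Translate thinned indices back to the full-graph labels.
adj_thin = {keep[a]: [keep[b] for b in nb] for a, nb in adj_thin_local.items()}

pairs, violations = 0, 0
for a in keep[:20]:
    for b in keep[:20]:
        if a >= b:
            continue
        d_thin = graph_dist(adj_thin, a, b)
        if d_thin is not None:              # connected after thinning
            d_full = graph_dist(adj_full, a, b)
            pairs += 1
            if d_full is None or d_full > d_thin:
                violations += 1
print(pairs, violations)  # violations is always 0 under this coupling
```

Since every node and edge of the thinned graph persists in the full graph, any thinned path is also a full path, which forces `violations == 0` deterministically.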
The proof of Lemma 13 relies heavily on the observation that it is highly unlikely to see a time interval of small length $\delta$ in which two or more nodes move. To make this precise, we let $E_{i,\delta}$ be the event that for every $t \in [i\delta, (i+1)\delta]$ the configuration $X(t) \cap Q_L(X_0(t))$ coincides with one of the endpoint configurations at times $i\delta$ and $(i+1)\delta$. Recall the Landau notation $o(\delta)$ and $O(\delta)$.

Proof. First, the probability that the typical node $X_0$ jumps at least twice in $[i\delta, (i+1)\delta]$ is $1 - e^{-\delta} - \delta e^{-\delta} \in O(\delta^2)$. Hence, we may assume that $X_0(t) \in \{X_0(i\delta), X_0((i+1)\delta)\}$ for all $t \in [i\delta, (i+1)\delta]$. We now distinguish between the cases whether or not the typical node moves in the interval $[i\delta, (i+1)\delta]$.

Case $X_0(i\delta) \ne X_0((i+1)\delta)$. First, note that $P(X_0(i\delta) \ne X_0((i+1)\delta)) = 1 - e^{-\delta} \in O(\delta)$. Hence, it suffices to show that, conditioned on $X_0$, the probability that either of $X(t) \cap Q_{L_\delta}(X_0(i\delta))$ or $X(t) \cap Q_{L_\delta}(X_0((i+1)\delta))$ changes within the time interval $t \in [i\delta, (i+1)\delta]$ is of order $o(1)$ as $\delta \downarrow 0$. By time reversibility, it suffices to consider the case $i\delta$; by independence of $X_0$ and $X$, we may assume that $X_0(i\delta) = o$. The expected number of nodes of $X(i\delta) \cap Q_{L_\delta}(o)$ moving in the time interval $[i\delta, (i+1)\delta]$ is at most $L_\delta^d (1 - e^{-\delta}) \lambda$, and therefore of order $o(1)$. In order to bound the number of nodes entering $Q_{L_\delta}(o)$ from the outside, we apply the mass-transport principle [4,10]. More precisely, for $z, z' \in \mathbb{Z}^d$, we let $S(z, z')$ denote the total number of visits to $Q_{L_\delta}(L_\delta z')$ within the time interval $[i\delta, (i+1)\delta]$ by nodes that are contained in $Q_{L_\delta}(L_\delta z)$ at time $i\delta$. Then, the number of entering nodes is bounded above by the incoming mass at the origin. Conversely, the expected outgoing mass at the origin is of order $o(1)$. Since the mass-transport principle implies that the expected incoming mass equals the expected outgoing mass, we conclude the proof in the case $X_0(i\delta) \ne X_0((i+1)\delta)$.

Case $X_0(i\delta) = X_0((i+1)\delta)$. In this case, we need to show that the probability that there is more than
one change of $X(t) \cap Q_{L_\delta}(X_0(i\delta))$ in the time interval $[i\delta, (i+1)\delta]$ is of order $o(\delta)$. If there is more than one change, then this may be either because several nodes move or because some node moves multiple times.
First, consider the situation involving multiple nodes. Similarly to the previous case, conditioned on $X_0$, the collection of nodes in $X(i\delta) \cap Q_{L_\delta}(X_0(i\delta))$ moving in $[i\delta, (i+1)\delta]$ is a Poisson point process with intensity proportional to $1 - e^{-\delta}$. Hence, the probability that more than one of those nodes moves is of order $O(\delta^2)$. Next, conditioned on $X_0$, the nodes entering $Q_{L_\delta}(X_0(i\delta))$ form a Poisson point process. Again using the mass-transport principle, the intensity of this process is of order at most $O(L_\delta^d \delta)$. Hence, the probability of seeing more than one entering node is of order at most $O(L_\delta^{2d} \delta^2)$ and therefore in $o(\delta)$. Finally, it may happen that in the interval $[i\delta, (i+1)\delta]$ at least one node from $X(i\delta) \cap Q_{L_\delta}(X_0(i\delta))$ moves, and at least one node from $X(i\delta) \setminus Q_{L_\delta}(X_0(i\delta))$ enters $Q_{L_\delta}(X_0(i\delta))$. Conditioned on $X_0$, these two processes are independent, and by the arguments derived above their intensities are of order at most $O(L_\delta^d (1 - e^{-\delta}))$ and $O(L_\delta^d \delta)$, respectively. Hence, the probability of this event is of order at most $O(L_\delta^{2d} (1 - e^{-\delta}) \delta)$, as asserted.
Second, consider the situation of some node moving multiple times. Here, we again apply the mass-transport principle, similarly to the setting where $X_0(i\delta) \ne X_0((i+1)\delta)$. More precisely, for $z, z' \in \mathbb{Z}^d$, we now let $S'(z, z')$ denote the total number of visits to $Q_{L_\delta}(L_\delta z')$ within the time interval $[i\delta, (i+1)\delta]$ by nodes that are contained in $Q_{L_\delta}(L_\delta z)$ at time $i\delta$ and that jump at least twice in the interval $[i\delta, (i+1)\delta]$. Then, the number of relevant nodes moving multiple times is bounded above by the incoming mass at the origin. Conversely, the expected outgoing mass at the origin is of order at most $O(L_\delta^{2d} \delta^2) \subseteq o(\delta)$. Hence, another application of the mass-transport principle concludes the proof. □
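For reference, the mass-transport principle invoked in both cases (see [4,10]) can be stated in the following standard form: if the nonnegative random field $S(z,z')$ is diagonally shift-invariant in distribution, then the expected outgoing mass at the origin equals the expected incoming mass.

```latex
% Mass-transport principle on Z^d: diagonal shift-invariance of S means
% that (S(z + w, z' + w))_{z, z'} has the same law as (S(z, z'))_{z, z'}
% for every w in Z^d.  Then
\mathbb{E}\Big[\sum_{z \in \mathbb{Z}^d} S(o, z)\Big]
  \;=\; \mathbb{E}\Big[\sum_{z \in \mathbb{Z}^d} S(z, o)\Big].
```

In the proofs above, $S(z,z')$ counts visits of nodes starting near $L_\delta z$ to the box around $L_\delta z'$, so a bound on the expected outgoing mass immediately bounds the expected number of entering nodes.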
Proof of Lemma 13. For $\delta > 0$ with $\delta_0/\delta \in \mathbb{Z}$ and $|i| \le M/\delta$, let $i' = i'(i, \delta)$ be such that $[i\delta, (i+1)\delta] \subseteq [i'\delta_0, (i'+1)\delta_0]$. A key ingredient is the strong quantitative uniqueness in the form of Penrose and Pisztora ([13], Theorem 2): for $\delta > 0$ and $i \in \mathbb{Z}$, write $E^{\mathsf{uniq}}_{i,\delta}$ for the event that conditions (1) and (2) listed below hold for every $j \in \{i, i+1\}$. Then, [13, Theorem 2] provides a $c > 0$ such that $1 - P(E^{\mathsf{uniq}}_{i,\delta}) \le \exp(-c L_\delta)$. Again by stationarity, the latter probability does not depend on $i$.
We now claim that $E^{\mathsf{loc}}_{i,\delta}$ cannot occur under the event $E^{\mathsf{uniq}}_{i,\delta}$, which will conclude the proof of the lemma, since each of the complementary probabilities is of order $o(\delta)$. To that end, let $t \in [i\delta, (i+1)\delta]$ be arbitrary. Under the event $E^{\mathsf{uniq}}_{i,\delta}$, we conclude that $X_0(j\delta)$ connects within $Q_{L_\delta}(X_0(j\delta))$ to a vertex of $\mathcal{C}^{\delta_0,\infty}(i'\delta_0)$; this means that $t \in N^{L_\delta,\delta_0}$. Hence, the event $E^{\mathsf{loc}}_{i,\delta}$ does not occur. □

The key to achieving asymptotic independence will be the diffuseness of the node movement. More precisely, it is highly unlikely to find a node contained in two specific neighborhoods at two distant points in time.
Lemma 15 (Asymptotic decorrelation of node locations). Let $\delta, M, K, t > 0$. Then, the correlations between the node configurations in the neighborhoods of $u$ at time 0 and of $u'$ at time $tT$ vanish as $T \uparrow \infty$, uniformly over all measurable $[0,1]$-valued functions $h$ and all $K$-element subsets $u, u' \subseteq \mathbb{R}^d$. Before establishing Lemma 15, we elucidate how it enters the proof of Proposition 6.
Proof of Proposition 6. First, by choosing $K \ge 1$ sufficiently large, we may assume that both $Y(0, k)$ and $Y(tT, k)$ have at most $K$ elements. After constraining the number of elements of $Y(0, k)$ and $Y(tT, k)$, they become eligible choices for $u$ and $u'$ in Lemma 15. Thus, that result allows us to conclude the proof. □
Next, we establish the asymptotic representation of the conditional expectation from Proposition 7. The proof idea is to use that the sinks are so sparse that no moving node can visit the neighborhoods of two distinct sinks.
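The thinning theorem used below states that an independent thinning of a Poisson point process is again Poisson, with intensity multiplied by the retention probability. A minimal numerical sketch (toy parameters chosen arbitrarily, not tied to the model) checks this through the mean and variance of the thinned counts, which must coincide for a Poisson distribution.

```python
import math
import random

random.seed(7)

def sample_poisson(mean):
    # Knuth's method for a Poisson random variate.
    l, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

lam, p, area, trials = 30.0, 0.2, 1.0, 20000
counts = []
for _ in range(trials):
    n = sample_poisson(lam * area)                          # nodes in window
    kept = sum(1 for _ in range(n) if random.random() < p)  # iid thinning
    counts.append(kept)

mean = sum(counts) / trials
var = sum((c - mean) ** 2 for c in counts) / trials
# For a Poisson process, mean and variance of the thinned count both
# equal p * lam * area = 6 in this setup.
print(round(mean, 2), round(var, 2))
```

The equality of empirical mean and variance is the elementary signature of the Poisson law that the thinning theorem guarantees for the retained points.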
Proof of Proposition 7. First, by stationarity, we may assume that $t = 0$. Next, we let $E^{\mathsf{inert}}_k$ denote the high-probability event that the typical node does not move further than $\sqrt{k}$ within the times $s \in J_{\delta,M}(0)$. Moreover, we note that this high-probability event implies the disjointness of the $L$-neighborhoods relevant for the percolation events encoded in $s \in N^L(Y(tT, k))$. Also the events $E^{\mathsf{disj}}_k$ occur whp. For $s \in J_{\delta,M}(0)$, we now let $X^{\mathsf{exc}}$ denote the family of all nodes that move a distance further than $\sqrt{k}/4$ within these times. By the thinning theorem ([9], Corollary 5.9), $X^{\mathsf{exc}}$ is a homogeneous Poisson point process with an intensity that vanishes as $k \to \infty$. Next, we define the percolation sets $\mathcal{C}^{-,L}(s)$ precisely as $\mathcal{C}^{L}(s)$, except that the connections are only formed through nodes in $X \setminus X^{\mathsf{exc}}$ instead of $X$. Relying on these modified percolation sets, we can introduce the sets $N^{-,L}(Y(0, k))$. Now, conditioning on $X_0$ and $Y$, under the event $E^{\mathsf{disj}}_k \cap E^{\mathsf{inert}}_k$, by the independence property of the Poisson point process on disjoint sets, the random vectors $Z(Y_i)$, $Y_i \in Y(0, k)$, are independent. Therefore, under $E^{\mathsf{disj}}_k \cap E^{\mathsf{inert}}_k$, the conditional expectation factorizes over the sinks in $Y(0, k)$.

Proof of Lemma 15. By the thinning theorem, the family of nodes located near $u$ at time 0 forms a Poisson point process, and likewise for $u'$ at time $tT$. Furthermore, by the independence property of the Poisson point process, it suffices to show that $\lim_{T \to \infty} \sup_{u, u'} P(X^{0,u} \cap X^{tT,u'} \ne \emptyset) = 0$. Writing $u$ and $u'$ as finite sets of points, the claim reduces to a statement about single points. But now Lemma 11 gives that $\lim_{t \to \infty} \sup_{x'' \in \mathbb{R}^d} P(X_0(t) \in Q_K(x'')) = 0$, so that an application of Palm calculus concludes the proof. □

As announced in Section 3, the key step to prove Proposition 5 is the second moment method. To carry out this program, we need decorrelation of the connection intervals at distant time points. Whereas Proposition 6 provides such a decorrelation property in a conditional setting, in the dense regime $\alpha < d/2$ this decorrelation needs to be strengthened into an
unconditional result. This will be achieved by applying the law of total covariance, i.e., for any square-integrable random variables $X, X'$ and any $\sigma$-algebra $\mathcal{F}$,
$\mathrm{Cov}(X, X') = \mathbb{E}[\mathrm{Cov}(X, X' \mid \mathcal{F})] + \mathrm{Cov}(\mathbb{E}[X \mid \mathcal{F}], \mathbb{E}[X' \mid \mathcal{F}])$.

Proof of Proposition 5, case $\alpha < d/2$. Let $t \le 1$ be arbitrary. We want to show that the covariance of the connection indicators at times 0 and $tT$ vanishes as $T \uparrow \infty$. To achieve this goal, we will apply the law of total covariance twice. First, by stationarity, similarly as in (10), we can express the conditional expectation given $X_0$ in the form $S''(\{X_0(s)\}_{s \in J_{\delta,M}(0)}, Y(0, k))$ for a suitable choice of $S''$. Now, note that the jump times together with the jump directions form an independently marked Poisson point process.
Since the intervals $[-M, M]$ and $tT + [-M, M]$ are disjoint for sufficiently large $T$, the independence property of the Poisson point process gives that the two conditional expectations are conditionally independent given $X_0$. Hence, by the law of total covariance, it suffices to show that the covariance of the conditional expectations vanishes. Combining the law of total covariance with Propositions 6 and 7 reduces this task to proving that
$\lim_{T \uparrow \infty} \mathrm{Cov}\big(S''(\{X_0(s)\}_{s \in J_{\delta,M}(0)}, Y(0, k)),\, S''(\{X_0(s) - X_0(tT)\}_{s \in J_{\delta,M}(tT)}, Y(tT, k)) \,\big|\, X_0\big) = 0$.
Now, setting $\alpha' = (\alpha + d/2)/2$, in the dense regime the sinks $Y(0, k)$ are contained in $B_{T^{\alpha'/d}}(o)$ whp, and the set of sinks $Y(tT, k)$ does not intersect $B_{T^{\alpha'/d}}(o)$ whp. Invoking the independence property of the Poisson process in disjoint domains, this establishes the vanishing of the covariance in (13), thereby concluding the proof of Proposition 5 in the dense regime. □

The high-level proof structure of Proposition 5 in the sparse regime $\alpha > d/2$ is similar to that in the dense regime discussed in Section 4.3. However, since the sinks $Y$ are sparse in relation to the movement of the typical node, they are not affected by the long-time averaging of the movement of the typical node. Therefore, we apply the second moment method conditioned on $Y$.
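The law of total covariance can be verified exactly on a finite probability space. The following sketch (an arbitrary toy distribution and partition, unrelated to the model) computes both sides of the identity and confirms that they agree up to floating-point error.

```python
# Finite probability space: outcomes 0..5 with the probabilities below;
# the sigma-algebra F is generated by the partition {0,1}, {2,3}, {4,5}.
probs = [0.1, 0.2, 0.15, 0.25, 0.2, 0.1]
X  = [1.0, 2.0, -1.0, 0.5, 3.0, -2.0]
Xp = [0.0, 1.5, 2.0, -0.5, 1.0, 4.0]
atom = [w // 2 for w in range(6)]

def ev(f):
    # Expectation of a random variable given as a list of values.
    return sum(p * v for p, v in zip(probs, f))

def cov(f, g):
    return ev([a * b for a, b in zip(f, g)]) - ev(f) * ev(g)

def cond_ev(f):
    # Conditional expectation E[f | F], again a function of the outcome.
    out = [0.0] * 6
    for a in set(atom):
        idx = [w for w in range(6) if atom[w] == a]
        pa = sum(probs[w] for w in idx)
        m = sum(probs[w] * f[w] for w in idx) / pa
        for w in idx:
            out[w] = m
    return out

EX, EXp = cond_ev(X), cond_ev(Xp)
EXXp = cond_ev([a * b for a, b in zip(X, Xp)])
# Pointwise conditional covariance Cov(X, X' | F).
cond_cov = [EXXp[w] - EX[w] * EXp[w] for w in range(6)]

lhs = cov(X, Xp)                       # Cov(X, X')
rhs = ev(cond_cov) + cov(EX, EXp)      # E[Cov(X,X'|F)] + Cov(E[X|F],E[X'|F])
print(abs(lhs - rhs) < 1e-12)
```

Expanding both sides shows that the identity reduces to the tower property $\mathbb{E}[\mathbb{E}[XX' \mid \mathcal{F}]] = \mathbb{E}[XX']$, which is why the check succeeds for any choice of distribution and partition.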
Proof of Proposition 5, case $\alpha > d/2$. As announced above, the proof consists of two steps. First, we replace $\sigma_T(gh)$ by the conditional expectation $\mathbb{E}[\sigma_T(gh) \mid Y]$, invoking the conditional second moment method. Second, we derive a more concise representation of the latter expression. The key observation in the sparse regime is that the set of relevant sinks does not change over time. That is, as we will show now, whp, $Y(0, k) = Y(t, k)$ for all $t \le T$. To achieve this goal, we note that the expected number of sinks in the annulus $B_{(1+\varepsilon)k/\ell}(o) \setminus B_{k/\ell}(o)$ is given by $((1+\varepsilon)^d - 1)\nu_S$ and therefore tends to 0 as $\varepsilon \downarrow 0$. Hence, it suffices to show that for every fixed $\varepsilon > 0$, whp, $\sup_{t \le T} |X_0(t)| \le \varepsilon k/\ell$. By the invariance principle, $X_0(tT)/\sqrt{T}$ converges in distribution to a Brownian motion $\{W_t\}_{t \le 1}$. Hence, $\lim_{T \uparrow \infty} P(\sup_{t \le T} |X_0(t)| \le K\sqrt{T})$ tends to 1 as $K \uparrow \infty$. Now, we conclude the proof of the claim by noting that in the sparse regime, for any fixed $\varepsilon, K > 0$, we have $K\sqrt{T} \le \varepsilon k/\ell$ for all sufficiently large $T$. In order to carry out the second moment method, we note that the jump times together with the jump directions form a marked Poisson point process. Since $[-M, M] \cap (tT + [-M, M]) = \emptyset$ for sufficiently large $T > 0$, we deduce that
$\lim_{T \uparrow \infty} \mathrm{Cov}\big(S(\{X_0(s)\}_{s \in J_{\delta,M}(0)}, Y(0, k)),\, S(\{X_0(s) - X_0(tT)\}_{s \in J_{\delta,M}(tT)}, Y(tT, k)) \,\big|\, Y\big) = 0$.
Hence, combining the law of total covariance with Propositions 6 and 7, we obtain the asserted decorrelation. □

Also in the critical regime $\alpha = d/2$, we follow the blueprint from Sections 4.3 and 4.4. This time, $\sigma(X_0, Y)$ is the appropriate $\sigma$-algebra to condition on, so that the conditional decorrelation has already been established in Proposition 6. Conversely, the limit identification is now more involved.
Finally, in order to identify the distributional limit of $\int_0^1 S''(Y(tT, k))h(t)\,\mathrm{d}t$ as $T \uparrow \infty$, we proceed along the lines of Hirsch et al. [7]. To render the presentation self-contained, we recall the main steps of the proof. Introducing the unit-intensity process $Y' := Y/\sqrt{T}$, we can represent $\int_0^1 S''(Y(tT, k))h(t)\,\mathrm{d}t$ in the form $F(Y', X_0)$ for a suitable functional $F$. By the invariance principle and the continuous mapping theorem, it suffices to show that the map sending a trajectory $c$ to $\mathbb{E}[f(F(Y', c))]$ is continuous outside a null set with respect to Brownian motion. Now, let $\{c_n\}_n$ be a sequence of trajectories converging to $c$ in the sup norm. Then, the difference $|\mathbb{E}[f(F(Y', c_n))] - \mathbb{E}[f(F(Y', c))]|$ is bounded by a quantity that tends to 0 as $c_n \to c$ in the sup norm, which concludes the proof. □
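The diffusive-scaling step used in the sparse regime — that $\sup_{t \le T} |X_0(t)| \le K\sqrt{T}$ with probability tending to 1 as $K$ grows — can be illustrated with a simple $\pm 1$ random walk standing in for the mobility model (an assumption made purely for illustration; the paper's movement process differs):

```python
import random

random.seed(11)

T, n_paths = 400, 500

def sup_abs_walk(T):
    # Running maximum of |S_n| for a simple +-1 random walk up to time T.
    s, m = 0, 0
    for _ in range(T):
        s += random.choice((-1, 1))
        m = max(m, abs(s))
    return m

sups = [sup_abs_walk(T) for _ in range(n_paths)]
sqrtT = T ** 0.5
# Empirical P(sup_{t <= T} |X_0(t)| <= K * sqrt(T)) for growing K.
frac = {K: sum(m <= K * sqrtT for m in sups) / n_paths for K in (1, 2, 3)}
print(frac)
```

Since the event $\{\sup \le K\sqrt{T}\}$ grows with $K$ pathwise, the empirical fractions are monotone in $K$ and approach 1, mirroring the Brownian limit used in the proof.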

Figure 1. Sample of the simulated network at time 0 (left). Node displacement after time 100 (right). In order to illustrate the mobility more clearly, we only show a cutout of the full network. The network structure at time 0 is marked by blue dotted edges.

Figure 2. Average connection time (left), average weighted connection time (center), and reconnection rate (right) based on 1,000 simulations.

(1) Inside $Q_{L_\delta}(X_0(j\delta))$ there exists a unique connected component of nodes in $X(j\delta)$ of diameter at least $L_\delta/8$; (2) inside $Q_{L_\delta}(X_0(j\delta))$ there exists a unique connected component of nodes in $X^{i',\delta_0}(j\delta)$ of diameter at least $L_\delta/8$; moreover, this component intersects $\mathcal{C}^{\delta_0,\infty}(i'\delta_0)$.

$S^{-}(\{x_s\}_s, \nu) := \mathbb{E}\big[g\big(I_{\delta,M}(\{s : x_s \in \mathcal{C}^{-,L}(s)\})\big)\big]$. Since the events $E^{\mathsf{disj}}_k \cap E^{\mathsf{inert}}_k$ occur whp, and since the intensity of $X^{\mathsf{exc}}$ vanishes as $k \to \infty$, we deduce that we may replace $S^{-}$ by $S$, thereby concluding the proof. □

Lemma 11 is the key ingredient for the proof of Lemma 15.

$\lim_{T \uparrow \infty} \mathbb{E}\big[\mathrm{Cov}\big(g(I_{\delta,M}(0, N^L(Y(0, k)))),\, g(I_{\delta,M}(tT, N^L(Y(tT, k))))\,\big|\,Y\big)\big] = 0$. The next step is the identification of the conditional expectation $\mathbb{E}[\sigma_T(gh) \mid Y]$, i.e., of $\int_0^1 \mathbb{E}\big[g(I_{\delta,M}(tT, N^L(Y(tT, k))))\,\big|\,Y\big] h(t)\,\mathrm{d}t$. Now, by applying Proposition 7 and recalling that the relevant sinks do not change over time, the latter integral becomes $\int_0^1 \mathbb{E}\big[S(\{X_0(s) - X_0(tT)\}_{s \in J_{\delta,M}(tT)}, Y(0, k))\,\big|\,Y\big] h(t)\,\mathrm{d}t = \mathbb{E}\big[S(\{X_0(s)\}_{s \in J_{\delta,M}(0)}, Y(0, k))\,\big|\,Y\big] \int_0^1 h(t)\,\mathrm{d}t$, where the equality follows from the time stationarity of the movement model. Moreover, the number of sinks in $Y(0, k)$ is a Poisson random variable with mean $\nu_S$. Thus, inserting the definition of $S$ leads to the limiting representation asserted in Proposition 5.