Scaling limits of fluctuations of extended-source internal DLA

In a previous work, we showed that the 2D, extended-source internal DLA (IDLA) of Levine and Peres is δ^{3/5}-close to its scaling limit, where δ is the lattice size. In this paper, we investigate the scaling limits of the fluctuations themselves. Namely, we show that two naturally defined error functions, which measure the "lateness" of lattice points at one time and at all times, respectively, converge to geometry-dependent Gaussian random fields. We use these results to calculate point-correlation functions associated with the fluctuations of the flow. Along the way, we demonstrate similar δ^{3/5} bounds on the fluctuations of the related divisible sandpile model of Levine and Peres, and we generalize the results of our previous work to a larger class of extended sources.


Introduction
Internal diffusion-limited aggregation (IDLA) is a lattice growth model, tracking the growth of a random set A(t) ⊂ Z^d defined as follows. At each time t, we start a particle at the origin, and we let it undergo a simple random walk until it first exits the set A(t − 1); supposing it exits at the point z_t, we set A(t) = A(t − 1) ∪ {z_t}. Intuitively, this process follows the diffusion of particles from an origin-centered source. In fact, it was originally proposed by the chemical physicists Meakin and Deutch [MD86] to model diffusive processes such as the smoothing of a spherical surface by electrochemical polishing.
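The point-source growth rule is short enough to simulate directly. The following is a minimal sketch (the function name and parameters are ours, not from the literature):

```python
import random

def idla(num_particles, seed=0):
    """Point-source IDLA on Z^2: each new particle starts at the origin and
    performs a simple random walk until it first exits the occupied set."""
    rng = random.Random(seed)
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    occupied = {(0, 0)}          # A(0) = {origin}
    for _ in range(num_particles):
        x, y = 0, 0
        while (x, y) in occupied:
            dx, dy = rng.choice(steps)
            x, y = x + dx, y + dy
        occupied.add((x, y))     # the exit point z_t joins the cluster
    return occupied
```

Each particle adds exactly one new site, so the cluster after t particles has t + 1 sites; for large t the cluster closely approximates a lattice ball, in line with the Lawler-Bramson-Griffeath result discussed below.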
We are interested in a generalization of this model to the extended-source case, wherein particles start instead from discretizations of a fixed mass distribution, and the lattice size is allowed to grow arbitrarily small. This generalization was first introduced and studied by Levine and Peres [LP08], although it corresponds to Diaconis and Fulton's earlier notion of a "smash sum" of two sets [DF91].
In both cases, a primary question of study is the overall smoothness of the occupied set A(t). Following the work of Lawler, Bramson, and Griffeath [LBG92], it is well-known that, in the point-source case, these sets closely approximate an origin-centered ball for large t. Several authors have shown strong convergence rates for this process [Law95, AG10]; most recently, Jerison, Levine, and Sheffield proved that the fluctuations away from the disk are at most of order log t in dimension 2, narrowly improving a log^2 t result by Asselah and Gaudillière [JLS12, AG13a]. Independent works by Asselah and Gaudillière and by Jerison, Levine, and Sheffield proved bounds of order √(log t) in higher dimensions [AG13b, JLS13], which have been shown to be tight [AG11]. In the extended-source case, Levine and Peres first showed that the scaling limits of IDLA correspond to solutions of a closely related free boundary problem [LP08]. We recently proved that, if the lattice size is δ, the fluctuations of IDLA away from this expected set are at most of order δ^{3/5}.
The fluctuations can also be studied "on the aggregate", however, which provides interesting insight into the geometry of the problem. Namely, we are interested in studying mean fluctuations over an area of finite volume, as weighted by a test function u ∈ C^∞(R^d). To do this in the point-source case, Jerison, Levine, and Sheffield [JLS14] introduced natural error functions on the lattice δZ^d, which quantify how late or early the IDLA process is in getting to a given point. Specifically, they introduced a fluctuation function E_s and a lateness function L, which capture fluctuations at a single time s and at all times, respectively. They proved that these error functions weakly approach certain Gaussian random fields as the lattice spacing δ decreases, allowing them to find the scaling limits of fluctuations integrated against a test function u. Eli Sadovnik studied this question more recently for an extended source, in the special case of the single-time fluctuation function and with discrete harmonic test functions [Sad16].
In this paper, we extend the techniques used in [JLS14] and [Sad16] in order to prove more general scaling limits of random error functions in the extended-source case. Our main results, Theorems 3.1 and 3.2, show that the fluctuation function E_s and the lateness function L converge weakly to geometry-dependent Gaussian random fields, allowing for any C^4 test functions. In particular, by choosing highly localized test functions, we will be able to calculate "point-correlation functions", which encode the correlations between fluctuations of IDLA at two different points in space.
It must be noted that, in the point-source case of [JLS14], the functions E_s and L measure fluctuations away from a previously calculated continuous limit of IDLA; specifically, they measure the difference between A(t) and a smooth sphere. To our knowledge, this is not possible in the general-source case without stronger estimates on the convergence of discrete harmonic functions. As such, our general-source versions of E_s and L compare IDLA to a closely related deterministic process: the divisible sandpile model of Levine and Peres [LP10]. We show in Theorem 2.7 that the divisible sandpile converges at least as quickly as the best known estimates (from [Dar20]) on IDLA. In fact, we believe that it converges faster than IDLA, but the estimate from Theorem 2.7 is sufficient for our purposes.
After briefly reviewing the necessary theory, we introduce our primary results in Section 3. The following sections are spent proving these results: Section 4 proves the scaling limit of the fixed-time fluctuation function, and Section 5 proves that of the lateness function. Finally, we use these results to calculate point-correlation functions of IDLA fluctuations in Section 6.

Review of lattice growth processes
Here we provide a background on extended-source IDLA and on a related deterministic process, the divisible sandpile growth, also introduced by Levine and Peres [LP08]. Many of our specific definitions are taken from our preceding paper [Dar20]; see that paper for more information.
Following [Dar20], we restrict attention to IDLA processes started from concentrated mass distributions.
Definition 2.1. Let D_0 ⊂ R^2 be a compact, connected domain with smooth boundary, and fix N ∈ Z_{≥0} and T_1, ..., T_N ∈ R_{≥0}. For each i = 1, ..., N and s ∈ [0, T_i], suppose Q^s_i ⊂ D_0 satisfies the following properties: 4. ∂Q^s_i is rectifiable, with arclength bounded independently of s. Finally, set T = Σ_k T_k, and fix increasing functions s_i. In this setting, the concentrated mass distribution associated to the data (D_0, {Q^s_i}, {s_i}) is the associated map σ_s : R^2 → Z_{≥0}. In short, a concentrated mass distribution is a collection of increasing subsets Q^s_i of D_0, such that the total mass at any time s is vol(D_0) + s. The functions s_i give the mass of each subset Q^s_i at the time s. The analysis of this paper holds in its entirety for infinite mass distributions, where T = ∞. For these, we simply require that the finite-time collections {Q^s_i}|_{s≤T} and {σ_s}|_{s≤T} give rise to concentrated mass distributions for any T > 0. We will assume that T < ∞, but we can also imagine that we have simply "cut off" an infinite mass distribution in the manner just described.
Since we are studying processes on discrete lattices, we are primarily interested in the restrictions of these mass functions to the grid (1/m)Z^2. Write S^m_s for the multiset in which each point z ∈ (1/m)Z^2 appears with multiplicity σ_s(z) − 1. We can order S^m_T into a sequence z_{m,j} of source points, taking particles in the order they appear in the sets {S^m_s}, where i(z, j) denotes the multiplicity of z in {z_{m,1}, ..., z_{m,j−1}}.
In short, we are simply ordering the particles in S^m_T in the order they appear (accounting for multiplicity) in the sets {S^m_s}. Given this sequence, we define the discrete densities σ_{m,n} = 1_{(1/m)Z^2 ∩ D_0} + Σ_{i=1}^n 1_{{z_{m,i}}}. It is clear that σ_{m,n} differs from its continuum limit σ_{n/m^2} at only O(m^{−2}) points, accounting for multiplicity.
The (resolution m) internal DLA (IDLA) associated with the mass distribution is the following process:

Definition 2.2 (Internal DLA). Suppose we have a concentrated mass distribution with initial set D_0 giving rise to the sequences {z_{m,t}}. The IDLA A_m(t) associated with the mass distribution is as follows. Define the initial set A_m(0) = (1/m)Z^2 ∩ D_0. Then, for each integer t ≥ 1, start a simple random walk at z_{m,t}, and let z_t be the first point in the walk outside the set A_m(t − 1); then A_m(t) := A_m(t − 1) ∪ {z_t}.
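Definition 2.2 can be sketched in code as well; here the initial occupied set and the source sequence are passed in explicitly (a toy version, with hypothetical inputs standing in for (1/m)Z^2 ∩ D_0 and {z_{m,t}}):

```python
import random

def extended_idla(initial_set, sources, seed=0):
    """Extended-source IDLA: the t-th particle begins its simple random
    walk at the t-th prescribed source point."""
    rng = random.Random(seed)
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    occupied = set(initial_set)            # A_m(0)
    for x, y in sources:                   # source points z_{m,t}
        while (x, y) in occupied:
            dx, dy = rng.choice(steps)
            x, y = x + dx, y + dy
        occupied.add((x, y))               # exit point joins the cluster
    return occupied
```

By the Diaconis-Fulton lemma quoted next, the law of the final occupied set does not depend on the order in which the sources are processed.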
Importantly, the law of A m (t) does not depend on the order of {z m,1 , ..., z m,t }, as proven by Diaconis and Fulton [DF91, Lemma 2.2] (see [DF91, Section 3] for the application of this result to our setting).
For each time s, the sets A_m(m^2 s) approach a deterministic limit D_s almost surely, where D_s is the Diaconis-Fulton "smash sum" of D_0 with the source sets. The smash sum operation is as defined in [LP10]: given finite sets A, B ⊂ (1/m)Z^2, we define the discrete smash sum A ⊕ B as follows. Let C_0 = A ∪ B, and for each x_i ∈ {x_1, ..., x_n} = A ∩ B, start a simple random walk at x_i and stop it upon exiting C_{i−1}. Let y_i be its final position, and define C_i = C_{i−1} ∪ {y_i}; then A ⊕ B := C_n. As proven in [LP10, Theorem 1.3], if we instead take domains A, B ⊂ R^2, the smash sums A_m ⊕ B_m of their lattice discretizations approach a deterministic limit, which we label A ⊕ B. The convergence A_m(m^2 s) → D_s was shown originally by Levine and Peres [LP10]. In [Dar20, Theorem 3.1], we have recently shown the following convergence rate for this scaling limit, in the special case that D_s is a smooth flow, that is, that s → D_s is a smooth isotopy for s ∈ [0, T].
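The discrete smash sum just described admits the same kind of sketch; the points of the overlap A ∩ B are walked out to the complement one at a time (function name ours):

```python
import random

def smash_sum(A, B, seed=0):
    """Diaconis-Fulton smash sum of two finite subsets of Z^2."""
    rng = random.Random(seed)
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    C = set(A) | set(B)                    # C_0 = A ∪ B
    for x, y in sorted(set(A) & set(B)):   # each overlap point x_i
        while (x, y) in C:                 # walk until exiting C_{i-1}
            dx, dy = rng.choice(steps)
            x, y = x + dx, y + dy
        C.add((x, y))                      # C_i = C_{i-1} ∪ {y_i}
    return C
```

Each overlap point contributes exactly one new site, so |A ⊕ B| = |A ∪ B| + |A ∩ B| = |A| + |B|, reflecting the conservation of particle mass.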
Lemma 2.4. Suppose D_s is a smooth flow arising from a concentrated mass distribution. For large enough m, the fluctuation of the associated IDLA A_m(t) is bounded as

(D_s)_{ε_m} ∩ (1/m)Z^2 ⊂ A_m(m^2 s) ⊂ (D_s)^{ε_m} for all s ∈ [0, T],

with probability at least 1 − e^{−m^{2/5}}, where ε_m = C m^{−3/5} for a constant C depending on the flow, and where (D_s)^ε and (D_s)_ε denote outer- and inner-ε-neighborhoods of D_s, respectively.
In other words, the fluctuations of the random set A_m(m^2 s) are unlikely to be larger in magnitude than Cm^{−3/5}, for some fixed C > 0. We will also use this to bound the maximum fluctuations of a closely related process, the divisible sandpile growth, defined as follows:

Definition 2.5 (Divisible Sandpile). Suppose we have a concentrated mass distribution with initial set D_0 giving rise to the sequences {z_{m,i}}. The divisible sandpile aggregation associated with our mass distribution is characterized by its final mass distributions ν_{m,t}. Let ν_{m,0} := 1_{A_m(0)} = 1_{D_0 ∩ (1/m)Z^2}, and define ν_{m,n} inductively as follows.
Given ν_{m,n}, define the intermediate function ν^0_{m,n} := ν_{m,n} + 1_{{z_{m,n+1}}}, and topple in parallel: ν^{t+1}_{m,n}(y) := min(ν^t_{m,n}(y), 1) + (1/4) Σ_{z∼y} (ν^t_{m,n}(z) − 1)^+. In other words, we define ν^{t+1}_{m,n} by taking the "excess mass" at each z in ν^t_{m,n} and splitting it evenly between the neighbors of z. For a large enough (but finite) t', we will have ν^{t'}_{m,n} ≤ 1 everywhere, and the above process must stop; then define ν_{m,n+1} = ν^{t'}_{m,n}.
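The relaxation step can be sketched numerically as follows (a minimal version with a tolerance, since in general exact stabilization is only reached in the limit; the function name is ours):

```python
def sandpile_relax(mass, tol=1e-9):
    """Divisible-sandpile relaxation on Z^2: each site keeps mass at most 1
    and splits its excess equally among its four lattice neighbors."""
    mass = dict(mass)                      # site (x, y) -> mass
    while True:
        full = [z for z, v in mass.items() if v > 1 + tol]
        if not full:
            return mass
        for x, y in full:
            excess = mass[(x, y)] - 1.0
            mass[(x, y)] = 1.0             # keep one unit of mass in place
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nb = (x + dx, y + dy)
                mass[nb] = mass.get(nb, 0.0) + excess / 4.0
```

Toppling conserves total mass, and the final configuration is independent of the toppling order (the abelian property), which is what makes the model a convenient deterministic stand-in for IDLA.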
The following proposition follows directly from the definition of the divisible sandpile:

Finally, we find the same m^{−3/5} bound on maximum fluctuations for the divisible sandpile as we do for IDLA; the following theorem is proved in the Appendix.

Theorem 2.7. Suppose D_τ is a smooth flow arising from a concentrated mass distribution. For large enough m and any time s ∈ [0, T], the fluctuations of the occupied set supp ν_{m,m^2 s} are bounded as

(D_s)_{ε_m} ∩ (1/m)Z^2 ⊂ supp ν_{m,m^2 s} ⊂ (D_s)^{ε_m},

with ε_m = C m^{−3/5} for a constant C depending on the flow.
To simplify notation, we will continue to write ε_m = Cm^{−3/5}, as in Lemma 2.4 and Theorem 2.7. We further define F^s_m, a lattice neighborhood of ∂D_s of width O(ε_m), which will carry the fluctuations below.

Main results
Our two primary results pertain to scaling limits of the fluctuations of A_m(t) away from its deterministic limit. Following [JLS14], we quantify these fluctuations using the following random functions. First, the (time s) error function E^s_m takes a positive value on "early" points, which the IDLA A_m has reached by time m^2 s but which the expected set, represented here by the divisible sandpile occupied set, has not yet reached. It takes a negative value on "late" points, which the divisible sandpile occupied set has reached but the IDLA has not. Although E^s_m itself does not converge (in m) to a well-defined random variable, our primary objects of interest are the limits of the inner products (E^s_m, u). We can think of (E^s_m, u) as a snapshot of the discrepancy at the fixed time s, weighted by the test function u ∈ C^4(R^2).
Through the following theorem, we show that E^s_m converges weakly to a Gaussian random field on the fixed-time curve ∂D_s:

Theorem 3.1. The random variables (E^s_m, u) converge in law to a normal variable of mean 0 and variance ∫_{D_s} |ψ|^2 (1 − σ_s), where ψ solves the Laplace problem on D_s with boundary values ψ|_{∂D_s} ≡ u|_{∂D_s}.

Of course, we can turn this into a covariance formula using a polarization identity:

Cov(E(u), E(v)) = ∫_{D_s} ψ ϕ (1 − σ_s),    (1)

where ψ and ϕ are harmonic on D_s with ψ|_{∂D_s} ≡ u and ϕ|_{∂D_s} ≡ v, respectively. A more sensitive metric is given by the lateness function L^s_m. Up to a scaling factor, the first term in its defining expression is the actual time of arrival at each point. The latter term is an approximation of the expected time of arrival, which we can see as follows.
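For the reader's convenience, the polarization step can be sketched as follows; here we write E(u) for the limiting Gaussian of (E^s_m, u), and use that the map u ↦ ψ is linear, so that u + v corresponds to ψ + ϕ:

```latex
\operatorname{Cov}\bigl(E(u),E(v)\bigr)
  = \tfrac12\Bigl(\operatorname{Var} E(u+v)-\operatorname{Var} E(u)-\operatorname{Var} E(v)\Bigr)
  = \tfrac12\int_{D_s}\bigl(|\psi+\varphi|^2-|\psi|^2-|\varphi|^2\bigr)(1-\sigma_s)
  = \int_{D_s}\psi\,\varphi\,(1-\sigma_s).
```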
Suppose x ∈ (1/m)Z^d, and T_{m,x} is the expected time for A_m(n) to arrive at x. For a brief window around T_{m,x}, the quantity ν_{m,n}(x) lies strictly between 0 and 1, say, when n ∈ {n', n'+1, ..., n'+∆−1}. As ν_{m,n}(x) is constant before n' and after n'+∆, only the terms involving {n', n'+1, ..., n'+∆−1} contribute to the sum (2). Of course, the increments (ν_{m,n} − ν_{m,n−1})(x) are non-negative, and so the sum (2) is a weighted average of {n', n'+1, ..., n'+∆}. As this interval is tightly centered around T_{m,x}, we expect the overall sum to converge (in m) to T_{m,x}.
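The weighted-average claim can be checked on a toy mass profile; below, nu[n] plays the role of ν_{m,n}(x), increasing from 0 to 1 over a short window (the numbers are illustrative only):

```python
# A toy occupation profile nu_n(x): constant outside the window n = 2..4.
nu = [0.0, 0.0, 0.2, 0.7, 1.0, 1.0]

# Sum of n * (nu_n - nu_{n-1}): only the window terms contribute.
increments = {n: nu[n] - nu[n - 1] for n in range(1, len(nu))}
total = sum(n * w for n, w in increments.items())

# The increments are non-negative and sum to 1, so the sum is a
# weighted average of the window {2, 3, 4}.
assert abs(sum(increments.values()) - 1.0) < 1e-12
assert 2 <= total <= 4
```

Here the sum evaluates to 2(0.2) + 3(0.5) + 4(0.3) = 3.1, a weighted average of the window, as claimed.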
Our second result is in the same spirit as Theorem 3.1, showing now that the lateness function converges weakly to a 2D Gaussian random field:

Theorem 3.2. The random variables (L^s_m, u) converge in law to a normal variable of mean 0 and variance given in (3), where ψ_t solves the Laplace problem on D_t with boundary values ψ_t|_{∂D_t} ≡ u|_{∂D_t}.

After proving these results in the following sections, we will turn to an interesting application of Theorem 3.2. Namely, we will use Equations 1 and 3 to compute point-correlation functions, which encode the correlations between fluctuations at two different points. In some sense, point-correlation functions will be local versions of the above results.
Proof of Theorem 3.1

In our analysis below, we will make use of the grids G_m. In particular, we will use the following notion of a grid harmonic function on G_m: a function φ that is discrete harmonic at the vertices of G_m, and linear on each edge of G_m.
Finally, we will let F_{m,t} be the filtration generated by A_m(t).
Proof of Theorem 3.1, Step 1. We will first relate (E^s_m, u) to a family of martingales and show that the difference converges in law to zero.
Let ε_m = Cm^{−3/5}, as in Lemma 2.4 and Theorem 2.7, and let ψ_m be harmonic on (D_s)^{2ε_m} with boundary values u|_{∂(D_s)^{2ε_m}}. Let ψ_{(m)} solve the corresponding grid-Laplace problem on G_m ∩ (D_s)^{2ε_m}. Since ψ_{(m)} = ψ_m on the boundary, standard estimates (for instance, see [Che, Theorem 3.5]) give a bound with constant C_1 ∼ ‖∇^4 u‖. Next, define the martingales M_m(t), where τ* is the first time that A_m exits the enlarged domain; the resulting error term converges to zero. Now, for any δ > 0, we can choose m_0 > 0 such that |(E^s_m, u) − M_m(m^2 s)| ≤ δ on event E for any m > m_0. The probability of E^c tends to zero, so we know that (E^s_m, u) − M_m(m^2 s) converges in probability (and thus in law) to zero.
Step 2. Now, we will show that the family of random variables M_m(m^2 s) converges in law to a zero-mean normal variable with variance ∫_{D_s} |ψ|^2 (1 − σ_s).
For this step, we will follow the style of proof in [Sad16]. Define X_{m,t} := M_m(t) − M_m(t−1). This is a mean-zero martingale difference array adapted to F_{m,t}. The martingale central limit theorem stated in [HH80, Theorem 3.2] thus states that M_m(m^2 s) = Σ_{t≤m^2 s} X_{m,t} converges in law to a normal variable of mean 0 and variance ∫_{D_s} |ψ|^2 (1 − σ_s), so long as the following three conditions hold:

1. E max_t |X_{m,t}|^2 is bounded in m. (This also implies that the array is square-integrable, which is one of the hypotheses of the theorem.)
2. max_t |X_{m,t}| → 0 in probability as m → ∞.
3. Σ_{t≤m^2 s} |X_{m,t}|^2 converges to ∫_{D_s} |ψ|^2 (1 − σ_s) in probability as m → ∞.

As in [Sad16], we will handle the first two conditions by showing that E[max_t |X_{m,t}|^a] → 0 for a ≥ 1. This is clear from the maximum principle, which bounds the increments uniformly.
For the final condition, define the random variables S_m(t), Z_m(t), and N_m(t) := S_m(t) − Z_m(t). Our goal is to show that N_m(m^2 s) → 0 in probability, and thus that S_m(t) can be well-approximated by the simpler variable Z_m(t). For this, first note that N_m satisfies the martingale property; we only need show this for time intervals before τ*, as N_m remains constant thereafter. For t ≤ τ*, the martingale property follows directly. Again taking t ≤ τ*, we estimate the increments of N_m; thus, N_m(m^2 s) → 0 in the L^2 norm, and thus also in probability.
Finally, we show that Z_m(m^2 s) → ∫_{D_s} |ψ|^2 (1 − σ_s) in probability. From the above argument, this would imply that S_m(m^2 s) → ∫_{D_s} |ψ|^2 (1 − σ_s) in probability, which is exactly the third condition of the martingale central limit theorem.
For this purpose, note that, on event E, the set A_m(m^2 s) differs from D_s ∩ (1/m)Z^2 by at most O(m^2 ε_m) points; in this case, as |ψ_{(m)}(x)|^2 is uniformly bounded (as we saw above) in terms of u, we may pass to the continuum sum, using (4) in the final step. Now, we compare against m^{−2} Σ |ψ|^2, where ψ solves the Laplace equation on D_s with ψ|_{∂D_s} ≡ u|_{∂D_s}. For this, suppose that x ∈ ∂D_s maximizes |ψ − ψ_m|, and take x' ∈ ∂(D_s)^{2ε_m} nearby. Of course, ψ − ψ_m is harmonic in D_s, so the maximum principle implies sup_{D_s} |ψ − ψ_m| ≤ 8ε_m C_1. Arguing as before, we find the corresponding bound. Putting these inequalities together shows that the difference converges to zero on event E. Since P(E) → 1, this implies that the above difference converges in probability to zero, so the theorem is proved.

Proof of Theorem 3.2
We prove a slight generalization of this result, in the case that supp u is not necessarily contained in D_s:

Lemma 5.1. Suppose u ∈ C^4(R^2). The random variables (L^s_m, u) converge in law to a normal variable of mean 0 and variance given in (6), where ψ_t solves the Laplace problem on D_t with boundary values ψ_t|_{∂D_t} ≡ u|_{∂D_t}.
Remark. In the case of interest, with supp u ⊂ D_s, we have that ψ_s ≡ 0, and thus the above variance simplifies.

Step 1. We first want to replace (L^s_m, u) with a suitable martingale. Let ψ^t_m solve the Dirichlet problem for u on (D_t)^{2ε_m}, with ε_m = Cm^{−3/5}. Let ψ^t_{(m)} solve the corresponding grid-Laplace problem on G_m ∩ (D_t)^{2ε_m}. As in the proof of Theorem 3.1, this gives an approximation with constant C_1 ∼ ‖∇^4 u‖. Now define the martingale M_m(t), where τ_ℓ is the first time that A_m(j) exits (D_{ℓ/m^2})^{ε_m}, and τ* := τ_{m^2 s} is the first time that it exits (D_s)^{ε_m}. Consider the event E, in which A_m(ℓ) ⊂ (D_{ℓ/m^2})^{ε_m} for all ℓ ≤ m^2 s. By Lemma 2.4, this occurs with probability 1 − e^{−m^{2/5}} → 1. On this event, τ_ℓ ≥ ℓ for all ℓ, and the stopped process agrees with the unstopped one. Of course, on event E, the error function is O(ε_m), as in the previous proof. Thus, M_m(m^2 s) − (L^s_m, u) converges to zero in probability.
Step 2. Note that the martingale increments X_{m,t} = M_m(t) − M_m(t − 1) take an explicit form, computed as before. Now we need to show that M_m(m^2 s) approaches the appropriate normal distribution. We will again make use of the martingale central limit theorem [HH80, Theorem 3.2]; namely, our result is proved if we can show the following three conditions:

1. E max_t |X_{m,t}|^2 is bounded in m.
2. max_t |X_{m,t}| → 0 in probability as m → ∞.
3. Σ_t |X_{m,t}|^2 converges to the expression in (6) in probability as m → ∞.
The first and second conditions follow from a calculation analogous to the one in Section 4, which bounds the increments uniformly. For the final condition, we again define auxiliary variables S_m, Z_m, and N_m := S_m − Z_m. As before, N_m satisfies the martingale property. To see this, we first factor the increments of Z_m as follows; below, write a_{m,t} := A_m(t) \ A_m(t − 1) for the t-th point joined to our IDLA.
We thus see that the only remaining terms are the following cross-terms, from which the martingale property follows, using the fact that the last variable is a linear combination of martingale increments adapted to F_{m,t}. As in the proof of Theorem 3.1, we can use this martingale property to show that N_m(m^2 s) → 0 in L^2, and thus know that N_m(m^2 s) → 0 in probability.
Finally, on event E, we estimate Z_m(m^2 s) as follows. From (7), we can continue with the substitutions ψ^τ_{(m)} → ψ^τ_m. On event E, the sets A_m(j) and D_{j/m^2} differ by at most O(m^2 ε_m) points on the lattice (1/m)Z^2, so we can replace 1_{A_m(j)} → 1_{D_{j/m^2}} with only an additional O(ε_m) error. Finally, since the derivatives of ψ^τ_m are uniformly bounded in both m and τ, we can swap these sums with the appropriate integrals with an error of O(m^{−1}) (which we wrap into the existing O(ε_m) term).

Point correlation functions
In this section, we will compute point-correlation functions for extended-source IDLA. In short, we want to find a local version of Equation 3, which would tell us the correlation between IDLA fluctuations at two specific points p, q ∈ int(D_T) \ D_0. We phrase this problem in terms of limits of smooth bump functions, which we already know how to handle from our main results. Fix ε > 0, and let η^ε_p and η^ε_q be smooth bump functions satisfying the normalization (9). Without loss of generality, we will assume that B_ε(p), B_ε(q) ⊂ D_T, and we will write L_m := L^T_m for the lateness function at time T. From Theorem 3.2, we know that (L_m, η^ε_p) [resp., (L_m, η^ε_q)] tends to a Gaussian variable L(η^ε_p) [resp., L(η^ε_q)] in m. Our primary result is the following:

Theorem 6.1. Suppose p, q ∈ int(D_T) \ D_0, and s_p and s_q satisfy p ∈ ∂D_{s_p}, q ∈ ∂D_{s_q}. For ε > 0, further suppose that η^ε_q and η^ε_p are smooth functions satisfying (9). The covariance between L(η^ε_p) and L(η^ε_q) is then given, up to a small error in ε, in terms of the following data: s* = min(s_p, s_q); the velocities v_p and v_q of the flow s → D_s at p and q (at times s_p and s_q, respectively); and the Poisson kernels F_p and F_q of D_{s_p} and D_{s_q} at p and q, respectively.
For completeness' sake, we first recall the notion of a Poisson kernel:

Definition 6.2. Suppose D is a smoothly bounded domain, and p ∈ ∂D. Then the Poisson kernel F_{p,D} of D at p is the harmonic function on int(D) given by the inward normal derivative of the Green's function at p, where G_D(x, y) is the continuous Green's function for the domain D, and ∂/∂n is the inward normal derivative with respect to the second variable.

Importantly, if f ∈ C^0(∂D), then the function φ(x) = ∫_{∂D} f(p) F_{p,D}(x) dp is harmonic on D and satisfies φ|_{∂D} ≡ f. We will use these functions in the following context. If p ∈ int(D_T) \ D_0, there is a unique s_p > 0 such that p ∈ ∂D_{s_p}. We write F_p for the Poisson kernel of D_{s_p} at p.

Proof of Theorem 6.1. We first deal with the case that s_p ≠ s_q, so that p and q are hit at different times by the flow s → D_s. Without loss of generality, suppose s_p > s_q, and suppose ε is small enough that the balls B_ε(p) and B_ε(q) are separated by the flow. From (3), we know the covariance formula, where ψ^ε_s and ϕ^ε_s are harmonic functions on D_s satisfying ψ^ε_s|_{∂D_s} ≡ η^ε_p and ϕ^ε_s|_{∂D_s} ≡ η^ε_q. In the above formula, we removed the ψ^ε_s ϕ^ε_s term that appears in (3); these terms must all vanish, from (12). Next, note that the remaining terms can only be nonzero when s [resp., s'] lies in a thin (i.e., O(ε)) band around s_p [resp., s_q]. Define s_− := inf{s | B_ε(q) ∩ ∂D_s ≠ ∅} to be the smallest value of s such that ϕ^ε_s is nonzero. From our above discussion, we can rewrite the covariance accordingly, also using the continuity of s → σ_s. Note that the third integral is now always taken over the same set.

Now, introduce coordinates (s, θ) near p such that (s, θ) ∈ ∂D_s and such that θ|_{∂D_s} measures the (signed) arclength from (s, 0) along ∂D_s. Introduce similar coordinates (s, α) near q. Without loss of generality, we assume p = (s_p, 0). From (11), we can rewrite the integrals in these coordinates, which gives a formula for the covariance. Now, η^ε_p is supported on an ε-ball around p, so we only have to consider F_{(s,θ)} for (s, θ) ∈ B_ε(p). For these points, we find that |F_{(s,θ)}(z) − F_p(z)| ≤ εC/|z − p|^2 (see the footnote below), and thus the corresponding substitution introduces only a small error. Repeating the same
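For the unit disk, the Poisson kernel has a classical closed form, and the reproducing property of Definition 6.2 can be checked numerically (a sketch; the normalization against arclength is an assumption matching the convention above):

```python
import math

def poisson_kernel_unit_disk(z, p):
    """Classical Poisson kernel of the unit disk at the boundary point p,
    evaluated at the interior point z (both complex numbers)."""
    return (1.0 - abs(z) ** 2) / (2.0 * math.pi * abs(z - p) ** 2)

# The kernel integrates to 1 over the boundary circle, so constants are
# reproduced: phi(z) = \int f(p) F_p(z) |dp| with f = 1 gives phi = 1.
z = 0.3 + 0.4j
n = 4000
total = sum(
    poisson_kernel_unit_disk(z, complex(math.cos(t), math.sin(t))) * (2 * math.pi / n)
    for t in (2 * math.pi * k / n for k in range(n))
)
assert abs(total - 1.0) < 1e-6
```

Replacing f = 1 by boundary data sampled from a harmonic function recovers its interior values, which is the sense in which F_{p,D} solves the Dirichlet problem.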
argument for F_{(s,α)} and F_q (and using the fact that F_p and F_q are bounded near the pole of the other), we find the analogous bound. Finally, we convert from the coordinates (s, θ) and (s, α) back to standard Euclidean coordinates. For this, note that (s, θ) and (s, α) are orthogonal coordinate systems, and that θ and α are unit-speed parametrized, by definition. Thus, the only contributions to |d(s, θ)/d(x, y)| and |d(s, α)/d(x, y)| are the scaling factors in the s-direction. These are exactly the (inverse) velocities v(s, θ)^{−1} and v(s, α)^{−1} of the flow s → D_s, and we find ∫ ds dθ η^ε_p(s, θ) = v_p^{−1} + O(ε), and similarly ∫ ds dα η^ε_q(s', α) = v_q^{−1} + O(ε). Putting these ingredients together, we get the stated formula, wrapping the O(ε) error term from switching s_− to s* into the existing O(ε^{1/3}) error.
1 For instance, we can find this estimate by first comparing F_{(s,θ)} and F_p to nearby Green's functions using (10), and then comparing the Green's functions to one another by bounding their gradients above as |∇_x G_D(x, y)| ≤ C|x − y|^{−1}.
Now, assume that s_p = s_q, and suppose ε is small enough that B_ε(p) and B_ε(q) are disjoint. Splitting the covariance terms as before, we can follow the same logic as in the first case, and the theorem follows.
We can apply this formula concretely to the case of a radially-expanding disk. Suppose that D_0 is the unit disk, and set Q^s_0 = B_{√(s/π)} for s ∈ [0, 1). In this setting, we can imagine our source as a collection of outwardly moving rings of radius 0 ≤ r < 1, as shown in Figure 2. From symmetry considerations, it is clear that the D_s = B_{√(1+s/π)} are outwardly expanding disks. Suppose p and q lie in the plane, on origin-centered circles of radii 1 < r_q ≤ r_p < √2 and at polar angles θ_p, θ_q. The functions F_p and F_q then take the standard form of Poisson kernels of the disks B_{r_p} and B_{r_q}.

Figure 3: A plot and contour map of the function g(p, q), where q = (1/2 + √5/4, 0) is fixed. Note the logarithmic singularity at p = q, and that the function vanishes for p along the unit circle (the inner boundary of the domain).
The function g(p, q) has several key properties, which we can see in Figure 3. For one, g(p, q) is positive if and only if θ_p and θ_q are nearby. This confirms the geometric intuition that, for instance, an early point leads to other nearby early points, but that it prevents distant early points (by using up particle mass itself). Secondly, g(p, q) vanishes as either p or q approaches the unit circle, likely reflecting the fact that points nearer to D_0 are "more deterministic", i.e., that the variance of their lateness decreases to 0 as they get closer to D_0. It is easy to check from Theorem 6.1 that this property holds true for other flows as well, with the unit circle replaced by the boundary of D_0 in general.
Next, note the logarithmic singularity present at p = q. This is to be expected, in analogy with the (free space) Green's function G(x, y) = log |x − y|. Indeed, just as we can view the Green's function as giving an inner product (u, v)_{−1} := ∫∫ dx dy u(x) G(x, y) v(y) = ∫ dy (∇^{−2} u)(y) v(y), we can view the point-correlation function g as the kernel of the inner product defined in Equation 3, where, as before, ψ_s and ϕ_s are the solutions of the Dirichlet problem on D_s for u and v, respectively.

Directions for further research
One interesting extension of this work would be to extend these results to higher dimensions. In dimension d, the appropriate scaling factor for the fluctuation functions E^s_m and L^s_m would be m^{d/2} (just as it is m = m^{2/2} here). In general, then, the error found in (5) and (8) would come out to be O(m^{d/2} ε_m^2). For this to decrease, we then need the bound ε_m = o(m^{−d/4}) on the maximum fluctuations. This is likely possible to achieve if d = 3, but clearly impossible for d ≥ 4, as ε_m can be no smaller than the lattice spacing m^{−1}.
However, there are weaker results that remain possible for d ≥ 4. For one, if we require the test function u to be harmonic, then we could achieve ‖u − ψ_{(m)}‖_∞ = O(m^{−2}) on the domain of interest, rather than our existing ‖u − ψ_{(m)}‖_∞ = O(ε_m). In this case, the requirement on ε_m becomes ε_m = o(m^{2−d/2}), which now appears possible for dimensions 4 and 5.
Another important direction of research would be to generalize the sorts of possible sources for IDLA. For instance, it would be interesting to see if corresponding scaling limits hold if, instead of starting from a concentrated mass distribution, we were to start points evenly from a submanifold of D_0. Starting from the boundary of D_0, for example, may provide a good substitute for starting particles evenly across D_0 itself. In chemical applications, this adjusted setting could model a solid particle source of a particular shape.
Fortunately, the methods used in this paper translate fairly straightforwardly to other settings. The greatest obstacle to generalizing our results is finding an analogue of Lemma 2.4, which was the primary result of the preceding paper [Dar20]. Indeed, if it could be shown that the fluctuations of IDLA from a particular source satisfy a similar O(m^{−1/2−ε}) bound (for any ε > 0), the remainder of our argument could likely be repeated.
Appendix A: maximum fluctuations of the divisible sandpile

We will use the capital N_m(t) to denote the fully occupied set {z : ν_{m,t}(z) = 1}. We will also use the notation of [Dar20]; in particular, for any ζ ∈ (1/m)Z^2 \ D_0, we will write τ = τ(ζ) for the time at which ζ ∈ ∂D_τ, and we will use H_ζ and Ω_ζ exactly as in that paper. We will not give more details on these objects here. Now, we say that a point z ∈ (1/m)Z^2 is ε-early at time t if z ∈ N_m(t), but z ∉ (D_{t/m^2})^ε. Similarly, z is ε-late at time t if z ∈ (D_{t/m^2})_ε, but z ∉ N_m(t). Finally, we will define a stopped version of ν_{m,t}, as follows. For a large enough t', we have that ν^{t'}_{ζ,n}(z) ≤ 1 everywhere in supp ν^{t'}_{m,n} \ ∂D_τ; then we define ν_{ζ,n+1} = ν^{t'}_{ζ,n}. In parallel with the original divisible sandpile model, we define ν_{ζ,n+1} by taking the excess mass at z in ν_{ζ,n} and splitting it around the edge of supp ν_{ζ,n} according to a discrete harmonic measure. New in this case, however, is that we stop mass before it exits the domain D_τ.
Note that this satisfies the same key equality as the original harmonic measure; namely, the corresponding averaging identity holds for any grid harmonic function (see [Dar20]).

Figure 1 :
Figure 1: The smash sum A ⊕ B is the deterministic limit of an IDLA-type growth process starting from the sets A and B, representing the dispersal of particles in A ∩ B (in dark blue above) to the edges of A ∪ B (in yellow above).

Figure 2 :
Figure 2: An illustration of the flow s → D_s in the case of a radially-expanding disk. Here, D_0 is the unit disk (cyan, left), and our single source set Q^T_0 is a smaller disk within it (red, left). As time passes, source points move from the center of Q^s_0 to the outer boundary of D_s.
so the function ψ_{(m)} is defined on all of A_m(τ*). Suppose x ∈ F^s_m achieves this supremum, and choose x' ∈ ∂(D_s)^{2ε_m} such that |x − x'| ≤ 4ε_m. Now, ∂_i ψ_m solves the Laplace equation with boundary values ∂_i u|_{∂(D_s)^{2ε_m}}, so the maximum principle applies. Consider the event E that supp E^s_m ⊂ F^s_m; by Lemma 2.4 and Theorem 2.7, this event occurs with probability 1 − e^{−m^{2/5}} → 1. In this case, τ* ≥ m^2 s, so, since ψ_{(m)} is discrete harmonic, we have M_m(m^2 s) = (E^s_m, ψ_{(m)}). To relate this to (E^s_m, u), we first want to bound sup_{F^s_m} |u − ψ_m|.