Neighborhood growth dynamics on the Hamming plane

We initiate the study of general neighborhood growth dynamics on two-dimensional Hamming graphs. The decision to add a point is made by counting the currently occupied points on the horizontal and the vertical line through it, and checking whether the pair of counts lies outside a fixed Young diagram. We focus on two related extremal quantities. The first is the size of the smallest set that eventually occupies the entire plane. The second is the minimum of an energy-entropy functional that arises from the scaling of the probability of eventual full occupation versus the density of the initial product measure within a rectangle. We demonstrate the existence of this scaling and study these quantities for large Young diagrams.


Introduction
We consider a long-range deterministic growth process on the discrete plane, restricted for convenience to the first quadrant Z^2_+. This dynamics iteratively enlarges a subset of Z^2_+ by adding points based on counts on the entire horizontal and vertical lines through them. The connectivity is therefore that of a two-dimensional Hamming graph, that is, a Cartesian product of two complete graphs. The papers [Siv, GHPS, Sli, BBLN] address some percolation and growth processes on vertices of Hamming graphs, but such highly nonlocal growth models remain largely unexplored. In particular, the few two-dimensional problems addressed so far appear to be too limited to offer much insight, and we seek to remedy this with a class of models we now introduce.
For integers a, b ∈ N, we let R_{a,b} = ([0, a − 1] × [0, b − 1]) ∩ Z^2_+ be the discrete a × b rectangle. A set Z = ∪_{(a,b)∈I} R_{a,b}, given by a union of rectangles over some set I ⊆ N^2, is called a (discrete) zero-set. We allow the trivial case Z = ∅, and also the possibility that Z is infinite. However, in most of the paper the zero-sets will be finite and therefore equivalent to Young diagrams in the French notation [Rom] (see Figure 1.1a). Our dynamics will be given by iteration of a growth transformation T : 2^{Z^2_+} → 2^{Z^2_+}, and will be determined by the associated zero-set Z, so we will commonly not distinguish between the two.
Fix a zero-set Z. Suppose A ⊆ Z^2_+ and x ∈ Z^2_+. Let L_h(x) and L_v(x) be the horizontal and the vertical line through x, so that the neighborhood of x is L_h(x) ∪ L_v(x). If x ∈ A, then x ∈ T(A). If x ∉ A, we compute the horizontal and vertical counts to form the pair (u, v) = (row(x, A), col(x, A)) = (|A ∩ L_h(x)|, |A ∩ L_v(x)|), and declare x ∈ T(A) if and only if (u, v) ∉ Z. Observe that, by the definition of a zero-set, monotonicity holds: A ⊆ A′ implies T(A) ⊆ T(A′). We call such a rule a neighborhood growth rule. So defined, this class in fact comprises all rules that satisfy the natural monotonicity and symmetry assumptions and have only nearest-neighbor dependence under the Hamming connectivity; see Section 2.1.
A given initial set A ⊆ Z^2_+ and T then specify the discrete-time trajectory A_t = T^t(A), for t ≥ 0. The points in A_t and A_t^c are respectively called occupied and empty at time t. We define A_∞ = T^∞(A) = ∪_{t≥0} A_t to be the set of eventually occupied points. We say that the set A spans if A_∞ = Z^2_+. We also say that a set B ⊆ Z^2_+ is spanned if B ⊆ T^∞(A), and that B is internally spanned by A if the dynamics restricted to B spans it: B = T^∞(A ∩ B). See Figure 1.1b for an example of these dynamics.
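The transformation T and the spanning check can be simulated directly. The following minimal sketch (ours, not from the paper) restricts the dynamics to a finite rectangle R_{N,M} and represents the zero-set Z as a set of forbidden count pairs; the function names are our own.

```python
# A minimal simulation sketch of neighborhood growth with zero-set Z,
# restricted to the rectangle R_{N,M}.  Z is a set of pairs (u, v): a point
# stays empty exactly when its (row count, column count) lies in Z.

def grow(initial, Z, N, M):
    """Iterate the transformation T until a fixed point and return A_infinity."""
    occupied = set(initial)
    while True:
        row = [0] * M  # row[y]: occupied points on the horizontal line at height y
        col = [0] * N  # col[x]: occupied points on the vertical line at x
        for (x, y) in occupied:
            row[y] += 1
            col[x] += 1
        # x joins T(A) iff its count pair lies outside the Young diagram Z
        new = {(x, y) for x in range(N) for y in range(M)
               if (x, y) not in occupied and (row[y], col[x]) not in Z}
        if not new:
            return occupied
        occupied |= new

def spans(initial, Z, N, M):
    return len(grow(initial, Z, N, M)) == N * M

# Example: bootstrap percolation with theta = 2, i.e. Z = T_2 = {(u,v): u+v <= 1}.
T2 = {(0, 0), (1, 0), (0, 1)}
```

With Z = T_2, two points sharing no line, such as {(0, 0), (1, 1)}, already span, while a single point is inert.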
The central theme of this paper is minimization of certain functionals on the set A of all finite spanning sets. Perhaps the simplest such functional is the cardinality, which results in the quantity γ(T ) = γ(Z) = min{|A| : A ∈ A}.
Our second functional is related but requires further explanation and notation, and we will introduce it below when we state our main results. We first put the topic in the context of previous work.
The best known special case of neighborhood growth is given by an integer threshold θ ≥ 1, with the rule that x joins the occupied set whenever the entire neighborhood count is at least θ. This rule makes sense on any graph; in our case it translates to the triangular zero-set Z = T_θ = {(u, v) : u + v ≤ θ − 1}. Such dynamics are known by the name of threshold growth [GG1] or bootstrap percolation [CLR]. Bootstrap percolation on graphs with short-range connectivity has a long and distinguished history as a model for metastability and nucleation.

(Figure 1.1a caption: A zero-set Z (grey region). Shapes on the external boundary correspond to distinct minimal neighborhood counts that will result in occupation of vertices. E.g., the diamond signifies occupation by having at least one horizontal and at least two vertical neighbors.)

The most common setting is a graph of the form [k]^d, a Cartesian product of path graphs on k points, and thus with standard nearest-neighbor lattice connectivity. The foundational mathematical paper is [AL], which studied what we call the classic bootstrap percolation, the process with θ = 2 on [n]^2. A brief summary of this paper's ongoing legacy is impossible, so we mention only a few notable successors: [Hol] gives the precise asymptotics for the classic bootstrap percolation; [BBDM] extends the result to all [n]^d and θ; the hypercube [2]^n with θ = 2 is analyzed in [BB, BBDM]; and a recent paper [BDMS] addresses a bootstrap percolation model with drift. The main focus of this voluminous research is estimation of the critical probability on large finite sets, that is, the initial occupation density p_c that makes spanning occur with probability 1/2. It is typical for this class of models that p_c approaches zero very slowly with increasing system size, certainly slower than any power, and that the transition in the probability of spanning from small to close to 1 near p_c is very sharp.
For example, p c ∼ π 2 /(18 log n) for the classic bootstrap percolation [Hol]. Neither slow decay nor sharp transition happen for supercritical threshold growth on the two-dimensional lattice [GG1] or threshold growth on Hamming graphs [GHPS, Sli], where instead power laws hold. One of our main results, Theorem 1.3, shows that, for any neighborhood growth, there is a well-defined power-law relationship between the density of the initial set, the size of the system, and the probability of spanning.
Another special case is the line growth, where Z = R a,b for some a, b ∈ N. This was introduced under the name line percolation in the recent paper [BBLN], which proves that γ(R a,b ) = ab, establishes a similar result in higher dimensions, and obtains the large deviation rate (defined below) for Z = R a,a on a square. Some of our results are therefore extensions of those in [BBLN]. In particular, one may ask for which Z the equality γ(Z) = γ(R a,b ) holds for some R a,b ⊆ Z. We discuss this in Section 2.5.
Extremal problems play a prominent role in growth models: they feature in the estimation of the nucleation probability, but they are also interesting in their own right. For bootstrap percolation, the size of the smallest spanning subset for [n] d when θ = 2 is known to be d(n − 1)/2 + 1 for all n and d [BBM]; the clever argument that the smallest spanning set for classic bootstrap percolation on [n] 2 has size n is a folk classic. The situation is much murkier for larger θ; see [BPe, BBM] for a review of known results and conjectures for low-dimensional lattices [n] d and hypercubes [2] n . The smallest spanning sets have also been studied for bootstrap percolation on trees [Rie2] and certain hypergraphs [BBMR]. However, the closest parallel to the analysis of γ in the present paper is the large neighborhood setting for the threshold growth model on Z 2 from [GG2]. Several related extremal questions, which are not considered in this paper, are also of interest. For example, one may ask for the largest size of the inclusion-minimal set that spans ( [Mor] addresses this for the classic bootstrap percolation, [Rie1] for hypercubes with θ = 2, and [Rie2] for trees), or for the longest time that a spanning set may take to span (this is the subject of a recent paper [BPr] on the classic bootstrap percolation).
We now proceed to our main results, beginning with a theorem that gives basic information on the size of γ. The upper bound we give cannot be improved, as it is achieved by the line growth. We do not know whether the 1/4 in the lower bound can be replaced by a larger number.

Theorem 1.1. For any finite zero-set Z,
(1/4)|Z| ≤ γ(Z) ≤ |Z|.
Assume that the initially occupied set is restricted to a rectangle R_{N,M} that is large enough to include the entire Z (which is then, of course, finite). Then, as is easy to see, the dynamics spans Z^2_+ if and only if it internally spans R_{N,M}. As all our rectangles will satisfy this assumption, we will not distinguish between spanning and internal spanning. Now, one may ask whether a configuration restricted to the interior of such a rectangle requires more sites to span than an unrestricted configuration. Our next result answers this question in the negative, establishing a property of obvious importance for a computer search for smallest spanning sets.

Theorem 1.2. Assume that a_0, b_0 ∈ N are such that Z ⊆ R_{a_0,b_0}. Then γ(Z) = min{|A| : A ∈ A and A ⊆ R_{a_0,b_0}}.
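Theorem 1.2 makes a brute-force computation of γ(Z) feasible: only subsets of R_{a_0,b_0} need to be searched, with spanning checked as internal spanning of a rectangle containing Z. A hypothetical sketch (the window sizes and helper names are ours):

```python
from itertools import combinations

def final_set(initial, Z, N, M):
    """Run the neighborhood growth with zero-set Z inside R_{N,M} to its fixed point."""
    occ = set(initial)
    while True:
        row = [0] * M
        col = [0] * N
        for (x, y) in occ:
            row[y] += 1
            col[x] += 1
        new = {(x, y) for x in range(N) for y in range(M)
               if (x, y) not in occ and (row[y], col[x]) not in Z}
        if not new:
            return occ
        occ |= new

def gamma(Z, a0, b0, N, M):
    """Smallest size of a spanning set, searched inside R_{a0,b0} (Theorem 1.2);
    R_{N,M} is any rectangle containing Z, so internal spanning suffices."""
    cells = [(x, y) for x in range(a0) for y in range(b0)]
    for k in range(len(cells) + 1):
        for A in combinations(cells, k):
            if len(final_set(A, Z, N, M)) == N * M:
                return k
    return None  # no subset of R_{a0,b0} spans

# gamma(R_{2,2}) = 2*2 = 4, in line with the line growth formula gamma(R_{a,b}) = ab.
R22 = {(0, 0), (0, 1), (1, 0), (1, 1)}
```

For the triangular zero-set T_2 the same search returns 2, realized by any pair of non-collinear points.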
Next we consider spanning by random subsets of rectangles R N,M . Assume that the initial configuration is restricted to R N,M , where it is chosen according to a product measure with a small density p > 0. The possibly unequal sizes N and M need to increase as p → 0, and, given that in all known cases spanning probabilities on Hamming graphs obey power laws [GHPS, BBLN], it is natural to suppose that they scale as powers of p. Thus we fix α, β ≥ 0 and assume that, as p → 0, N, M → ∞ and log N ∼ −α log p, log M ∼ −β log p.
We will denote by Span the event that the so-defined initial set spans, and turn our attention to the question of the resulting power-law scaling for P_p(Span). The answer will involve finding the optimal energy-entropy balance, so there is a conceptual connection with large deviation theory, despite the fact that the probabilities involved are not exponentially small. Thus we call the quantity
I = lim_{p→0} log P_p(Span) / log p,
provided the limit exists, the large deviation rate for the event Span.
The rate I is given as the minimum, over the spanning sets, of the functional ρ that we now define. For a finite set A ⊆ Z^2_+, let π_x(A) and π_y(A) be the projections of A on the x-axis and the y-axis, respectively. Then let
ρ(B) = |B| − α|π_x(B)| − β|π_y(B)|.
The term |B| represents the energy of the subset B, and the linear combination of the sizes of the two projections the entropy of B. In the next theorem, we use the following notation for the outside boundary of a Young diagram Y:
∂^o Y = {(u, v) ∈ Z^2_+ \ Y : (u − 1, v) ∈ Y or u = 0, and (u, v − 1) ∈ Y or v = 0}.
Also, we use the notation a ∨ b = max(a, b) and a ∧ b = min(a, b) for real numbers a, b.

Theorem 1.3. For any finite zero-set Z, the large deviation rate I(α, β, Z) exists. Moreover, there exists a finite set A_0 ⊆ A, independent of α and β, so that
I(α, β, Z) = min{ρ(A) : A ∈ A_0}.
The rate I(α, β, Z), as a function of (α, β), is continuous, piecewise linear, nonincreasing in both arguments, concave when α + β ≤ 1, and satisfies I(0, 0, Z) = γ(Z) > I(α, β, Z) unless α = β = 0.
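For small zero-sets the minimum of the energy-entropy functional ρ(B) = |B| − α|π_x(B)| − β|π_y(B)| over spanning sets can be found exhaustively. The sketch below is ours: the search window R_{3,3}, the rectangle R_{4,4} used for the spanning check, and the values α = β = 1/4 are all illustrative choices.

```python
from itertools import combinations

def final_set(initial, Z, N, M):
    """Neighborhood growth with zero-set Z inside R_{N,M}, run to its fixed point."""
    occ = set(initial)
    while True:
        row = [0] * M
        col = [0] * N
        for (x, y) in occ:
            row[y] += 1
            col[x] += 1
        new = {(x, y) for x in range(N) for y in range(M)
               if (x, y) not in occ and (row[y], col[x]) not in Z}
        if not new:
            return occ
        occ |= new

def rho(B, alpha, beta):
    # energy |B| minus entropy alpha*|pi_x(B)| + beta*|pi_y(B)|
    return len(B) - alpha * len({x for (x, _) in B}) - beta * len({y for (_, y) in B})

def min_rho(Z, alpha, beta, window=3, N=4, M=4):
    """Brute-force minimum of rho over spanning subsets of a small window."""
    cells = [(x, y) for x in range(window) for y in range(window)]
    best = float("inf")
    for k in range(1, len(cells) + 1):
        for B in combinations(cells, k):
            if len(final_set(B, Z, N, M)) == N * M:
                best = min(best, rho(set(B), alpha, beta))
    return best

T2 = {(0, 0), (1, 0), (0, 1)}
```

For Z = T_2 and α = β = 1/4, the minimizers are pairs of non-collinear points, giving ρ = 2 − 2α − 2β = 1.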
We give explicit formulae for I(α, β, R a,b ) and I(α, α, T θ ) in Sections 5.2 and 5.3. In general, determining an explicit analytical formula for this rate even for a moderately large Z appears to be quite challenging. Figure 1.2a depicts the support of I(·, ·, T θ ) for several values of θ, and Figure 1.2b shows the function I(α, β, R 9,4 ).
(Figure 1.2b caption: The function I(α, β, R_{9,4}). Lighter shades correspond to steeper gradients.)

It is clear that both γ and I increase if Z is enlarged, so it is natural to ask how they behave for large Z. Theorem 1.1 suggests that γ(Z)/|Z| might converge, and this is indeed true with the proper definition of convergence of Z, which we now formulate.
We define a Euclidean zero-set, or a continuous Young diagram, Z̃, to be a closed subset of R^2_+ such that (a, b) ∈ Z̃ implies [0, a] × [0, b] ⊆ Z̃, and such that Z̃ is the closure of Z̃ ∩ (0, ∞)^2. For Euclidean zero-sets Z̃_n and Z̃, we say that the sequence Z̃_n E-converges to Z̃, and write Z̃_n →_E Z̃. Observe that, for a (discrete) zero-set Z, square(Z) (the union of closed unit squares whose lower left corners are the points of Z) is a Euclidean zero-set. Convergence of a sequence Z_n of zero-sets will mean convergence to some limit Z̃ of their properly scaled square representations. We note that we do not assume that Z̃ is bounded; in fact, unbounded continuous Young diagrams with finite area arise as limits of random selections of discrete ones; see Section 8.
Next, we state our main convergence theorem, which provides the properly scaled limits for γ, I, and another extremal quantity that we now introduce. Call a set A ⊆ Z^2_+ thin if every point x ∈ A has no other points of A either on the vertical line or on the horizontal line through x. We denote by γ_thin(Z) the cardinality of the smallest thin spanning set for Z.

Theorem 1.4. There exist functions I(α, β, Z̃), γ(Z̃) = I(0, 0, Z̃), and γ_thin(Z̃), defined for Euclidean zero-sets Z̃ and (α, β) ∈ [0, 1]^2, so that the following holds.
Assume that Z_n is a sequence of discrete zero-sets and δ_n > 0 is a sequence of numbers such that δ_n → 0 and δ_n square(Z_n) →_E Z̃. If area(Z̃) = ∞, then I(·, ·, Z̃) ≡ ∞ on [0, 1)^2 and γ_thin(Z̃) = ∞. If area(Z̃) < ∞, then I(·, ·, Z̃) is finite, concave, and continuous on [0, 1)^2.

The function γ can be defined through a natural Euclidean counterpart of the growth dynamics, replacing cardinality of two-dimensional discrete sets with area and cardinality of one-dimensional ones with length. However, if we attempt such a naive definition for I, we get zero unless α = β = 0, because Euclidean sets can have projection lengths much larger than their areas. In fact, to properly define I, we need to understand the design of optimal sets for large Z. Roughly, such sets are unions of two parts: a thick "core" that contributes very little to the entropy, and thin high-entropy tentacles. The resulting variational characterization of I when Z̃ is bounded is given by the formula (6.3). We proceed to give more information on I, starting with the general bounds.

Theorem 1.5. For a Euclidean zero-set Z̃ with finite area and (α, β) ∈ [0, 1]^2, the bounds (1.6) and (1.7) hold.

The lower bound (1.6) is sharp: it is attained for all α and β if and only if Z̃ = R_{a,b} for some a, b > 0 (Corollary 7.1). The upper bound (1.7) is almost certainly not sharp, as it equals the trivial bound γ(Z̃) on a large portion of [0, 1]^2. To what extent it can be improved is an interesting open problem, which we clarify, to some extent, by investigating the behavior of I near the corners of the unit square.

Theorem 1.6. For any Euclidean zero-set Z̃ with finite area, (1.8) and (1.9) hold. Moreover, (1.10) holds for the supremum over Euclidean zero-sets Z̃ with finite area.

Note that (1.10) says that the slopes of the supremum are 0 at α = 0 and −2 at α = 1. These match the slopes of the two expressions involving γ in the upper bound (1.7), while the expression involving area has the correct slope at (1, 0) due to (1.8).
Therefore no linear improvement of (1.7) is possible near the corners of the square. We obtain (1.10), which in particular implies that γ(Z̃) and γ_thin(Z̃) are not always equal, by analyzing L-shaped zero-sets with long arms. The proofs of all parts of Theorem 1.6 again rely on providing a lot of information about the design of the optimal spanning sets, which turn out to be very thick near (0, 0) and very thin near (1, 0) and (1, 1).
We conclude with a brief outline of the rest of the paper. In Section 2, we prove some preliminary results and discuss lower bounds on γ for small Z and for small perturbations of large Z. In Sections 3.1 and 3.2 we analyze smallest spanning sets, providing proofs of Theorems 1.1 and 1.2. In Section 4.1 we prove (1.1), and in Section 4.2 we prove general upper and lower bounds on the large deviation rate; we then complete the proof of Theorem 1.3 in Section 5.1. In Sections 5.2 and 5.3 we provide derivations for the two cases for which the large deviation rate I is known exactly. In Section 6 we introduce Hamming neighborhood growth on the continuous plane and prove Theorem 1.4, completing the proof in Section 6.5. Sections 7.1-7.4 contain proofs of Theorem 1.5 (completed in Section 7.1) and Theorem 1.6 (completed in Section 7.4) and give some related results on I for large Z. We conclude with an application of limiting shape results for randomly selected Young diagrams in Section 8, and with a selection of open problems in Section 9.

The pattern-inclusion growth
The neighborhood growth rules defined in Section 1 are part of a much larger class of pattern-inclusion dynamics, which we define in this section. Our reason for doing so is not an attempt to develop a comprehensive theory in this general setting, but rather that we need Theorem 2.2 in the proof of Theorem 1.3.
Any process that takes advantage of the connectivity of the Hamming plane will have long range of interaction, so locality, as in cellular automata growth dynamics [Gra], is out of the question, but we retain some of its flavor by the property (G4) below. Again, we assume that the growth takes place on the vertex set Z 2 + .
A growth transformation is a map T : 2^{Z^2_+} → 2^{Z^2_+} with the following properties: (G1) A ⊆ T(A); (G2) A ⊆ A′ implies T(A) ⊆ T(A′); (G3) permutation invariance: T commutes with any permutation of rows and any permutation of columns of Z^2_+; and (G4) finite inducement: there exists a number K so that, for any A ⊆ Z^2_+ and x ∈ T(A), there exists a set A′ ⊆ A such that |A′| ≤ K and x ∈ T(A′).
A growth dynamics starting from the initially occupied set A is defined as in Section 1, by A_t = T^t(A) for t ≥ 0 and A_∞ = T^∞(A) = ∪_{t≥0} A_t. It follows from (G4) that A_∞ is always inert, that is, T(A_∞) = A_∞. As for the neighborhood growth, we say that A spans if T^∞(A) = Z^2_+. This notion leads to another property of T: (G5) voracity: there exists a finite set A ⊆ Z^2_+ that spans.
Example 2.1. If T is the neighborhood growth with Z consisting of the nonnegative x- and y-axes, then T fails voracity, as no A with an empty (horizontal or vertical) line spans.
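The failure of voracity in Example 2.1 can be seen on a finite window. With Z the union of the two axes, a point joins exactly when it has at least one occupied point on its row and at least one on its column; the sketch below (ours) treats Z as a predicate since it is infinite.

```python
def grow_pred(initial, in_Z, N, M):
    """Neighborhood growth inside R_{N,M}, where membership in the (possibly
    infinite) zero-set is given by the predicate in_Z(u, v)."""
    occ = set(initial)
    while True:
        row = [0] * M
        col = [0] * N
        for (x, y) in occ:
            row[y] += 1
            col[x] += 1
        new = {(x, y) for x in range(N) for y in range(M)
               if (x, y) not in occ and not in_Z(row[y], col[x])}
        if not new:
            return occ
        occ |= new

# Z = nonnegative x- and y-axes: stay empty when either count is zero.
axes = lambda u, v: u == 0 or v == 0

# A 4x4 block of 16 points leaves row 4 and column 4 empty and never fills them,
# while a 5-point diagonal leaves no line empty and spans in one step.
block = {(x, y) for x in range(4) for y in range(4)}
diag = {(i, i) for i in range(5)}
```

The 16-point block is stuck, while the 5-point diagonal spans: exactly the dichotomy of the example.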
A pattern is a finite subset of Z^2_+. Two patterns are equivalent if the rows and columns of Z^2_+ can be permuted to transform one into the other, and 0-equivalent if they can be so permuted while keeping the 0th row and 0th column fixed. We say that A ⊆ Z^2_+ contains a pattern P if there exist permutations σ_h and σ_v of rows and columns of Z^2_+ that transform A into a set A′ such that P ⊆ A′. Moreover, we say that a pattern is observed by the origin 0 = (0, 0) in A if there exist such permutations σ_h and σ_v that also fix 0.
There is a bijection between growth transformations T and finite sets of patterns P with the following properties: (P1) {0} ∈ P; and (P2) no pattern in P is 0-equivalent to a subset of another pattern in P.
We consider sets P_1 and P_2 of patterns equivalent if they have the same elements up to 0-equivalence.
For a set of patterns P that satisfies (P1)-(P2), we call the transformation T = T_P that commutes with any transposition of rows and any transposition of columns and satisfies
(2.1) 0 ∈ T(A) if and only if there exists a pattern P ∈ P observed by 0 in A
a pattern-inclusion transformation. Observe that T_P is uniquely defined by the equivalence class of P.
Theorem 2.2. A composition of two growth transformations is a growth transformation. Moreover, any map T : 2 Z 2 + → 2 Z 2 + is a growth transformation if and only if it is a pattern inclusion transformation.
Proof. The first statement is easy to check by (G1-4). To prove the second statement assume first that T is a growth transformation. Then gather all inclusion-minimal sets A that result in 0 ∈ T (A); there are finitely many 0-equivalence classes of them by (G4), and so we can collect one pattern per 0-equivalence class to form P. The converse statement is again easy to check by definition.
We now formally state the connection to the neighborhood growth. Proposition 2.3. A neighborhood growth transformation is characterized by a set P of patterns that are included in the two lines through 0. It is voracious if and only if its zero-set Z is finite.
We omit the simple proof of this proposition. From now on, we will assume that all zero-sets are finite.
We end this section with an example that shows that (G4) is indeed a necessary assumption if we want the set P to be finite (which is in turn a crucial property for our application).
Example 2.4. We give an example of a dynamics given by (2.1) with an infinite set P of finite patterns that satisfies (G1)-(G3) and (G5), but not (G4). Define P to comprise {0} and the following patterns (Here, we denote by × a point in the pattern.) No pattern above is 0-equivalent to a subset of another, and a 2 by 1 rectangle of occupied sites spans.

Perturbations of Z
In this section, we prove some results on the effects that small perturbations to a zero-set Z have on the spanning sets. We start with some notation.
Fix a zero-set Z and an integer k ≥ 1. We define the following two Young diagrams, Z^{↓k} and Z^{←k}, obtained by deleting the k largest (bottom) rows, respectively the k largest (leftmost) columns, of Z. Then we let
Z_k = (Z \ Z^{↓k}) ∪ (Z \ Z^{←k}),
which is the set comprised of the k longest rows and columns of Z. Suppose A ⊆ Z^2_+, and let
A_{>k} = {x ∈ A : |A ∩ L_h(x)| ≥ k + 1 or |A ∩ L_v(x)| ≥ k + 1}
denote the set of points in A that lie in either a row or a column with at least k other points of A. For example, A_{>1} is the set of non-isolated points in A. The next two lemmas let us identify low-entropy spanning sets for perturbations of Z.
Lemma 2.5. If A spans for Z, then A >k spans for Z k .
Proof. The vertices removed from A to form A_{>k} are on both horizontal and vertical lines with at most k vertices of A. Therefore, if T and T_k are the respective growth transformations corresponding to Z and Z_k, the T_k-dynamics started from A_{>k} eventually occupies every site that the T-dynamics started from A does, and the claim follows.

Lemma 2.6. Let A ⊆ Z^2_+ and let k be a nonnegative integer. Then the sizes of the projections of A_{>k} can be bounded in terms of |A_{>k}| and k.

Proof. Each point in A_{>k} shares a line with at least k other points in A_{>k}, and we use this fact to subdivide A_{>k} into three disjoint sets. Let A_h be the set of points of A_{>k} that share a row with at least k other points of A_{>k}. Thus every point of A_h shares a row with at least k other points of A_{>k}, and therefore with at least k other points of A_h. Moreover, let A_0 be the set of points that are not in A_h but share a column with at least one point in A_h, and let A_v consist of the remaining points of A_{>k}. Every point x ∈ A_v shares a column with at least k other points of A_v. Indeed, x shares a column with at least k other points of A_{>k}, but none of the points in this column can be in A_h (as otherwise x would be in A_0) or in A_0 (as every point that shares a column with a point in A_0 is itself in A_0).

Each nonempty row of A_h contains at least k + 1 points of A_h, and each nonempty column of A_v contains at least k + 1 points of A_v.

This completes the proof.
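The set A_{>k}, as we read the definition above (keep the points of A with at least k other points of A on their row or on their column), is immediate to compute:

```python
from collections import Counter

def A_gt(A, k):
    """Points of A lying on a horizontal or vertical line containing
    at least k other points of A (so at least k + 1 points in total)."""
    row_counts = Counter(y for (_, y) in A)  # points of A per horizontal line
    col_counts = Counter(x for (x, _) in A)  # points of A per vertical line
    return {(x, y) for (x, y) in A
            if row_counts[y] >= k + 1 or col_counts[x] >= k + 1}

# A_{>1} consists of the non-isolated points: (5, 5) shares no line with A.
A = {(0, 0), (1, 0), (5, 5)}
```

Here A_{>1} = {(0, 0), (1, 0)}, matching the "non-isolated points" description.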
Next, we give a perturbation result that addresses removal of the shortest lines from Z. In particular, we conclude that this operation cannot decrease γ by more than the number of removed sites. To put the result in perspective, we note that it is not true that γ decreases by at most k if we remove any k sites. For the simplest counterexample, observe that γ(R 2,2 ) = 4 (use Proposition 2.9 below or note that, with 3 initially occupied points, no point is added after time 1) but γ(R 2,2 \ {(1, 1)}) = 2 (as any pair of non-collinear points spans).
Theorem 2.7. Let Z be any zero-set and a, b ∈ N ∪ {∞}. Suppose A′ spans for Z ∩ R_{a,b}. Then there exists A ⊇ A′ which spans for Z and is such that
|A| ≤ |A′| + |Z \ R_{a,b}|.
Furthermore, if A′ is thin, then A can be made thin as well. Therefore, for any Z and a, b, γ(Z) ≤ γ(Z ∩ R_{a,b}) + |Z \ R_{a,b}|, and the same holds for γ_thin.

Proof. We may assume that a = ∞ and that Z \ R_{∞,b} consists of a single row, the topmost (shortest) row of Z, of cardinality k; we then iterate to obtain the general result. Let A′ be a spanning set for the dynamics T′ with zero-set Z′ = Z ∩ R_{∞,b}. We will construct a set A ⊇ A′ of cardinality |A′| + k that spans for Z.
Order the sites of Z^2_+ in an arbitrary fashion. Slow down the T′-dynamics by occupying a single site at each time step, the first site in the order that can be occupied, with one exception: when a vertical line contains enough sites to become completely occupied under the standard synchronous rule, make it completely occupied at the next time step.
Mark vertices that are made occupied one at a time according to the ordering on Z^2_+ in red, and vertices that are made occupied by completing a vertical line in black. Let L_1, ..., L_k be the first k vertical lines in the slowed-down T′-dynamics that become occupied; say that L_k becomes occupied at time t. Choose k black sites, one on each of the k lines, and adjoin them to A′ to form the set A (if A′ is thin, choose these black points so that no two share a row with each other or with any point of A′; then A is also thin). Define the slowed-down version of T started from A so that it only tries to occupy the site, or sites, occupied by the T′-dynamics. We claim that, up to time t, this dynamics occupies every site that the T′-dynamics does from A′. Indeed, the only possible problem arises when a line in the T′-dynamics from A′ contains b occupied sites and fills in the next step, and then the T-dynamics from A does the same by construction. After time t, k vertical lines are occupied, and thus the horizontal count of any site is at least k, so the two dynamics agree.

The enhanced neighborhood growth
We will need another useful generalization of the neighborhood growth, which will play a key role in the proof of Theorem 1.4. In this section we only give its definition, as it will be encountered in the proof of Theorem 2.8. We postpone a more detailed study until Section 6.1.
The enhancements f = (f_0, f_1, ...) ∈ Z_+^∞ and g = (g_0, g_1, ...) ∈ Z_+^∞ are sequences of nonnegative integers. These increase horizontal and vertical counts, respectively, by fixed amounts. The enhanced neighborhood growth is then given by the triple (Z, f, g), which determines the transformation T as follows: for x = (x_1, x_2) ∉ A, x ∈ T(A) if and only if (row(x, A) + f_{x_2}, col(x, A) + g_{x_1}) ∉ Z. The usual neighborhood growth given by Z is the same as its enhancement given by (Z, 0, 0), and we will not distinguish between the two.
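One way to read the enhanced rule (our reconstruction: the horizontal count of x = (x_1, x_2) is increased by f_{x_2} and the vertical count by g_{x_1}) is the following sketch; the example values are illustrative.

```python
def grow_enhanced(initial, Z, f, g, N, M):
    """Enhanced neighborhood growth (Z, f, g) inside R_{N,M}: x = (x1, x2)
    joins when (row(x, A) + f[x2], col(x, A) + g[x1]) falls outside Z."""
    occ = set(initial)
    while True:
        row = [0] * M
        col = [0] * N
        for (x, y) in occ:
            row[y] += 1
            col[x] += 1
        new = {(x, y) for x in range(N) for y in range(M)
               if (x, y) not in occ
               and (row[y] + f[y], col[x] + g[x]) not in Z}
        if not new:
            return occ
        occ |= new

# With Z = T_2 and a horizontal enhancement of 1 on the x-axis (f = (1, 0, 0, ...)),
# a single occupied point already spans, while the plain (Z, 0, 0) dynamics stalls.
T2 = {(0, 0), (1, 0), (0, 1)}
```

The enhancement acts like a phantom occupied point on the x-axis, so a single real point supplies the missing vertical count there and growth nucleates.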

Completion time
Started from any finite set, the neighborhood growth clearly reaches its final state in a finite number of steps. We will now show that in fact this is true for any initial set, and that the number of steps depends only on Z.
Theorem 2.8. For any finite zero-set Z, there exists T_max = T_max(Z) < ∞ so that, for every initial set A ⊆ Z^2_+, the dynamics terminates by time T_max, that is, T^{T_max}(A) = T^∞(A).

Proof. We will prove the theorem for the more general enhanced neighborhood growth dynamics given by (Z, h, 0), for some horizontal enhancement h = (h_0, h_1, ...) ∈ Z_+^∞, also proving that T_max does not depend on h.
We prove this by induction on the number of lines in Z. If Z = ∅, then clearly the dynamics is done in a single step. Now take an arbitrary Z whose longest row contains a sites, and fix an h. First suppose the initial set A has a row count of at least a on some horizontal line (the x-axis, say). (We emphasize that all counts include the numbers from the enhancement sequence.) Then in one step, all points on the x-axis become occupied. If we let A′ be the set formed by running the dynamics for one step, and let A″ = A′ \ {(x, 0) : x ∈ Z_+}, then the dynamics given by (Z, h, 0) started from A′ coincides with the dynamics given by (Z^{↓1}, (0, h_1, h_2, ...), 0) started from A″ (except on the x-axis, which no longer has any effect on the running time). By the induction hypothesis, in this case the original dynamics started from A therefore terminates in at most T_max(Z^{↓1}) + 1 steps.
Fix an integer k < a, and assume now that the initial set A has a row count of k on some horizontal line, and every horizontal line has a row count of at most k. Let t 0 be the first time at which there is a horizontal line with (at least) k + 1 occupied sites.
Let L be any horizontal line with k occupied sites at time 0. Assume without loss of generality that L is the x-axis and that [0, k − 1 − h 0 ] × {0} are the sites occupied on L at time 0. No site above [k − h 0 , ∞) × {0} becomes occupied before time t 0 ; if it did, the site below it on the x-axis would become occupied at the same time. Thus the dynamics above [0, k − 1 − h 0 ] × {0} behaves like the dynamics with zero-set Z ↓1 , and a different horizontal enhancement sequence f , which takes into account the contributions of occupied sites outside of [0, k − 1 − h 0 ] × [1, ∞) to the row counts. By the induction hypothesis, these dynamics terminate by some time dependent only on Z ↓1 . Therefore, either t 0 ≤ T max (Z ↓1 ) + 1 or t 0 = ∞. In the latter case, the original (Z, h, 0)-dynamics terminate by time T max (Z ↓1 ), so we can assume t 0 ≤ T max (Z ↓1 ) + 1.
Assume that a = a_0 ≥ a_1 ≥ · · · ≥ a_k > 0 are the rows of Z. The arguments above imply that T_max(Z) ≤ (a + 1)(T_max(Z^{↓1}) + 1), which is finite by induction and ends the proof.

The line growth bound
The first result on the smallest spanning sets on the Hamming plane was the following simple formula about line growth from [BBLN].

Proposition 2.9. For any a, b ∈ N, γ(R_{a,b}) = ab.

Proof. See Section 1 of [BBLN] for a simple inductive proof, or Theorem 5.1.

Corollary 2.10. For any zero-set Z, γ(Z) ≥ max{ab : R_{a,b} ⊆ Z}.

Proof. This follows from Proposition 2.9, and the fact that Z′ ⊆ Z implies γ(Z′) ≤ γ(Z).
We call the bound in Corollary 2.10 the line growth bound. It is somewhat surprising that the inequality is, in fact, in many cases an equality. For example, it is an equality for bootstrap percolation with arbitrary θ (which follows from Proposition 5.6) and when Z is a union of two rectangles (a special case of a more general result from [CGP]). On the other hand, it easily follows from Theorem 1.1 that the line growth bound can be, in general, very far from equality when Z is large. In this section we give a general lower bound on γ that tends to work better for small Z; in particular, it proves that equality does not hold in general when Z is a symmetric zero-set that is a union of three rectangles.
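If Z is given by its nonincreasing row lengths λ_0 ≥ λ_1 ≥ ... (bottom row first, in the French notation), then R_{a,b} ⊆ Z exactly when λ_{b−1} ≥ a, so the line growth bound is max_b b·λ_{b−1}. A small sketch of this computation (ours):

```python
def line_growth_bound(rows):
    """Lower bound max{ab : R_{a,b} contained in Z} for a Young diagram Z
    given by its nonincreasing row lengths (bottom row first).
    The widest rectangle with b rows has width rows[b-1]."""
    return max((b + 1) * length for b, length in enumerate(rows))

# Examples: R_{2,2} has rows (2, 2); the bootstrap triangle T_3 has rows (3, 2, 1);
# the rectangle R_{9,4} has rows (9, 9, 9, 9).
```

For R_{2,2} the bound is 4 = γ(R_{2,2}), and for R_{9,4} it is 36 = 9·4, consistent with Proposition 2.9.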
Theorem 2.11. For any choice of a comparison rectangle R_{a,b} ⊆ Z and a Young diagram Y,
γ(Z) ≥ (1/2) min{kb + ℓ(a − k) : (k, ℓ) ∈ ∂^o Y}.

Proof. Order the lines of Z^2_+ in an arbitrary fashion. Assume A is a finite spanning set for Z. We will construct a finite sequence S of lines (dependent on A), by a recursive specification of sequences S_i of i lines.
Consider the line growth T′ with zero-set R_{a,b}. Note that A spans for the growth dynamics T′; we now consider a slowed-down version of it. Let A_0 = A and let S_0 be the empty sequence. Given the sequence S_i, i ≥ 0, let A_i be the union of A and all lines in S_i. Assume S_i consists of k vertical and ℓ horizontal lines, with k + ℓ = i.
If (k, ℓ) ∈ Y, examine the lines of Z^2_+ in order until a line L is found on which T′(A_i) adds a point and thus immediately makes it fully occupied (since T′ is a line growth). Adjoin L to the end of the sequence S_i to obtain S_{i+1}. If L is horizontal (resp. vertical), define its mass to be a − k > 0 (resp. b − ℓ > 0). The mass of L is a lower bound on the number of points in A ∩ L that are not on any of the preceding lines in the sequence.
If (k, ℓ) ∉ Y, the sequence stops, that is, S = S_i. As we add only one line to the sequence each time, the final counts k and ℓ of vertical and horizontal lines satisfy (k, ℓ) ∈ ∂^o Y. Let m_h and m_v be the respective final masses of the horizontal and vertical lines.
The key step in this proof is the observation that the total mass m_h + m_v only depends on k and ℓ, and not on the positions of the vertical and horizontal lines in the sequence. Indeed, if L is followed by L′ in S, the two lines are of different types, and a new sequence is formed by swapping L and L′, then the mass of L increases by 1, while the mass of L′ decreases by 1. Thus the total mass can be obtained by starting with all vertical lines:
(2.2) m_h + m_v = kb + ℓ(a − k).
For a possible sequence S of lines, let γ_S be the minimal size of a set that spans (for Z) and generates the sequence S. Then the two inequalities of (2.3) hold simultaneously. Now we add the two inequalities of (2.3) and use (2.2) to get the bound of the theorem. Finally, we observe that γ(Z) = min{γ_S : S a possible sequence} to end the proof.
Proof. We use the comparison square R_{a+b,a+b} and the triangular Young diagram Y = T_i, for an i to be chosen. We are free to choose i; if a ≤ b, then the optimal choice is i = 2a, and otherwise it is i = a + b, which gives the desired inequality.
Smallest spanning sets

Proof of Theorem 1.1
The steps in the proof of Theorem 1.1 are given in the next three lemmas. The first one demonstrates that when the initial set A 0 is itself a Young diagram, the growth dynamics are very simple.
To prove the lower bound in Theorem 1.1, we consider the case where the initial set is a union of two translated Young diagrams. To be more precise, we say that A_0 is a two-Y set if
A_0 = (y_1 + Y_1) ∪ (y_2 + Y_2),
where Y_1 and Y_2 are Young diagrams, y_1, y_2 ∈ Z^2_+, and no line intersects both y_1 + Y_1 and y_2 + Y_2.
Lemma 3.2. If a two-Y set A_0 spans, then |A_0| ≥ ½|Z|.

Proof. Our proof is by induction on the number of horizontal lines that intersect Z. If this number is 0, the claim is trivial. Otherwise, let a_0 > 0 be the number of sites in the longest (i.e., bottom) row of Z. Observe that an initial set consisting of a_0 − 1 fully occupied vertical lines is inert.
Further, let h_0 and k_0 be the respective numbers of sites on the bottom rows of Y_1 and Y_2. Then h_0 + k_0 ≥ a_0, as otherwise A_0 would be covered by a_0 − 1 vertical lines. Therefore either h_0 ≥ ½a_0 or k_0 ≥ ½a_0; without loss of generality we assume the latter.
Let Z′ = Z_{↓1} be the zero-set obtained by removing the bottom row of Z. By making the horizontal line that contains the k_0 sites of y_2 + Y_2 occupied in the original configuration A_0, we see that A_0 spans for the dynamics with zero-set Z′. By the induction hypothesis (applied after removing these k_0 sites), |A_0| − k_0 ≥ ½|Z′|, and therefore |A_0| ≥ ½|Z′| + ½a_0 = ½|Z|.

Lemma 3.3. Assume A_0 spans. Then there exists a two-Y set A_0′ that spans and has |A_0′| = 2|A_0|.

Remark 3.4.
A similar proof to the one below also shows that there exists a thin set A_0′ that spans and has |A_0′| = 2|A_0|.
Permute the columns of A_0 so that the column counts are in nonincreasing order, then permute the rows so that the row counts are in nonincreasing order; in the sequel we refer to this set as A_0, as it clearly spans if and only if the original set spans. Fix a vertical line L intersecting R and containing k > 0 sites of A_0. Create a contiguous interval of k occupied sites on L just above L ∩ R (in particular, outside R). Perform this operation for all vertical lines, and note that the resulting set of new sites forms a Young diagram. Then perform the analogous operation for the horizontal lines, adding sites just to the right of R. Finally, erase all the sites inside R to define A_0′. Clearly, |A_0′| = 2|A_0|, and A_0′ is a two-Y set. To see that A_0′ spans, it is enough to show that it eventually occupies every point in R′ \ A_0′ ⊇ R′ \ R, where R′ is a larger rectangle containing both R and A_0′.
Assume, in this paragraph, that the initial set is A_0 ⊆ R′. We claim that if a point x ∉ R′ gets occupied at any time t, then any line through x that intersects R′ is fully occupied. This is proved by induction on t. The claim is trivially true at t = 0; assume it holds at time t − 1, and suppose x ∉ R′ becomes occupied at time t with L_h(x) ∩ R′ ≠ ∅. Then, by the induction hypothesis, any y ∈ L_h(x) has vertical and horizontal counts at time t at least as large as those of x, and thus also becomes occupied. An analogous statement holds if L_v(x) ∩ R′ ≠ ∅. This proves the claim, which implies that no site outside R′ ever helps in occupying a site in R′.
Due to the argument in the previous paragraph, we may restrict the dynamics, from both A_0 and A_0′, to occupy sites only within the rectangle R′.
We now claim, and will again show by induction on time t ≥ 0, that every site in R′ \ A_0 occupied at time t starting from A_0 is also occupied starting from A_0′. This claim is trivially true at t = 0. For the induction step, fix a horizontal line L; by the induction hypothesis, the number of occupied sites on L at time t − 1 for the dynamics started from A_0′ is at least as large as for the dynamics started from A_0. By an analogous argument, the same inequality holds if L is a vertical line. Consequently, any site z ∈ R′ \ A_0 that the dynamics from A_0 occupies at time t has, under the dynamics from A_0′, at least as large horizontal and vertical counts, which implies z ∈ T^t(A_0′). This establishes the induction step and ends the proof.
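The construction in the proof of Lemma 3.3 can be sketched as follows (assuming the original set sits in R_{a,b}; sorting the counts plays the role of the row and column permutations, and the name two_Y is ours):

```python
def two_Y(A, a, b):
    """Replace A (a subset of R_{a,b}) by two translated Young diagrams:
    the sorted column counts, stacked just above R_{a,b}, and the sorted
    row counts, placed just to its right; the original sites are erased."""
    col = sorted((sum((i, j) in A for j in range(b)) for i in range(a)),
                 reverse=True)
    row = sorted((sum((i, j) in A for i in range(a)) for j in range(b)),
                 reverse=True)
    above = {(i, b + t) for i, k in enumerate(col) for t in range(k)}
    right = {(a + t, j) for j, k in enumerate(row) for t in range(k)}
    return above | right

A = {(0, 0), (2, 0), (1, 1)}
assert len(two_Y(A, 3, 2)) == 2 * len(A)  # |A_0'| = 2|A_0|
```

The returned set is a disjoint union of one Young diagram above the rectangle (recording the column counts) and one to its right (recording the row counts), of total size exactly 2|A_0|.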
Proof of Theorem 1.1. The upper bound is an obvious consequence of Lemma 3.1, while the lower bound follows from Lemmas 3.2 and 3.3.
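For very small diagrams, the bounds of Theorem 1.1 can be verified by brute force. Below is a self-contained sketch; it searches for a smallest spanning set inside the bounding box of the diagram, which we take to be sufficient on the strength of Theorem 1.2, and all names are ours.

```python
from itertools import combinations

def grow(A, Y, a, b):
    """One step of the growth transformation T inside R_{a,b}; the
    zero-set is the Young diagram with row lengths Y."""
    row = [sum((i, j) in A for i in range(a)) for j in range(b)]
    col = [sum((i, j) in A for j in range(b)) for i in range(a)]
    return {(i, j) for i in range(a) for j in range(b)
            if (i, j) in A or col[i] >= len(Y) or row[j] >= Y[col[i]]}

def spans(A, Y, a, b):
    """Iterate T to its fixed point; spanning means R_{a,b} fills up."""
    A = set(A)
    while True:
        B = grow(A, Y, a, b)
        if B == A:
            return len(A) == a * b
        A = B

def gamma(Y):
    """Smallest spanning set, searched inside the bounding box of Y;
    feasible only for tiny diagrams."""
    a, b = Y[0], len(Y)
    cells = [(i, j) for i in range(a) for j in range(b)]
    return next(n for n in range(a * b + 1)
                for A in combinations(cells, n) if spans(set(A), Y, a, b))

# gamma is squeezed between |Z|/2 and |Z| (Theorem 1.1); here |Z| = 3:
assert gamma([2, 1]) == 2
```

For the 2 x 2 square diagram the search returns 4, matching the upper bound |Z|, while the staircase [2, 1] attains a value strictly between the two bounds.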

Proof of Theorem 1.2
Theorem 1.2 is an immediate consequence of the following result.
Theorem 3.5. Assume that A is a finite set that spans. Then there exists a set B ⊆ R_{a,b} that spans and has |B| ≤ |A|.
Proof of Theorem 3.5. Assume that A ⊆ R M,N is a finite set that spans and M > a, N ≥ b. We claim that there is a set B ⊆ R M −1,N that also spans and |B| ≤ |A|. Without loss of generality, we will restrict our dynamics to the rectangle R M,N throughout the proof.
We may assume that all row occupancy counts are at most a and all column occupancy counts are at most b. Let k be the smallest of the column counts. We prove our claim by induction on k. If k = 0, the claim is trivial.
We now prove the induction step. Assume k > 0 and that the rightmost column of R_{M,N} contains exactly k occupied points, that is, |A ∩ L_v(M − 1, 0)| = k. We define the time T to be the first time in the dynamics at which a point, say (M − 1, j_0), on the last column becomes occupied while some point (i_0, j_0), with i_0 < M − 1, on the same row remains unoccupied. First consider the case T = ∞. Then, every time a point x in the column L_v(M − 1, 0) becomes occupied, the entire row L_h(x) ∩ R_{M−1,N} is also occupied. Therefore, apart from the initially occupied points in L_v(M − 1, 0), this column plays no role in the dynamics within R_{M−1,N}. Thus each initially occupied point z ∈ L_v(M − 1, 0) can be moved to an initially unoccupied location on the same row L_h(z) ∩ R_{M−1,N}. Such unoccupied locations exist since we assumed M > a and all row occupancy counts are at most a. Furthermore, the resulting initial configuration eventually fills the box R_{M−1,N}, and therefore spans. Now consider the case T < ∞, and consider the configuration X = T^{T−1}(A). Let J be the collection of row indices j for which the jth row is fully occupied in X (that is, |L_h(0, j) ∩ X| = M) and (M − 1, j) ∉ A. We will now build a new initially occupied set A_1 (see Figure 3.2 for guidance on this construction). First, consider the points in the i_0th column that are occupied in A but not on any of the rows with indices in J. Populate the last column (M − 1) of A_1 with these points, keeping their rows the same. Next, consider the points in the last column of A, and populate the i_0th column of A_1 with these points, again keeping their rows the same, in addition to the points in the i_0th column of A that lie on the rows indexed by J ({(i_0, j) ∈ A : j ∈ J}). Finally, let A_1 agree with A outside of the columns i_0 and M − 1.
Note that A_1 has strictly fewer than k occupied points in the last column M − 1. This is because, in the configuration X, the column i_0 has strictly fewer occupied points than the last column. This also implies that T ≥ 2 and J ≠ ∅, since the column i_0 started with at least as many occupied points in A as the last column. The induction step will be completed provided we show that A_1 spans. Through time T − 1, every point in the smaller box R_{M−1,N} that becomes occupied by the dynamics from the initial set A also becomes occupied by the dynamics from the initial set A_1. This is because, first, the row occupancy counts are the same in A_1 and A, while the column occupancy counts in R_{M−1,N} are larger for A_1 than for A; and second, by the definition of T, the points that become occupied in the last column M − 1 do not affect either dynamics (from A or from A_1) within R_{M−1,N} before time T. It follows that A_1 spans.

4 Large deviations

Throughout this section, α ≥ 0 and β ≥ 0 are fixed parameters. We also fix a finite zero-set Z.
We remark that the large deviation setting makes sense for an arbitrary growth transformation, not just for neighborhood growth. However, the key step in the proof of existence, Theorem 2.8, is not available for the more general dynamics.
We recall the setting and notation before the statement of Theorem 1.3. We will establish parts of this theorem in this and the next section.
Theorem 4.1. The large deviation rate I(α, β) = I(α, β, Z) exists and satisfies the variational characterization (1.1).

First we will prove the following lemma on large deviations for containment of specific patterns, which follows the methods for containment of small subgraphs in Erdős–Rényi random graphs, as presented in [JLR]. Throughout the rest of the paper, ω_0 will denote the initial configuration obtained by occupying every point in R_{N,M} independently with probability p.

Lemma 4.2. For any fixed finite pattern A, P_p(ω_0 contains a copy of A) = p^{ρ(α,β,A)+o(1)} as p → 0.
Proof. For any subpattern B ⊆ A, the probability that ω_0 contains B is at most C_B p^{ρ(α,β,B)}, where C_B is a constant that accounts for the number of ways to reorder the rows and columns of B. This gives the lower bound. For every subset X ⊆ Z²₊ that is equivalent to A (in the sense of a pattern), let I_X be the indicator of the event that X ⊆ ω_0, and write X ∼ A for the equivalence of X and A. Below, X and Y denote subsets of Z²₊. Define λ = Σ_{X∼A} E I_X. Theorem 2.18 of [JLR] bounds the probability that no equivalent copy of A is contained in ω_0 in terms of λ and the pairwise correlations, and we observe that the correlation term is at most Cλ² p^{−ρ(α,β,A)+o(1)}.
This gives the upper bound.

Proof of Theorem 4.1. Lemma 4.2 directly implies the lower bound on the probability of Span. Assume now that Span happens. Let T′ = T^{T_max}, where T_max is defined in Theorem 2.8. By Theorem 2.2, T′ is a pattern-inclusion transformation given by a set of patterns P. Let A_0 be the set of patterns in P that contain no site in the neighborhood of the origin 0. Observe that every set in A_0 spans, that is, A_0 ⊆ A. Note also that A_0 ≠ ∅, which simply follows from the fact that there exists a finite set that spans.
Let G be the event that there exists an x ∈ R_{N,M} whose entire neighborhood is unoccupied. Assume without loss of generality that M ≤ N, which implies β ≤ α. Assume first that α < 1. Then P_p(G^c) is negligible for small enough p. Together, (4.7) and (4.8) imply

(4.9) P_p(Span) ≤ P_p(ω_0 contains a member of A_0) + P_p(G^c).

We now consider the case α ≥ 1. For a k ≥ 1, let A_k be the pattern consisting of k rows, each a contiguous interval of k occupied sites. Clearly, if k is large enough, A_k spans (in two time steps). Add A_k to A_0. Then, for any fixed k and ε > 0, Lemma 4.2 and (4.11) give (4.12). Thus, when α ≥ 1, (4.12) trivially implies (4.10). The inequality (4.10) is therefore always valid and, together with (4.6), gives the desired equalities.

General bounds on the large deviations rate
Having established the existence of I(α, β, Z), we now give three general bounds. These will be used to establish continuity of I(α, β, Z) in Section 6.5, and are the key components for the proof of Theorem 1.5 in Section 7.1. Assume throughout this section that (α, β) ∈ [0, 1] 2 .
Proposition 4.3. For any zero-set Z and nonnegative integer k, the rate I(α, β, Z) admits a lower bound in terms of γ(Z_k).

Proof. Let A be a spanning set for Z. By Lemma 2.6, the line counts of A_{>k} are controlled, and by Lemma 2.5, A_{>k} spans for Z_k; thus |A_{>k}| ≥ γ(Z_k). Moreover, A_{>k} is a subset of A, so I(α, β, Z) ≥ ρ(α, β, A) ≥ ρ(α, β, A_{>k}), and the desired inequality follows.

Proposition 4.4. For any Young diagram Z, the upper bound (4.14) holds.

Proof. For a set A ⊆ Z²₊ of occupied points, let A_r ⊆ Z²₊ be a set in which each row contains the same number of occupied sites as the corresponding row of A, but each column contains at most one occupied site. Define A_c analogously. For a Young diagram Z, both Z_r and Z_c span: the longest row of Z_r immediately occupies its entire horizontal line, then the next longest does the same, and so on. Moreover, for any subset B ⊆ Z_r, |B| = |π_x(B)|, and the desired inequality (4.14) follows.
Proposition 4.5. For any discrete zero-set Z, the upper bound (4.15) holds.

Proof. Suppose the set A spans for Z, has size |A| = γ(Z), and satisfies A ⊆ R_{a,b} for some a, b. Recall the definitions of A_r and A_c from the previous proof. The key step in proving the upper bound (4.15) is to show that the set A_s, formed by a disjoint union of suitable translates of A_r and A_c, spans for Z as well. The proof of this is similar to the proof of Lemma 3.3, so we only provide a brief sketch. Restrict the dynamics to the larger rectangle R_{2a,2b}. Then prove by induction that, for every site x ∈ R_{2a,2b} \ A and every t > 0, the numbers of occupied sites in T^t(A_s), in both the row and the column containing x, are at least as large as the numbers of occupied sites in the same row and column of T^t(A). Therefore, for some t > 0, (a, b) + R_{a,b} is contained in T^t(A_s). As a fully occupied copy of R_{a,b} spans, so does A_s.
Since A s spans, an upper bound on ρ(α, β, A s ) will also provide an upper bound on I(α, β, Z).

Support
In this section, we conclude the proof of our main large deviations theorem; the most substantial remaining step is an argument for the support formula (1.2) for a general zero-set Z.
Proof of Theorem 1.3. The existence of I and its variational characterization (1.1) follow from Theorem 4.1. For every A, ρ(·, ·, A) is continuous and piecewise linear, so by (1.1) the same is true of I(·, ·, Z). Monotonicity in α and in β follows from the definition. Moreover, by (1.1), I is the minimum of linear functions, thus concave.
It remains to prove the claims about the support of I. By continuity of I(·, ·, Z), we may restrict attention to suitable (α, β). The event Span implies that for some (u, v) ∈ ∂_o Z there exists a vertex x such that row(x, ω_0) ≥ u and col(x, ω_0) ≥ v, and the probability of this event (for a given (u, v)) is bounded above by the minimum of the expected number of rows with u initially occupied vertices and the expected number of columns with v initially occupied vertices. Therefore P_p(Span) is at most a positive power of p, so I(α, β, Z) > 0 and (α, β) ∈ supp I(·, ·, Z).
Conversely, fix (u_0, v_0) ∈ ∂_o Z and a sufficiently large integer K. Let E denote the event that there are at least K rows with at least u_0 initially occupied vertices, and let F denote the event that there are at least K columns with at least v_0 initially occupied vertices. Observe that E ∩ F ⊆ Span. We will show that P_p(E) ∧ P_p(F) → 1, so P_p(Span) → 1 and I(α, β, Z) = 0.
We will show that P_p(E) → 1; the argument for F is similar. If α ≥ 1, then the probability that a fixed row has at least u_0 initially occupied vertices is at least p^{o(1)}, so the expected number of rows with at least u_0 initially occupied vertices is at least p^{−β+o(1)} → ∞. If α < 1 and u_0(1 − α) − β < 0, then the expected number of rows with at least u_0 initially occupied vertices is at least p^{u_0(1−α)−β+o(1)} → ∞. In either case, since rows are independent, this implies P_p(E) → 1.
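The first-moment computation above can be illustrated numerically (a sketch; expected_rows is our name, and we take N = p^{−α} columns and M = p^{−β} rows, as in the setting above):

```python
from math import comb

def expected_rows(p, alpha, beta, u0):
    """Expected number of rows of R_{N,M} with at least u0 occupied
    sites, where N = p^(-alpha), M = p^(-beta) and sites are occupied
    independently with probability p."""
    N, M = round(p ** -alpha), round(p ** -beta)
    tail = sum(comb(N, j) * p ** j * (1 - p) ** (N - j)
               for j in range(u0, N + 1))  # P(Bin(N, p) >= u0)
    return M * tail

# When u0 (1 - alpha) - beta < 0 the expectation grows as p -> 0 ...
assert expected_rows(1e-4, 0.5, 0.8, 1) > expected_rows(1e-2, 0.5, 0.8, 1)
# ... while when u0 (1 - alpha) - beta > 0 it shrinks instead.
assert expected_rows(1e-4, 0.5, 0.8, 2) < expected_rows(1e-2, 0.5, 0.8, 2)
```

The sign of u_0(1 − α) − β thus decides whether the expected number of good rows diverges or vanishes, matching the dichotomy in the proof.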

Large deviations for line growth
In the next theorem, we explicitly give the large deviation rate for line growth with Z = R_{a,b}, where a, b ≥ 0. When α = β and a = b, the rate was obtained in [BBLN] by a different method. For α, β ∈ [0, 1), we let ∆a = ∆a(α, β) and ∆b = ∆b(α, β).

Theorem 5.1. Fix α, β ∈ [0, 1). If either b ≤ ∆b or a ≤ ∆a, then I(α, β, R_{a,b}) = 0. Assume b > ∆b and a > ∆a for the rest of this statement. If β ≤ α and (5.2) holds, then the rate is given by (5.3); if β ≤ α and (5.2) does not hold, it is given by (5.4). If β ≥ α, the rate is determined by the symmetry I(α, β, R_{a,b}) = I(β, α, R_{b,a}).
Theorem 5.1 implies the asymptotic result below. As we will see in Section 7.1, (5.5) implies that line growth achieves the lower bound (1.6) and is thus, in this sense, the most efficient neighborhood growth dynamics.
Corollary 5.2. If α, β ∈ [0, 1] are fixed and min{a, b} → ∞, then (5.5) holds.

Proof of Corollary 5.2. This follows from (5.3) and (5.4), which show that the difference between the two sides of (5.5) is an affine function of a and b.
We shorten I(a, b) = I(α, β, R a,b ) for the rest of this section. We begin the proof of Theorem 5.1 with a recursive formula for I(a, b).
Proof. Let H_a be the event that there is a row with at least a initially occupied points, and V_b the event that there is a column with at least b initially occupied points. Also, let Span_{x,y} be the event that ω_0 spans for Z = R_{x,y}. Then Span_{a,b} ⊆ (H_a ∘ Span_{a,b−1}) ∪ (V_b ∘ Span_{a−1,b}), where ∘ denotes disjoint occurrence. By the BK inequality and Markov's inequality, this implies the lower bound on I(a, b). For the upper bound, observe that the density-p initial set ω_0 stochastically dominates the union of two independent initial sets ω_0^1 and ω_0^2, each with density p/2. Also, note that the probability that a fixed column is empty (and so does not participate in the event Span_{a−1,b}) in the initial configuration ω_0^2 is at least 1 − Mp/2 ≥ 1/2 for small p (and likewise for rows). Furthermore, for small enough p, halving the density changes the probabilities of V_b and H_a only by a constant factor. Therefore, the desired bound holds for small enough p.

This gives the upper bound on I(a, b).
Let h_0 and v_0 be defined as follows: h_0 is the smallest number of fully occupied rows that makes the probability that a fixed column becomes fully occupied at least p^{o(1)} (as p → 0), and v_0 is the analogous quantity with the roles of rows and columns interchanged.
We now define a set S of finite sequences S = (S_1, S_2, . . . , S_K). By convention, S consists only of the empty sequence when either h_0 = 0 or v_0 = 0. Otherwise, S consists of sequences S of length K ≤ h_0 + v_0 − 1, with each coordinate S_i ∈ {H, V}, and with a property stated in terms of the quantities h_i = h_i(S) and v_i = v_i(S), the respective numbers of H's and V's among the first i coordinates of S. Every sequence represents a way to build a spanning configuration for the line growth with Z = R_{a,b}. We define the weight of S ∈ S by (5.6).

Lemma 5.4. For all a, b ≥ 0, I(a, b) is the minimal weight over S ∈ S.

Proof. It is clear that the statement holds if either a = 0 or b = 0, when S consists only of the empty sequence and I(a, b) = 0. It is also straightforward to check by induction that the right-hand side satisfies the same recursion as the one for I(a, b) given in Lemma 5.3.
Next, we look at the effect of a single transposition of H and V on the weight of S. Fix an i ≤ K − 2 so that S_i = H and S_{i+1} = V, and denote S_{HV} = S. Let S_{VH} be the sequence obtained from S by transposing H and V at positions i and i + 1. Note that S_{VH} ∈ S by the restriction on i. The following lemma is a simple observation. It is an immediate consequence of Lemma 5.5 that we only need to look for minimizers among sequences of a special form. It is also clear from (5.6) that the weight is in each case a linear function of v or h, and thus the minimum is achieved at an endpoint. This already gives the formula for I as a minimum of 8 expressions, which we simplify in the proof below.
Proof of Theorem 5.1. We will assume h_0 ≥ 1 and v_0 ≥ 1. We will also assume that α ≥ β, as otherwise we obtain the result by exchanging α with β and a with b. By Lemma 5.5, the minimizing sequence in Lemma 5.4 must then have one of two forms; v = 0 is the optimal choice when (5.2) does not hold. This, after some algebra, gives (5.3) and (5.4).

Large deviations for bootstrap percolation
As a second special case, we compute the large deviation rate for bootstrap percolation when α = β.
A consequence of Proposition 5.6 is that bootstrap percolation also achieves the lower bound (1.6), at least along the diagonal α = β.

Euclidean limit of neighborhood growth
The main aim of this section is the proof of Theorem 1.4, which we complete in Section 6.5. As remarked in the Introduction, we need substantial information on the design of optimal spanning sets for I(α, β, Z) when Z is large. This is given in Section 6.1, where we show that for large Z, I(α, β, Z) is well approximated by another extremal quantity that has a much more transparent continuum limit. This limiting quantity is defined in Section 6.2, and the convergence is proved in Section 6.3. An analogous treatment for γ thin is sketched in Section 6.4. The proof of Theorem 1.4 is concluded in Section 6.5.

The enhancement rate
Recall, from Section 2.3, the enhanced neighborhood growth given by a zero-set Z and the enhancements f = (f_0, f_1, . . .) and g = (g_0, g_1, . . .). From now on, we assume that f and g are nonincreasing sequences with finite support. It will also be convenient (especially in Section 6.2) to represent f and g as Young diagrams F and G, whereby f_i is the ith row count of the diagram F, and g_i is the ith column count of the diagram G.
Let I be the set of triples (A, f, g), with f and g as above and A a finite set that spans for (Z, f, g). We define the enhancement rate Ī by the minimum of |A| + (1 − α)‖f‖ + (1 − β)‖g‖ over I, where ‖f‖ = Σ_i f_i. Observe that the elements of the above set are linear combinations of three nonnegative integers with fixed nonnegative coefficients 1, 1 − α, and 1 − β, so the minimum indeed exists.
We start with two preliminary results on Ī that hold for arbitrary Z.

Lemma 6.1. Ī(0, 0, Z) = γ(Z). Moreover, Ī(α, β, Z) = 0 if α = 1 or β = 1.

Proof. Clearly, Ī(0, 0, Z) ≤ γ(Z), as γ is obtained as a minimum over a smaller set (with zero enhancements). On the other hand, assume that A is a finite set that spans for (Z, f, g), with Ī(0, 0, Z) = |A| + ‖f‖ + ‖g‖. Then we can form a set A′ = A ∪ Y_1 ∪ Y_2, such that Y_1 and Y_2 are, respectively, horizontal and vertical translates of the corresponding Young diagrams F and G, so that no horizontal line intersects both F ∪ A and G, and no vertical line intersects both G ∪ A and F. Using a similar argument as in the proof of Lemma 3.3, A′ spans for Z, and so γ(Z) ≤ |A′| = Ī(0, 0, Z).
For the last claim, assume that, say, β = 1 and observe that ∅ spans for (Z, 0, g) for a suitably chosen g.
For the rest of this subsection, we fix α, β ∈ [0, 1) and suppress the dependence on α and β from the notation.

Lemma 6.2. For any zero-set Z, I(Z) ≤ Ī(Z).

Proof. Pick A, f, and g so that A spans for (Z, f, g) and |A| + (1 − α)‖f‖ + (1 − β)‖g‖ = Ī(Z). Create a set A_0 = A ∪ A_h ∪ A_v so that the union is disjoint, so that for every integer v ≥ 0 the line L_h(0, v) contains exactly f_v sites of A_h, so that every vertical line contains at most one site of A_h, and so that the analogous conditions hold for A_v. Moreover, make sure that no horizontal line intersects both A ∪ A_h and A_v, and no vertical line intersects both A ∪ A_v and A_h. Then A_0 spans for Z, and |A_v| = ‖g‖, |A_h| = ‖f‖. We now find an upper bound on ρ(A_0). By dividing any subset of A_0 into three pieces, we get, with the maximum taken over all sets B ⊆ A, ρ(A_0) ≤ max_B ρ(B) + (1 − α)‖f‖ + (1 − β)‖g‖ ≤ |A| + (1 − α)‖f‖ + (1 − β)‖g‖ = Ī(Z), as desired.
Finally, we show that, for large Z, I and Ī are close throughout [0, 1]². The next lemma is, by far, the most substantial step in our convergence argument.

Lemma 6.3. Fix a bounded Euclidean zero-set Z̃. Assume that δ > 0 and the discrete zero-sets Z depend on n (a dependence we suppress from the notation), and that δ square(Z) E-converges to Z̃. Assume that the positive integers m and k satisfy m ≪ δ^{−2} and 1 ≪ k ≪ δ^{−1}. Then, for some C that depends on Z̃, α, and β, the stated bound holds.

Proof. Pick a set A that spans for Z and is such that ρ(A) = I(Z).
Step 1. Let A′ = A_{>k}. Then A′ spans for Z_k, and there exists a constant C, depending on Z̃, α, and β, such that |A′| ≤ Cδ^{−2}.
The spanning claim follows from Lemma 2.5. The size bound follows from Lemma 2.6, as in the proof of Proposition 4.3.

Step 2. There exists a set Ã = Ã_d ∪ Ã_h ∪ Ã_v with the following properties: (2) |Ã| = |A′|; (3) the line counts of Ã on every horizontal (resp. vertical) line L are comparable to those of A′; (4) Ã_h has at most one point in each column and Ã_v has at most one point in each row; (5) no horizontal line intersects both Ã_d ∪ Ã_h and Ã_v, and no vertical line intersects both Ã_d ∪ Ã_v and Ã_h; and (6) Ã spans for Z_{k+⌈Cδ^{−2}/m⌉}. Note that |Ã_d^i \ Ã_d^{i+1}| ≥ m, so the final index i satisfies mi ≤ |A′|, which, together with Step 1, gives (6).

The construction proceeds inductively through a finite sequence of sets Ã_d^i, i = 0, 1, . . ..
Step 3. For the set Ã constructed in Step 2, ρ(Ã) ≤ ρ(A′).
Let φ : Ã → A′ be the bijection that is the identity on Ã_d and an appropriate horizontal or vertical translation otherwise (corresponding to the construction of Ã from A′ in Step 2). Pick B̃ ⊆ Ã and set B′ = φ(B̃). If x, y ∈ B̃ are such that φ(x) and φ(y) share a column, then so must x and y (by (4) and (5)), so |π_x(B̃)| ≥ |π_x(B′)|. Similarly, |π_y(B̃)| ≥ |π_y(B′)|. Therefore ρ(Ã) ≤ ρ(A′).

Step 4. The set Ã_d spans for the dynamics enhanced by the line counts of Ã_h and Ã_v. This follows by the same argument as in the proof of Lemma 2.5. Let f be the sequence of row counts of Ã_h and g the sequence of column counts of Ã_v; we may assume, by a rearrangement of rows and columns, that these are nonincreasing sequences.

Step 5. For the f and g so defined, Ã_d spans for (Z_{1+2k+⌈Cδ^{−2}/m⌉}, f, g). Moreover, ‖f‖ = |Ã_h| and ‖g‖ = |Ã_v|. Spanning follows from the fact that Ã_h has at most one point on any vertical line (which follows from (4)), and from the analogous fact about Ã_v.

Step 6. End of the proof of Lemma 6.3.
as desired.

Definitions of limiting objects and their basic properties
We will assume throughout this section that Z̃ is a bounded Euclidean zero-set. Pick two left-continuous nonincreasing functions f, g : [0, ∞) → R with compact support. The enhanced Euclidean neighborhood growth transformation T̃ is determined by the triple (Z̃, f, g) and is defined on Borel subsets of the plane as follows, for a Borel set A ⊆ R²₊ and x ∈ R²₊. Similarly to the discrete case, the functions f and g may be represented by continuous Young diagrams F̃ and G̃, so that f(v) = length(L_h(0, v) ∩ F̃) and g(u) = length(L_v(u, 0) ∩ G̃). Also as in the discrete case, the non-enhanced transformation is given by (Z̃, 0, 0), and we assume this version whenever we refer only to Z̃.
Note that T̃(A) is also Borel for any Borel set A, thus T̃ can be iterated. Also, as Z̃ is a continuous Young diagram, T̃(A) is well-defined even if A is unbounded and one or both of the lengths are infinite. We say that a Borel set A E-spans if its iterates T̃^t(A) eventually cover every bounded subset of R²₊. The connection between the discrete and continuous transformations is given by the following simple but useful lemma, which says that T̃ is an extension of T in the sense that T and T̃ are conjugate on square representations of discrete sets.

Lemma 6.4. Assume A ⊆ Z²₊, and assume T is given by a discrete zero-set Z and enhancing Young diagrams F and G. Let Z̃ = square(Z) be the corresponding Euclidean zero-set and F̃ = square(F), G̃ = square(G) the corresponding enhancements. Then T̃(square(A)) = square(T(A)).
Proof. This is straightforward to check.
The Euclidean counterpart of γ has a straightforward definition through the non-enhanced dynamics:

(6.2) γ(Z̃) = inf{area(A) : A is a compact subset of R²₊ that E-spans for Z̃}.
To define the counterparts of Ī and γ_thin, let I be the set of triples (A, f, g), where f and g are, as in (6.1), left-continuous nonincreasing functions and A ⊂ R²₊ is a compact set that E-spans for (Z̃, f, g). Then Ī(Z̃) and γ_thin(Z̃) are defined by the corresponding infima.

Lemma 6.5. Fix an a > 0. Then, for any (α, β) ∈ [0, 1]², Ī(α, β, aZ̃) = a²Ī(α, β, Z̃).

Proof.
A set A ⊂ R²₊ E-spans for (Z̃, F̃, G̃) if and only if aA E-spans for (aZ̃, aF̃, aG̃), and all areas scale by a².
Next are three lemmas on non-enhanced growth.
Lemma 6.6. Assume T̃ is given by a Euclidean zero-set Z̃. Suppose A_n ⊆ R²₊ is an increasing sequence of Borel sets and A = ∪_n A_n. Then ∪_n T̃(A_n) = T̃(A).

Proof. If x ∉ ∪_n T̃(A_n), then, since row(x, A_n) → row(x, A), col(x, A_n) → col(x, A), and Z̃ is closed, we have (row(x, A), col(x, A)) ∈ Z̃, and therefore x ∉ T̃(A). This proves the first claim, which in turn implies the analogous identity for the iterates T̃^t and any Borel set A, as desired.
Lemma 6.7. Assume T̃ is given by a Euclidean zero-set Z̃. If A ⊆ R²₊ is open, then so is T̃(A).

Proof. Let δ > 0 be the distance between K and A^c. Then every point y ∈ B_δ(x) has a translate of K on L_h(y) ∩ A (in particular, y + K ⊆ A), and so row(y, A) > row(x, A) − ε. Similarly, by choosing a possibly smaller δ > 0, col(y, A) > col(x, A) − ε for all y ∈ B_δ(x). Thus, for any y ∈ B_δ(x), (row(y, A), col(y, A)) ∉ Z̃, so B_δ(x) ⊆ T̃(A), and consequently T̃(A) is open.

Lemma 6.8. Assume T̃ is given by a Euclidean zero-set Z̃ and A is a Borel set that includes Z̃ in its interior. Then A E-spans.
Proof. Let A ⊊ R²₊ be an open set that includes Z̃. We claim that A cannot be E-inert. To see this, note that some vertical line L includes a point not in A. Take the lowest closed horizontal line segment, bounded by the vertical axis and L, that includes a point not in A, and let x = (u, v) be the leftmost point outside A on this segment. Clearly (row(x, A), col(x, A)) ≥ (u, v) and (u, v) ∉ Z̃, therefore x ∈ T̃(A). Thus A is not E-inert. The proof is concluded by Lemmas 6.6 and 6.7.
Lemma 6.9. For any Euclidean zero-set Z̃, Ī(0, 0, Z̃) = γ(Z̃).

Proof. By definition, we may assume that Z̃ is bounded. The inequality Ī(0, 0, Z̃) ≤ γ(Z̃) is then obvious, as γ is obtained as an infimum over a smaller set (with f = g = 0). The reverse inequality can be obtained by replacing the two Young diagram enhancements with the corresponding two initially occupied Young diagrams. We leave out the details, which are very similar to the proof in the discrete case (Lemma 6.1).

Corollary 6.10. For any Euclidean zero-set Z̃, γ(Z̃) ≤ area(Z̃). In particular, if area(Z̃) < ∞, then Ī(α, β, Z̃) ≤ γ(Z̃) < ∞ for all (α, β) ∈ [0, 1]².
Proof. The first claim follows from the definition of γ( Z) and Lemma 6.8. The second claim follows from Lemma 6.9 and monotonicity in α and β.

Euclidean limit for the enhanced growth
In this subsection, we establish the limit for the enhanced rate Ī.

Lemma 6.11. Assume Z̃ is a bounded Euclidean zero-set. Suppose that the discrete zero-sets Z_n and δ_n → 0 are such that δ_n square(Z_n) E-converges to Z̃ as n → ∞. Then δ_n²Ī(Z_n) → Ī(Z̃).

Proof. Let ε ∈ (0, 1). Define the Euclidean zero-set Z̃_n = δ_n square(Z_n). For large enough n ≥ N_1 = N_1(ε), (C1) gives (6.5). Pick a compact set K ⊆ R²₊ and two continuous Young diagrams F̃ and G̃ so that K E-spans for (Z̃, F̃, G̃) and area(K) + (1 − α)area(F̃) + (1 − β)area(G̃) < Ī(Z̃) + ε.
To get an inequality in the opposite direction, assume that n ≥ N_1 and pick a finite set A ⊂ Z²₊ and Young diagrams F and G such that A spans for (Z_n, F, G). Then δ_n square(A) is a compact set that, by Lemma 6.4, E-spans for (Z̃_n, δ_n square(F), δ_n square(G)), and then, by (6.5), it also E-spans for (1 − ε)Z̃. Therefore, the corresponding cost is at least Ī((1 − ε)Z̃) = (1 − ε)²Ī(Z̃). By taking the infimum over all triples (A, F, G), we get (6.7). The two inequalities (6.6) and (6.7) end the proof.

The smallest thin sets
Fix a zero-set Z. To prove (1.5), we need a comparison quantity analogous to Ī. To this end, we define γ_thin(Z) to be the minimum of ‖f‖ + ‖g‖ over all sequences f, g such that ∅ spans for (Z, f, g). We first sketch proofs of a couple of simple comparison lemmas.

Lemma 6.12. For any zero-set Z, γ(Z) ≤ γ_thin(Z) ≤ 2γ(Z).
Proof. The lower bound is clear, as γ_thin is the minimum over a smaller set than γ. The upper bound is a simple construction (similar to the one in the proof of Lemma 3.3): one may replace any spanning set A by a thin spanning set consisting of two pieces, one with the same row counts as A and the other with the same column counts as A.

Lemma 6.13. For any zero-set Z, the smallest size of a thin spanning set for Z lies between γ_thin(Z_1) and γ_thin(Z).
Proof. This is again a simple construction argument, as in Lemma 3.3. If ∅ spans for (Z, f, g), then the thin set A constructed by populating row i with f_i occupied points and column Σ_i f_i + 1 + j with g_j occupied points satisfies (6.8) and spans for Z. Conversely, if a thin set A spans for Z, then the row and column counts of A can be gathered into f and g (once sorted), so that (6.8) holds and ∅ spans for (Z_1, f, g).
Recall the definition of the Euclidean γ_thin from Section 6.2. We will omit the proof of the following convergence result, which can be obtained by adapting the argument for the enhancement rates.

Lemma 6.14. Assume Z̃ is a bounded Euclidean zero-set. Then γ_thin(Z̃) ≤ area(Z̃). Moreover, suppose the discrete zero-sets Z_n and δ_n > 0 satisfy δ_n → 0 and δ_n square(Z_n) E-converges to Z̃. Then δ_n²γ_thin(Z_n) → γ_thin(Z̃).

Proof of the main convergence theorem
We begin with an extension of Theorem 2.7 that is needed to reduce our argument to bounded Euclidean zero-sets.

Lemma 6.15. Let Z be any zero-set, (α, β) ∈ [0, 1]², and R > 0 an integer. Then I(α, β, Z) ≤ I(α, β, Z ∩ [0, R]²) + |Z \ [0, R]²|.

Proof. Pick a set A that spans for Z ∩ [0, R]², such that ρ(A) = I(α, β, Z ∩ [0, R]²). By Theorem 2.7, there exists a set A_1 with |A_1| ≤ |Z \ [0, R]²| such that A ∪ A_1 spans for Z. Therefore, with the supremum taken over all sets B ⊆ A and B_1 ⊆ A_1, ρ(A ∪ A_1) ≤ ρ(A) + |A_1|, which proves the claim.

Recall the definition of E-convergence from Section 1. We omit the routine proof of the following lemma, which controls the tails of an E-convergent sequence of zero-sets. We are now ready to prove our main convergence result, Theorem 1.4. Before we proceed, we need to extend the definitions of Ī and γ_thin to unbounded Euclidean zero-sets. For an arbitrary Z̃, we define both as the limits, as R → ∞, of their values on Z̃ ∩ [0, R]². Observe that, if area(Z̃) < ∞, then Ī(Z̃) ≤ γ(Z̃) ≤ area(Z̃) < ∞, and likewise γ_thin(Z̃) < ∞.
Fix an ε ∈ (0, 1). By definition, we can choose R large enough so that (6.11) holds. It follows by Lemma 6.16 that, if R is large enough, δ_n²|Z_n \ [0, δ_n^{−1}R]²| < ε for all n. Then, by Lemma 6.15, I(α, β, Z_n) ≤ I(α, β, Z_n ∩ [0, δ_n^{−1}R]²) + |Z_n \ [0, δ_n^{−1}R]²| for every n.
Moreover, δ_n square(Z_n ∩ [0, δ_n^{−1}R]²) E-converges to Z̃ ∩ [0, R]², and therefore, by Lemma 6.11, the corresponding enhancement rates converge; then, by Lemmas 6.3 and 6.2, so do the rates I. By (6.11) and (6.12), the convergence claim follows.
Proof. If area(Z̃) < ∞, then the argument is similar to the one in the preceding corollary.
Proof of Theorem 1.4. All statements on large deviation rates follow from Lemma 6.17 and Corollary 6.20, and imply (1.4). We omit the similar proof of (1.5).

Bounds on large deviations rates for large zero-sets
In Sections 7.1–7.4 we address bounds on I(α, β, Z). In Section 7.1, we complete the proof of Theorem 1.5. In Sections 7.2, 7.3, and 7.4, we prove lower bounds on I near the corners of [0, 1]², either for general Euclidean zero-sets or for an L-shaped Euclidean zero-set; these establish Theorem 1.6 and show that each of the three upper bounds on I(α, β, Z) is, in a sense, impossible to improve near one of the corners.

General bounds on I
We assume that (α, β) ∈ [0, 1] 2 . Having established the existence of I, we now recall the three propositions in Section 4.2 and complete the proof of Theorem 1.5.
Proof of Theorem 1.5. Pick a sequence of zero-sets Z_n such that δ_n square(Z_n) E-converges to Z̃ for some sequence of positive numbers δ_n → 0. To prove the lower bound, we use Proposition 4.3 with any numbers k = k_n that satisfy 1 ≪ k_n ≪ 1/δ_n, so that also δ_n square((Z_n)_{k_n}) E-converges to Z̃. To prove the upper bound (1.7), we use the inequalities (4.14) and (4.15), and the inequality I(α, β, Z_n) ≤ γ(Z_n) (see Theorem 1.3). We multiply these four inequalities by δ_n², take the limit as n → ∞, and use δ_n²|Z_n| → area(Z̃) (by the definition of E-convergence) and Theorem 1.4 to obtain (1.6) and (1.7).
A continuous version of Theorem 5.1 follows.
Corollary 7.1. For any Euclidean rectangle R̃_{a,b}, the two bounds of Theorem 1.5 coincide.

Proof. It follows from Theorems 2.9 and 1.4 that γ(R̃_{a,b}) = area(R̃_{a,b}) = ab, so the upper and lower bounds on I(α, β, R̃_{a,b}) given in Theorem 1.5 agree. (Alternatively, one may use Corollary 5.2.)

7.2 The (1, 0) corner

Theorem 7.2. Fix a Euclidean zero-set Z̃ with finite area. Then (7.1) holds.

A consequence of this theorem is a characterization of Euclidean zero-sets that attain the lower bound (1.6).

Proof. By Corollary 7.1 and Theorem 7.2, we only need to show that the second statement implies the third. Suppose there do not exist a, b ≥ 0 such that Z̃ = R̃_{a,b}. Since 0 < area(Z̃) < ∞, we may choose a, b > 0 such that, for some ε > 0, the boundary of Z̃ intersects R̃_{a,b} in intervals of length at least ε and (a − ε, b − ε) + [0, ε]² ⊂ R̃_{a,b} \ Z̃. If T̃ is the growth transformation for the dynamics given by Z̃ ∩ R̃_{a,b}, then the resulting strict inequality ends the proof.
Proof of Theorem 7.2. We first argue that it is enough to prove (7.1) when Z̃ is bounded. Indeed, once we achieve that, the lim inf in (7.1) is, for any Z̃ and any R > 0, at least area(Z̃ ∩ [0, R]²). The general result then follows by sending R → ∞. We assume that Z̃ is bounded for the rest of the proof.
We fix an α ∈ [0, 1). We also fix ε, δ > 0, to be chosen later to depend on α (and to go to 0 as α → 1). We assume that the discrete zero-sets Z are large, depend on n, and satisfy (1/n) square(Z) →_E Z̃, but for readability we drop the dependence on n from the notation.
In addition, we fix an integer k ≥ 2 that will also depend on α and increase to infinity as α → 1. We say that a zero-set Z satisfies the slope condition if there is no contiguous horizontal or vertical interval of k sites in ∂_o Z. Let a_0 and b_0 be the lengths of the longest row and the longest column of Z.
We claim that for any Z there exists a zero-set Z′, obtained from Z by at most a_0/k + b_0/k applications of the operation described below, that satisfies the slope condition. To see why this holds, assume there is a leftmost horizontal interval of k sites in ∂_o Z, ending at site (u_0, v_0). Replace Z by the zero-set obtained by moving down the points on the vertical line through (u_0, v_0) and on the lines to its right, that is, by

Observe that, first, the resulting set includes Z^{↓1}; second, if ∂_o Z does not have a contiguous vertical interval of k sites, this operation does not produce one; and, third, after at most a_0/k iterations we obtain a zero-set whose boundary has no contiguous horizontal interval of k sites. Thus we can produce a zero-set that satisfies the slope condition after at most a_0/k steps for horizontal intervals, followed by at most b_0/k steps for vertical ones, which proves the claim. The resulting Z′ satisfies

Assume that A spans for Z, and therefore also for Z′, and that |A| ≤ |Z′|. If |π_x(A)| ≤ (1 − δ)|A|, then

(7.3) ρ(α, 0, A) ≥ δ|A| ≥ δγ(Z) ≥ (1/4)δ|Z|.
We now concentrate on the case when |π_x(A)| ≥ (1 − δ)|A|. Define the narrow region of Z²₊ to be the union of the vertical lines that contain exactly one point of A, and the wide region to be the union of the vertical lines that contain at least two points of A. Let A_narrow be the subset of A that lies in the narrow region, and let A_wide be the set of remaining points of A. We claim that |A_wide| ≤ 2δ|A|. To see this, observe that 2|π_x(A)| ≤ 2|A_narrow| + |A_wide| = 2|A| − |A_wide|, and then |π_x(A)| ≥ (1 − δ)|A| gives the claim.

We will successively paint whole lines of Z²₊, including points in A, red and blue, transforming the zero-set in the process. The resulting (finitely many) zero-sets Z_i, i = 0, 1, . . ., will satisfy the slope condition, and will span with the initial set A from which the points painted by that time have been removed. The painted points will dominate the set of points that become occupied in a slowed-down version of the neighborhood growth with zero-set Z′. Initially, no point is painted and we let Z_0 = Z′, with a_0 and b_0 its largest row and column counts.
Assume that i ≥ 0 and that we have a zero-set Z_i, with a_i its largest row count. If a_i < εa_0, the procedure stops with this final i. Otherwise, choose an unpainted point x ∉ A that gets occupied by the growth given by Z_i, applied to A without the painted points. The first possibility is that at least (1 − ε)a_i unpainted points of A are on L_h(x). Then paint blue all points on L_h(x) that have not yet been painted, and let Z_{i+1} = Z_i^{↓1}. The second possibility is that fewer than (1 − ε)a_i unpainted points of A are on L_h(x). Then x is in the wide region, and there must be at least εa_i/(2k) ≥ ε²a_0/(2k) points of A on L_v(x), due to the slope condition. Paint all unpainted points in the entire neighborhood of x red, and let

If ℓ is the number of times that red points are added, then

Observe that |Z′| ≤ (a_0 + b_0). Moreover, the number of points of Z′ in rows of length at most εa_0 is at most k(εa_0)², by the slope condition. Therefore, the points of A colored blue by the final step have cardinality at least

Clearly, (7.4) holds if |A| ≥ |Z′| as well. Therefore, (7.2) and (7.4) imply

We now choose k = 1/√ε. Moreover, we observe that there exists a constant C > 1, depending on the limiting shape Z̃, such that (a_0 + b_0)² ≤ C|Z| for all sufficiently large n. (It is here that we use the assumption that Z̃ is bounded, so that a_0/n and b_0/n converge.) Therefore, when |π_x(A)| ≥ (1 − δ)|A|, (7.5) implies the bound (7.6) on ρ(α, 0, A). Then (7.3) and (7.6) together imply

Finally, we pick ε = 2(1 − α)^{1/3} to get from (7.7) that

which implies (7.1).

7.3 The (0, 0) corner for the L-shapes
As the lower bound (1.6) can be attained, we know that inf_Z̃ I(α, β, Z̃)/γ(Z̃) is a piecewise linear function that is nonzero on [0, 1)². It is natural to ask to what extent the upper bound (1.7) on sup_Z̃ I(α, β, Z̃)/γ(Z̃) can be improved. One might hope, for example, for a piecewise linear bound which is, unlike (1.7), strictly less than 1 on (0, 1]². We now demonstrate by an example that such an improvement is impossible. Our example is the limit of L-shaped zero-sets consisting of 2a − 1 symmetrically placed n × n squares. For simplicity, we assume that a ≥ 3 is an integer. (A variation of the argument can be made for any real number a > 2.) We only consider the diagonal α = β, which suffices for the purposes discussed above.
This will show that γ(Z̃) = a and prove the desired bounds.
To prove the upper bound, we build a spanning set A by a suitable placement of a patterns. Of these, a − 2 are full n × n squares, one consists of n diagonally adjacent 1 × n intervals, and the final one consists of n diagonally adjacent n × 1 intervals. To obtain A, place these a patterns so that any horizontal or vertical line intersects at most one of them. It is easy to check that A spans. Now any B ⊆ A has

which proves the upper bound in (7.9).
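For bookkeeping, the size of this spanning set matches the claimed coefficient; counting the a patterns described above gives

```latex
|A| \;=\; \underbrace{(a-2)\,n^2}_{\text{full squares}}
      \;+\; \underbrace{n\cdot n}_{1\times n\ \text{staircase}}
      \;+\; \underbrace{n\cdot n}_{n\times 1\ \text{staircase}}
      \;=\; a\,n^2 ,
```

consistent, after scaling, with γ(Z̃) = a.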
To prove the lower bound, assume that A is any set that spans for Z. By Lemma 2.5, we may replace A by another set, still denoted A, that spans for Z^k and whose every point has k other points of A on some line of its neighborhood. We assume that 1 ≪ k ≪ n.
Fix an ε > 0, to be chosen later to depend on α. Assume first that |A| > (1 + ε) · an². Then, by Lemma 2.6,

Now assume that |A| ≤ (1 + ε) · an². Fix numbers s ≥ n and r > 0, to be chosen later. If there exist r horizontal lines, each with at least s sites of A on it, then r(s − n + k) sites of A are wasted for the R_{n−k,an−k} line growth, with γ(R_{n−k,an−k}) = (n − k)(an − k), so

It follows that, if we assume

(7.11) r(s − n) − (a + 1)nk ≥ ε · an²,

then at most r horizontal lines and at most r vertical lines contain s or more sites of A. Now, A is a spanning set for both line growths, with zero-sets R_{an−k,n−k} and R_{n−k,an−k}. Using the slowed-down version of line growth, in which a single line is occupied at each time step, we see that there exist some an − k − s vertical lines, and some an − k − s horizontal lines, each with at least n − k − r sites of A. Let A_1 and A_2 be the respective sets formed by the occupied points on these vertical and horizontal lines, and

and so

(7.12) |A_dense| ≥ (a − 2)n² − 2(s − n)n − 2(ar + (a + 1)k)n − ε · an².
We now need a variant of the argument in the proof of Lemma 2.6 for an upper bound on the entropy of A. Let A_h be the set of points of A that are not in A_dense but lie on a horizontal line of a point in A_dense. Let A_h′ be the set of points of A that are not in A_dense ∪ A_h but lie on a horizontal line with at least k other points of A (and therefore with at least k other points of A_h′). Let A_v be the set of points that are not in A_dense ∪ A_h ∪ A_h′ but lie on a vertical line of a point in this union. Let A_v′ = A \ (A_dense ∪ A_h ∪ A_h′ ∪ A_v), so that any point of A_v′ shares a vertical line with at least k other points of A_v′. Then

and so

(7.13)

By (7.13), the fact that γ(Z^k) ≥ (an − k)(n − k) (which follows from Proposition 2.10), and (7.12),

(7.14) ρ(A) ≥ |A| − α(|π_x(A)| + |π_y(A)|) ≥

To guarantee (7.11) for large n, we choose s − n = a√ε n and r = (3/2)√ε n. We know that for any spanning set A, either (7.10) or (7.14) holds, so that

lim inf (1/n²)

To ensure that the second quantity inside the min is the smaller one, we need that

which is assured for all α ∈ (0, 1) with ε = ((a − 2)/a) α. This finally gives

(7.15) lim inf (1/n²)

ending the proof of the lower bound in (7.9).

7.4 The (1, 1) corner
The upper bound (1.7) provides a lower bound of −2 for the slope of sup_Z̃ I(α, α, Z̃)/γ(Z̃) at α = 1−. Continuing the theme from the previous section, we show that this bound cannot be improved either. To achieve this, we again show that the L-shapes asymptotically attain this bound, a fact that follows easily from our next theorem.
We note that, for Z̃ as in the above theorem, γ(Z̃) = a, and therefore the L-shape with a = 2 provides another case (apart from the line and bootstrap growths) for which the lower bound (1.6) is attained on the entire diagonal α = β.
The proof of Theorem 7.5 proceeds in two main steps. In the first step, which holds for general Z, we show that in the relevant circumstances an arbitrary spanning set A can be replaced by a thin spanning set of a similar size, and use this to prove (1.9). The second step is a lower bound on γ thin (Z) for the L-shaped zero-sets Z.
Lemma 7.6. Fix a δ ∈ (0, 1) and a positive integer k. Let A be a set that satisfies both |π_x(A)| + |π_y(A)| ≥ (1 − δ)|A| and A = A_{>k}. Then there exists a thin set A′ ⊆ A such that

Proof. Partition A into three disjoint sets A_h, A_v, and A_0, as in the proof of Lemma 2.6. Points in A_h lie in a row with at least k other points of A_h, points in A_v lie in a column with at least k other points of A_v, and points of A_0 lie in a column with at least k other points of A.
Choose any point in A that shares both a row and a column with other points of A, and remove it. Repeat until no point can be removed, and let A′ be the final set so obtained. Observe that A′ is thin and that, as the removed points do not affect either projection,

Moreover,

Combining (7.16) and (7.17) gives (1 − δ − 2/k)|A| ≤ |A′|, and hence |A \ A′| ≤ (δ + 2/k)|A|.

Lemma 7.7. Assume that δ, k, and A satisfy the conditions of Lemma 7.6, and suppose in addition that A spans for some zero-set Z. Then there exists a thin set B that spans for Z, such that

Proof. Let A′ ⊆ A be the thin set guaranteed by Lemma 7.6. Let B_r be a set with the same row counts as A \ A′ but with no two points in the same column, and let B_c be a set with the same column counts as A \ A′ but with no two points in the same row. Assuming A ⊆ R_{a,b}, let B_s = ((a, 0) + B_r) ∪ ((0, b) + B_c). The set B = A′ ∪ B_s is a thin set that spans (see the proof of Lemma 3.3), and satisfies |B| ≤ (1 + δ + 2/k)|A|.
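The size bound at the end of the proof of Lemma 7.7 can be spelled out, since B_r and B_c each contain exactly |A \ A′| points:

```latex
|B| \;=\; |A'| + |B_r| + |B_c|
    \;=\; |A'| + 2\,|A\setminus A'|
    \;=\; |A| + |A\setminus A'|
    \;\le\; \Bigl(1+\delta+\tfrac{2}{k}\Bigr)|A| ,
```

where the last step uses the conclusion |A \ A′| ≤ (δ + 2/k)|A| of Lemma 7.6.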
Theorem 7.9. Suppose Z̃ is a Euclidean zero-set with finite area. Then

Proof. Pick discrete zero-sets Z_n so that n^{−2} square(Z_n) →_E Z̃. Assume that A spans for Z_n, and that 1 ≪ k ≪ n throughout. The number δ ∈ (0, 1) will eventually be chosen to depend on α ∈ (0, 1).
The upper bound is a consequence of Lemma 7.8 and Theorem 1.4, and then the claimed equivalence follows from (7.18) and (1.6).
The key bound we need for the proof of Theorem 7.5 is given by the next lemma, which implies that, for an L-shaped zero-set Z, γ_thin(Z) can be much larger than γ(Z).

Lemma 7.10. Consider an L-shaped zero-set given by Z = R_{a+b,c} ∪ R_{a,c+d}, for some a, b, c, d ≥ 0. Then γ_thin(Z) ≥ bc + ad − b − d.
To prove Lemma 7.10, we need some definitions. Consider two line growths: the horizontal one with zero-set R_{a+b,c}, and the vertical one with zero-set R_{a,c+d}. Fix integers ā and c̄ such that a ≤ ā ≤ a + b and c ≤ c̄ ≤ c + d. We say that a set A H-spans if A spans for R_{a+b,c} after a thin set with c rows of ā sites each is added to A, so that no point of it shares a row or a column with a point of A. We also say that a set A V-spans if A spans for R_{a,c+d} after a thin set with a columns of c̄ sites each is added, none of whose points shares a row or a column with A. We say that a set A approximately spans if it both H-spans and V-spans. Clearly, any set that spans for Z as in Lemma 7.10 also approximately spans with ā = a and c̄ = c, so the next lemma proves Lemma 7.10.

Lemma 7.11. Any thin set A that approximately spans has |A| ≥ (c − 1)(a + b − ā) + (a − 1)(c + d − c̄).
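As a consistency check (writing ā, c̄ for the two fixed integers in the definition of approximate spanning), specializing Lemma 7.11 to ā = a and c̄ = c recovers the bound of Lemma 7.10:

```latex
(c-1)\,(a+b-\bar a) \;+\; (a-1)\,(c+d-\bar c)\,\Big|_{\bar a=a,\ \bar c=c}
  \;=\; (c-1)\,b + (a-1)\,d
  \;=\; bc + ad - b - d .
```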
Proof. We emphasize that ā and c̄ will stay fixed throughout the proof, while a ≥ 1, b ≥ ā − a, c ≥ 1, and d ≥ c̄ − c will decrease. We proceed by induction on a + b + c + d. The claim clearly holds if any of the four equalities a = 1, c = 1, a + b = ā, or c + d = c̄ holds, by the formula for the line growth γ (Proposition 2.9). From now on we assume that none of these equalities hold.
Suppose A is a thin set that approximately spans for the quadruple (a, b, c, d). The argument is divided into three cases below. We will use the slowed-down version of the line growth, whereby a single full line (horizontal or vertical) is occupied in a single time step; this is equivalent to the removal of that line and the shrinking of the rectangular zero-set by the elimination of one of its rows or columns.

Case 1. There is a horizontal line L_h with at least a + b points of A. Eliminate all points on L_h from A to get A′, and consider the new quadruple (a, b, c − 1, d). Clearly, A′ is thin and V-spans for R_{a,(c−1)+d} = R_{a,c+d}^{↓1}. To see that A′ H-spans for R_{a+b,c−1} = R_{a+b,c}^{↓1}, we need to check that the addition of a thin set of c − 1 rows of ā sites each, added to A, actually produces a spanning set for R_{a+b,c} in this case. Indeed, after L_h is made fully occupied, at most c − 1 horizontal lines ever need to be spanned in the line-by-line slowed-down version of the line growth. By the induction hypothesis,

|A| ≥ (a + b) + |A′| ≥ (a + b) + (c − 2)(a + b − ā) + (a − 1)(c + d − 1 − c̄) ≥ (c − 1)(a + b − ā) + (a − 1)(c + d − c̄),

as ā ≥ a.
Case 2. There is a vertical line L_v with at least c + d points of A. This case follows from Case 1 by symmetry.
Case 3. There exists a horizontal line L_h with a_0 ≥ a points of A, and there exists a vertical line L_v with c_0 ≥ c points of A. We assume that a_0 is the smallest such number, that is, that any horizontal line with strictly fewer than a_0 points has strictly fewer than a points, and thus strictly fewer than ā points. We also assume the analogous condition for c_0. Observe that the points of A on L_h and on L_v are disjoint, because A is thin and a, c ≥ 2. This is the only place where we use thinness; the necessity of disjointness is the reason that a or c cannot be 1, leading to the factors (c − 1) and (a − 1) in the statement. Now we consider the new quadruple (a, b − 1, c, d − 1). We will remove a points from L_h and c points from L_v, redistributing the remaining points on these two lines to make a thin set A′ that approximately spans. Once we achieve that, the induction hypothesis will imply that

|A| ≥ a + c + |A′| ≥ a + c + (c − 1)(a + b − 1 − ā) + (a − 1)(c + d − 1 − c̄) = (c − 1)(a + b − ā) + (a − 1)(c + d − c̄) + 2 > (c − 1)(a + b − ā) + (a − 1)(c + d − c̄).
It remains to demonstrate the construction and the approximate spanning of A′. Clearly, if we remove the points on L_v from A, the resulting set A_0 H-spans for R_{a+b−1,c} = R_{a+b,c}^{←1}, even without the redistribution of the excess points from L_v. Now we address the removal and redistribution of points from L_h. Let B_0 be the set A_0 augmented with a helper set H of c horizontal lines of ā points each, as in the definition of H-spanning, so that B_0 is a thin set that spans for R_{a+b−1,c}. The set B_0 still contains a_0 points on L_h. Consider the line-by-line slowdown of the line growth R_{a+b−1,c}, accompanied by the corresponding removal and shrinking of the zero-set (the spanning of a horizontal line results in the removal of that line and of the bottom row of the zero-set; likewise for vertical lines). If a_0 ≤ ā, then the line L_h is never used, as the lines of H complete the spanning before it could be used, that is, because the lines of H suffice once the shrunken zero-set has ā columns. Thus the points on L_h may be removed from B_0 to form B_1. Assume now that a_0 > ā, and recall the minimality of a_0. When L_h is spanned, the shrunken zero-set has at most a_0 columns. By minimality, only vertical lines, say L_1, . . . , L_m, with m ≤ a_0 − ā ≤ a_0 − a, are spanned before the zero-set shrinks to ā columns, after which the lines of H finish the job. Place m points on the lines L_1, . . . , L_m, one point per line, so that they share no rows with any other points of B_0, and remove all points on the line L_h, forming the set B_1. Then the lines L_1, . . . , L_m become occupied as before, since the extra point formerly provided by the spanning of the line L_h has been compensated. This brings the reduced zero-set to ā columns and leads to spanning. Therefore, B_1 \ H is a thin set that H-spans for R_{a+b−1,c}.
The removal and redistribution of the remaining points from L_v is obtained analogously; add those redistributed points to B_1 \ H to obtain the desired set A′. This justifies the induction step in this case and finishes the proof.
Proof of Theorem 7.5. Let Z_n = R_{an,n} ∪ R_{n,an}. Then Lemma 7.10 implies that γ_thin(Z_n) ≥ 2(a − 1)n² + O(n). The opposite inequality follows from the fact that a thin set with an − n sites on each of n horizontal and n vertical lines spans for Z_n. Therefore, γ_thin(Z_n) = 2(a − 1)n² + O(n).
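The spanning claim for this thin set can be checked directly on the smallest instance. Below is a minimal computational sketch (the helper names are ours, and the dynamics is simulated inside a finite box, which certifies spanning once the whole box is occupied) for a = 2 and n = 2, where the thin set has 2(a − 1)n² = 8 points:

```python
def rect(a, b):
    """Discrete rectangle R_{a,b} = [0, a-1] x [0, b-1]."""
    return {(i, j) for i in range(a) for j in range(b)}

def spans(A, Z, N):
    """Iterate the growth transformation T inside the box [0,N)^2:
    a point is added when the pair (horizontal count, vertical count)
    lies outside the zero-set Z.  Occupying the whole box certifies
    spanning once N is at least the extent of Z in each direction."""
    occ = set(A)
    while True:
        row, col = {}, {}
        for x, y in occ:
            row[y] = row.get(y, 0) + 1   # occupied points on the horizontal line at height y
            col[x] = col.get(x, 0) + 1   # occupied points on the vertical line at x
        new = {(x, y) for x in range(N) for y in range(N)
               if (x, y) not in occ
               and (row.get(y, 0), col.get(x, 0)) not in Z}
        if not new:
            return len(occ) == N * N
        occ |= new

# L-shape with a = 2, n = 2:  Z_2 = R_{4,2} union R_{2,4}.
Z2 = rect(4, 2) | rect(2, 4)

# Thin set: an - n = 2 sites on each of n = 2 horizontal and n = 2
# vertical lines; no point shares both a row and a column with others.
A = {(0, 0), (1, 0), (2, 1), (3, 1),   # two horizontal lines
     (4, 2), (4, 3), (5, 4), (5, 5)}   # two vertical lines

print(len(A), spans(A, Z2, 8))  # -> 8 True
```

The box size 8 exceeds the extent of Z_2 in both directions, so full occupation of the box forces every row and column count past the diagram and hence occupation of the entire plane.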

8 A law of large numbers for random zero-sets
Assume that n is large and that we pick at random a Young diagram of cardinality n. We consider the following two ways to make this random choice.
• Let Z n be a Young diagram of cardinality n chosen uniformly at random. We call this the Vershik sample [Ver].
• Build Z_n sequentially: start with Z_0 = ∅ and, given Z_k, obtain Z_{k+1} by adding a single site to Z_k, chosen uniformly at random among the corners, i.e., among all sites whose addition to Z_k results in a Young diagram. We call this the corner growth or Rost sample [Rom].
See [Rom] for a review of the fascinating research into the properties of the many possible random choices of a Young diagram. The key property of these selections is the corresponding asymptotic shape. Let

Z̃_Vershik = {(x, y) ∈ R²₊ : exp(−(π/√6) x) + exp(−(π/√6) y) ≥ 1}

and

Z̃_Rost = {(x, y) ∈ R²₊ : √x + √y ≤ 6^{1/4}}.
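As a sanity check (not part of the text above), both limit shapes enclose unit area, as they must for Young diagrams of cardinality n scaled by n^{−1/2} in each coordinate; for the Rost shape, with t = 6^{1/4},

```latex
\operatorname{area}\bigl\{(x,y)\in\mathbb R_+^2 : \sqrt x + \sqrt y \le t\bigr\}
  \;=\; \int_0^{t^2} \bigl(t-\sqrt x\bigr)^2\,dx
  \;=\; t^4 - \tfrac{4}{3}\,t^4 + \tfrac{1}{2}\,t^4
  \;=\; \tfrac{t^4}{6} \;=\; 1 .
```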
We now state the shape theorem. See [Rom] and [Pet] for concise proofs.
Theorem 8.1. For any ε > 0 and R > 0, the Vershik sample Z_n satisfies

as n → ∞.
As a consequence, we obtain the following law of large numbers, in which Z̃ is the corresponding limit shape and the convergence is in probability.
Proof. This follows from Theorems 1.4 and 8.1.

9 Final remarks and open problems
8. Can the existence of large deviation rates be proved for bootstrap percolation [GHPS] or for line growth [BBLN] in three dimensions? A result in this direction is proved in [BBLN], where it is also pointed out that it is not at all clear that the completion time result holds in higher dimensions.
9. What is the algorithmic complexity of computing γ(Z) when Z is given as input?
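A brute-force baseline makes the question concrete: search initial sets of increasing size by exhaustive enumeration. The sketch below (our own helper names; the search is restricted to a small box, which in general yields only an upper bound on γ) recovers the value 4 = 2 · 2 for the bootstrap-like zero-set R_{2,2}, consistent with γ(R̃_{a,b}) = ab; its running time is exponential in the size of the search region.

```python
from itertools import combinations

def spans(A, Z, N):
    """Iterate the growth transformation T inside [0,N)^2 and report
    whether the whole box becomes occupied (a spanning certificate once
    N exceeds the extent of Z in both directions)."""
    occ = set(A)
    while True:
        row, col = {}, {}
        for x, y in occ:
            row[y] = row.get(y, 0) + 1
            col[x] = col.get(x, 0) + 1
        new = {(x, y) for x in range(N) for y in range(N)
               if (x, y) not in occ
               and (row.get(y, 0), col.get(x, 0)) not in Z}
        if not new:
            return len(occ) == N * N
        occ |= new

def gamma_in_box(Z, N):
    """Smallest size of a spanning set contained in [0,N)^2 --
    in general only an upper bound on gamma(Z)."""
    box = [(x, y) for x in range(N) for y in range(N)]
    for size in range(1, N * N + 1):
        if any(spans(set(A), Z, N) for A in combinations(box, size)):
            return size

# Bootstrap-like example: Z = R_{2,2}.  The 2 x 2 block of four points
# spans, and no three points do, so the search returns 4.
R22 = {(i, j) for i in range(2) for j in range(2)}
print(gamma_in_box(R22, 4))  # -> 4
```

Even this tiny instance examines thousands of candidate sets, which illustrates why the true complexity of computing γ(Z) is unclear.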