1 Introduction

1.1 Electric Fields and Charge Fluctuations

Denote by \(\Lambda \) a stationary random point process in the complex plane \({{\mathbb {C}}}\) with intensity \(c_\Lambda \) (i.e. the mean number of points of \(\Lambda \) per unit area). If we think of \(\Lambda \) as a random distribution of identical point charges, each sample generates an electric field which we identify with a solution \(V_\Lambda \) to the equation

$$\begin{aligned} {\bar{\partial }} \, V_\Lambda = \pi \sum _{\lambda \in \Lambda }\delta _\lambda -\pi c_\Lambda \hspace{1pt} m, \end{aligned}$$

where \({\bar{\partial }}=\frac{1}{2}(\partial _x+\textrm{i} \, \partial _y)\) and where m is the Lebesgue measure. In [21] (henceforth referred to as Part I), we found that, for processes with a spectral measure \(\rho _\Lambda \), the spectral condition

$$\begin{aligned} \int _{|\xi |\le 1}\frac{\textrm{d}\rho _\Lambda (\xi )}{|\xi |^2}<\infty \end{aligned}$$
(1.1)

characterizes those processes \(\Lambda \) for which there exists a stationary electric field \(V_\Lambda \) with a well-defined second-order (covariance) structure. When it exists, the field is given by

$$\begin{aligned} V_\Lambda (z)=\lim _{R\rightarrow \infty }\sum _{|\lambda |<R}\frac{1}{z-\lambda }-\pi c_\Lambda \hspace{1pt}{\overline{z}}, \end{aligned}$$
(1.2)

and furthermore, it is essentially unique. In addition, the covariance structure of \(V_\Lambda \) is conveniently expressed in terms of the covariance structure of \(\Lambda \). The field \(V_\Lambda \) can be seen as a random analogue of the Weierstrass zeta function from the theory of elliptic functions.

Denote by \(n_\Lambda ({\mathcal {G}})=\#(\Lambda \cap {\mathcal {G}})\) the counting measure (or “charge”) of a Borel set \({\mathcal {G}}\). The asymptotic variance \(\textsf{Var}[n_\Lambda (R{\mathcal {G}})]\) of the charge fluctuations as \(R\rightarrow \infty \) is of central interest in statistical physics. For the Poisson process, the size of the charge fluctuations in a disk is proportional to the area, while for more negatively correlated point processes the fluctuations tend to be suppressed, with variance growing like \(o(R^2)\). Following Torquato–Stillinger, such processes are called hyperuniform or super-homogeneous; see [23]. Classical examples of hyperuniform point processes include the Ginibre ensemble [8] and the zero set of the Gaussian Entire Function (GEF) [3]. For both these examples, the variance of \(n_\Lambda (R{{\mathbb {D}}})\) grows like the perimeter R. It is natural to ask to what extent the geometry of the disk is important, and in particular, what role is played by boundary regularity.
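For illustration, this perimeter-type growth can be observed numerically from the covariance structure alone. The following sketch (not part of the argument below) uses a Gaussian-type model kernel \(k(t)=-e^{-\pi t^2}\) with intensity \(c_\Lambda =1\), chosen so that the total-mass condition \(\kappa _\Lambda ({{\mathbb {C}}})=0\) discussed in Sect. 1.3 holds; for any point process with this covariance structure, the variance of \(n_\Lambda (R{{\mathbb {D}}})\) reduces to a one-dimensional integral involving the area of the intersection of two disks.

```python
# Numerical sketch (not part of the paper's argument): disk-counting variance
# for a model hyperuniform kernel versus the Poisson benchmark.  We assume a
# Gaussian-type truncated two-point function k(t) = -exp(-pi t^2) with
# intensity c = 1, so that kappa(C) = 0, i.e. c + 2*pi*int_0^inf k(t) t dt = 0.
# For a radial k one has Var[n(D_R)] = c*pi*R^2 + 2*pi*int_0^{2R} k(t) A_R(t) t dt,
# where A_R(t) is the area of the intersection of two disks of radius R whose
# centres lie at distance t.
import numpy as np
from scipy.integrate import quad

c = 1.0
k = lambda t: -np.exp(-np.pi * t**2)

def lens_area(t, R):
    """Area of D(0, R) ∩ D(t, R) for 0 <= t <= 2R."""
    return 2 * R**2 * np.arccos(t / (2 * R)) - 0.5 * t * np.sqrt(4 * R**2 - t**2)

def disk_variance(R):
    # k is negligible beyond t = 8, so the integration range may be shortened
    integral, _ = quad(lambda t: k(t) * lens_area(t, R) * t, 0, min(2 * R, 8.0))
    return c * np.pi * R**2 + 2 * np.pi * integral

for R in [4, 8, 16, 32]:
    var = disk_variance(R)
    print(f"R={R:3d}   Var={var:8.3f}   Var/R={var / R:6.3f}   Poisson Var/R={c * np.pi * R:8.2f}")
# Var/R settles near a constant (perimeter-type growth), while the Poisson
# variance, which equals the area c*pi*R^2, keeps growing even after dividing by R.
```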

Since the divergence theorem expresses the charge in a domain as the electric flux through its boundary, it seems natural to ask for the behavior of the fluctuations of the flux of the field \(V_\Lambda \) through more general rectifiable curves, which need not be closed nor simple. Specifically, given a rectifiable curve \(\Gamma \) with unit normal N, we would like to describe the asymptotic variance of

$$\begin{aligned} {\mathcal {F}}_\Lambda (R\Gamma )=\int _{R\Gamma }V_\Lambda (z)\cdot N\,|\textrm{d}z| \end{aligned}$$

for large R, provided that \(\Lambda \) satisfies (1.1). To isolate the effect of the geometry of \(\Gamma \), we will make the assumption that, in addition to being stationary, the law of the point process \(\Lambda \) is invariant under rotations. We refer to such a process simply as “invariant”.

1.2 The Electric Action

For an invariant point process \(\Lambda \) subject to the spectral condition (1.1), we introduce the electric action

$$\begin{aligned} {\mathcal {E}}_\Lambda (\Gamma )= \int _{\Gamma }V_\Lambda (z) \, \textrm{d}z \end{aligned}$$

where \({\textrm{d}z}\) denotes the usual holomorphic 1-differential and where \(\Gamma \) is an arbitrary rectifiable curve in the plane. When \(\Gamma \) bounds a Jordan domain \({\mathcal {G}}\), the action coincides with \(\textrm{i}\) times the flux and equals \(2\textrm{i}\,\big (n_\Lambda ({\mathcal {G}})-c_\Lambda m({\mathcal {G}})\big )\). In general, if T and N denote the unit tangent and normal vectors to \(\Gamma \), respectively, we obtain a decomposition

$$\begin{aligned} {\mathcal {E}}_\Lambda (\Gamma )=\int _{\Gamma } \Big (V_\Lambda \cdot T + \textrm{i} \, V_\Lambda \cdot N\Big ) \, |\textrm{d}z| \overset{\textsf{def}}{=}{\mathcal {W}}_\Lambda (\Gamma )+\textrm{i} \, {\mathcal {F}}_\Lambda (\Gamma ) \end{aligned}$$

of \({\mathcal {E}}_\Lambda (\Gamma )\) in terms of the flux \({\mathcal {F}}_\Lambda (\Gamma )\) and the work \({\mathcal {W}}_\Lambda (\Gamma )\) of the field along \(\Gamma \). While we are mainly interested in flux, the action appears to be more natural from an analytical point of view. Note that the work \({\mathcal {W}}_\Lambda (\Gamma )\) vanishes when the curve is closed. It appears that in the general situation, the fluctuations of \({\mathcal {W}}_\Lambda (R\Gamma )\) are negligible compared with those of the flux \({\mathcal {F}}_\Lambda (R\Gamma )\) (see Theorem 1.6 below).

1.3 Main Results

We assume throughout that \(\Lambda \) has a finite second moment, that is, that \({\mathbb {E}}\left[ n_\Lambda (B)^2\right] <\infty \) for any bounded Borel set B. Under this assumption, there exists a measure \(\kappa _\Lambda \) (the reduced covariance measure) with the property that

$$\begin{aligned} \textsf{Cov}\left[ n_\Lambda (\varphi ),n_\Lambda (\psi )\right] = \iint _{{{\mathbb {C}}}\times {{\mathbb {C}}}}\varphi (z)\, \overline{\psi (z')}\, \textrm{d}\kappa _\Lambda (z-z^\prime ) \, \textrm{d}m(z) \end{aligned}$$

for any test functions \(\varphi ,\psi \in C^\infty _0({{\mathbb {C}}})\), where \(n_\Lambda (\varphi )\) is the standard linear statistic given by \(n_\Lambda (\varphi )=\sum _{\lambda \in \Lambda }\varphi (\lambda )\). The spectral measure \(\rho _\Lambda \) is the Fourier transform of \(\kappa _\Lambda \) (understood in the sense of distributions).

We will also assume that the reduced truncated two-point measure \(\tau _\Lambda \overset{\textsf{def}}{=}\kappa _\Lambda -c_\Lambda \delta _0\) has a density \(k_\Lambda \) with respect to planar Lebesgue measure (the truncated two-point function). Here, \(c_\Lambda \) is the intensity of \(\Lambda \), and \(\delta _0\) is the unit point mass at the origin.

We recall also the standing assumption that the stationary point process \(\Lambda \) is invariant, i.e., that the law of \(\Lambda \) is invariant under translations and rotations of the plane. Under this assumption, the two-point function is automatically radially symmetric, so that

$$\begin{aligned} \kappa _\Lambda (z)=k_\Lambda (|z|)\, \textrm{d}m(z)+c_\Lambda \delta _0(z). \end{aligned}$$

Our results will require that \((1+t^2)k_\Lambda (t)\in L^1({{\mathbb {R}}}_{\ge 0},\textrm{d}t)\). Then, by [Part I, Remark 5.3], the spectral measure \(\rho _\Lambda \) has a radial density \(h_\Lambda \) which is \(C^1\)-smooth. Furthermore, if \(\kappa _\Lambda ({{\mathbb {C}}})=0\) (i.e., \(\displaystyle \int _{{{\mathbb {C}}}} k_\Lambda (|z|) \, \textrm{d} m(z) = - c_\Lambda \)), we have the identity

$$\begin{aligned} \int _0^\infty k_\Lambda (t) \, t^2 \, \textrm{d}t= -\frac{1}{4\pi ^2}\int _{0}^\infty \frac{h_\Lambda (\tau )}{\tau ^2} \, \textrm{d}\tau , \end{aligned}$$
(1.3)

see Remark 5.1 below. Hence, the above moment assumption implies that \(\rho _\Lambda \) satisfies the spectral condition (1.1).
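As a sanity check of (1.3), one may take the model kernel \(k(t)=-e^{-\pi t^2}\) with \(c_\Lambda =1\) (so that \(\kappa _\Lambda ({{\mathbb {C}}})=0\)); with the Fourier normalization of Sect. 2.1, the Gaussian \(e^{-\pi |x|^2}\) is its own transform, so the radial spectral density is \(h_\Lambda (\tau )=1-e^{-\pi \tau ^2}\), and both sides of (1.3) equal \(-1/(4\pi )\). The following numerical sketch (ours, not taken from the paper) verifies this.

```python
# Numerical sanity check of (1.3) for a model kernel of our own choosing (not
# taken from the paper): k(t) = -exp(-pi t^2) with intensity c = 1, so that
# kappa(C) = 0.  With the Fourier normalization of Sect. 2.1 the Gaussian
# exp(-pi|x|^2) is its own transform, hence h(tau) = c + k^(tau) = 1 - exp(-pi tau^2).
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

c = 1.0
k = lambda t: -np.exp(-np.pi * t**2)
h = lambda tau: 1.0 - np.exp(-np.pi * tau**2)

# check the radial (Hankel) transform at one frequency:
tau0 = 0.7
k_hat, _ = quad(lambda t: 2 * np.pi * k(t) * j0(2 * np.pi * tau0 * t) * t, 0, np.inf)
print(c + k_hat, h(tau0))                              # both ≈ 0.785

# left- and right-hand sides of (1.3):
lhs, _ = quad(lambda t: k(t) * t**2, 0, np.inf)
rhs, _ = quad(lambda tau: h(tau) / tau**2, 0, np.inf)
print(lhs, -rhs / (4 * np.pi**2), -1 / (4 * np.pi))    # all ≈ -0.0796
```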

By a curve we mean a continuous map \(\Gamma :I\rightarrow {{\mathbb {C}}}\) of an interval I into the plane. We identify two maps if they differ by pre-composition with an order-preserving homeomorphism between intervals. We stress that \(\Gamma \) need not be injective; when it is, we call the curve simple. The curve \(\Gamma \) is said to be rectifiable if it has finite length

$$\begin{aligned} |\Gamma |=\sup _{t_0<t_1<\cdots<t_n}\sum _{j=1}^{n}\big |\Gamma (t_{j})-\Gamma (t_{j-1})\big | =\int _I|\gamma '(t)|\,\textrm{d}t<\infty , \end{aligned}$$

where the supremum is taken over all partitions \(a\le t_0<t_1<\cdots <t_n\le b\) of \(I=[a,b]\), and where, without loss of generality, it is tacitly assumed that \(\gamma \) is a Lipschitz parametrization of \(\Gamma \), so that \(\gamma '\) exists a.e. When no confusion is likely to arise, we will abuse terminology somewhat and identify a curve with its image. When referring to a particular parametrization, we will use lowercase Greek letters, and write e.g. \(\gamma :I\rightarrow \Gamma \).

The following regularity notion plays a key role in our analysis.

Definition 1.1

(Weak Ahlfors regularity) We say that a rectifiable curve \(\Gamma \) is weakly Ahlfors regular if there exists a constant \(C_\Gamma <\infty \) such that

$$\begin{aligned} \left| \int _{\Gamma \cap D}\textrm{d}z\right| \le C_\Gamma |\partial D|, \end{aligned}$$

for any Euclidean disk \(D\subset {{\mathbb {C}}}\).

The terminology is borrowed from the classical Ahlfors regularity condition, which asks that

$$\begin{aligned} \int _{\Gamma \cap D}|\textrm{d}z|\le C_\Gamma |\partial D|. \end{aligned}$$

While rectifiable Jordan curves may fail to be Ahlfors regular, it turns out that they are always weakly Ahlfors regular; see Lemma 3.1. Informally speaking, the weak Ahlfors condition is meant to prevent excessive spiralling near the endpoints of \(\Gamma \), which appears to be the main obstruction to linear growth of charge fluctuations.
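For polygonal curves, the quantity appearing in Definition 1.1 is easy to compute explicitly: the intersection of a line segment with a disk is again a (possibly empty) segment, so \(\int _{\Gamma \cap D}\textrm{d}z\) is a finite sum of endpoint differences. The following sketch (a hypothetical helper of ours, not taken from the paper) monitors the ratio \(\big |\int _{\Gamma \cap D}\textrm{d}z\big |/|\partial D|\) over randomly sampled disks for a zig-zag arc.

```python
# A sketch (our own illustration) of Definition 1.1 for polygonal curves.
# For a line segment, Gamma ∩ D is again a segment, so the complex line
# integral of dz over it is just the difference of its endpoints.
import numpy as np

rng = np.random.default_rng(0)

def segment_disk_integral(a, b, center, r):
    """int_{[a,b] ∩ D(center,r)} dz for complex endpoints a, b."""
    d = b - a
    # solve |a + t*d - center|^2 = r^2 for t and clip the interval to [0, 1]
    A = abs(d) ** 2
    B = 2 * ((a - center) * np.conj(d)).real
    C = abs(a - center) ** 2 - r**2
    disc = B**2 - 4 * A * C
    if disc <= 0:
        return 0j
    t0 = (-B - np.sqrt(disc)) / (2 * A)
    t1 = (-B + np.sqrt(disc)) / (2 * A)
    t0, t1 = max(t0, 0.0), min(t1, 1.0)
    return (t1 - t0) * d if t1 > t0 else 0j

def ahlfors_ratio(vertices, center, r):
    total = sum(segment_disk_integral(a, b, center, r)
                for a, b in zip(vertices[:-1], vertices[1:]))
    return abs(total) / (2 * np.pi * r)      # |∂D| = 2*pi*r

# a zig-zag polygonal arc:
gamma = [0j, 1 + 1j, 2 + 0j, 3 + 1j, 4 + 0j]
ratios = []
for _ in range(10000):
    center = complex(rng.uniform(-1, 5), rng.uniform(-1, 2))
    r = rng.uniform(0.01, 3.0)
    ratios.append(ahlfors_ratio(gamma, center, r))
print("largest sampled ratio:", max(ratios))
# stays bounded (roughly 0.3 for this arc), as Definition 1.1 requires
```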

Definition 1.2

(Signed length) The signed length of the intersection of two rectifiable curves \(\Gamma _1\) and \(\Gamma _2\) is given by

$$\begin{aligned} {\mathcal {L}}(\Gamma _1,\Gamma _2) =\int _{{{\mathbb {C}}}}\bigg (\sum _{s\in \gamma _1^{-1}(z),\; t\in \gamma _2^{-1}(z)} \gamma _1^\prime (s) \cdot \gamma _2^\prime (t)\bigg )\,\textrm{d}{\mathcal {H}}^1(z), \end{aligned}$$
(1.4)

where \(\gamma _1\) and \(\gamma _2\) denote the arc-length parametrizations of the two curves, i.e. parametrizations with \(|\gamma '_j|=1\) a.e., and where \({\mathcal {H}}^1\) is one-dimensional Hausdorff measure on \({{\mathbb {C}}}\).

The quantity (1.4), which was introduced in [3], measures the length of the intersection \(\Gamma _1\cap \Gamma _2\) taking orientation and multiplicity into account; see Fig. 1. The signed length \({\mathcal {L}}(\Gamma _1,\Gamma _2)\) is finite for any two weakly Ahlfors regular curves; see Remark 3.3.

Fig. 1

Left: The purple arcs \(J_1\subset \Gamma \) are traversed twice and the green arc \(J_2\) is traversed three times; these contribute \(4{\mathcal {H}}^1(J_1)\) and \(9{\mathcal {H}}^1(J_2)\), respectively, to the signed length \({\mathcal {L}}(\Gamma ,\Gamma )\). Right: For the signed length \({\mathcal {L}}(\Gamma _1,\Gamma _2)\), the arc \(K_1\) (orange) is counted with a positive sign, while \(K_2\) (purple) is counted with negative sign (Color figure online)
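For polygonal curves, the signed length (1.4) reduces to a finite sum: a pair of segments contributes only if they are collinear and overlap, in which case the contribution is the overlap length times the dot product (\(\pm 1\)) of the unit tangents. The following sketch (our own helper, not from the paper) reproduces the counting rule of Fig. 1 in two simple cases.

```python
# A discrete sketch (ours) of the signed length (1.4) for polygonal curves.
import numpy as np

def _pair_contribution(a1, b1, a2, b2, tol=1e-9):
    d1, d2 = b1 - a1, b2 - a2
    if abs(d1) < tol or abs(d2) < tol:
        return 0.0
    u1, u2 = d1 / abs(d1), d2 / abs(d2)
    # require parallel tangents and a common line
    if abs((u1 * np.conj(u2)).imag) > tol or abs(((a2 - a1) * np.conj(u1)).imag) > tol:
        return 0.0
    # coordinates along the common direction u1, measured from a1
    s = lambda p: ((p - a1) * np.conj(u1)).real
    lo1, hi1 = sorted((s(a1), s(b1)))
    lo2, hi2 = sorted((s(a2), s(b2)))
    overlap = min(hi1, hi2) - max(lo1, lo2)
    return (u1 * np.conj(u2)).real * overlap if overlap > tol else 0.0

def signed_length(poly1, poly2):
    """L(Gamma_1, Gamma_2) for polylines given as lists of complex vertices."""
    return sum(_pair_contribution(a1, b1, a2, b2)
               for a1, b1 in zip(poly1[:-1], poly1[1:])
               for a2, b2 in zip(poly2[:-1], poly2[1:]))

# the unit segment traversed in opposite directions: signed length -1
print(signed_length([0j, 1 + 0j], [1 + 0j, 0j]))                   # -1.0
# a loop that runs along [0,1] twice in the same direction (cf. Fig. 1):
# the doubly traversed segment contributes 4, the rest of the loop 3
loop = [0j, 1 + 0j, 1 + 1j, 0 + 1j, 0j, 1 + 0j]
print(signed_length(loop, loop))                                   # 7.0
```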

Theorem 1.3

Denote by \(\Lambda \) an invariant point process and let \({\mathcal {E}}_{\Lambda }(\Gamma ) = \int _{\Gamma } V_\Lambda (z) \, \textrm{d}z\). Assume that the two-point function \(k_\Lambda \) of \(\Lambda \) satisfies \((1+t^2)k_\Lambda (t)\in L^1({{\mathbb {R}}}_{\ge 0},\textrm{d}t)\) together with the zeroth moment condition

$$\begin{aligned} \int _0^\infty k_\Lambda (t)t\,\textrm{d}t=-c_\Lambda . \end{aligned}$$
(1.5)

Then, for any weakly Ahlfors regular rectifiable curves \(\Gamma _1\) and \(\Gamma _2\) we have that

$$\begin{aligned} \textsf{Cov}\big [{\mathcal {E}}_\Lambda (R\, \Gamma _1) \,, {\mathcal {E}}_\Lambda (R \, \Gamma _2)\big ] =R\left( C_\Lambda + o(1)\right) {\mathcal {L}}(\Gamma _1,\Gamma _2) \end{aligned}$$

as \(R\rightarrow \infty \), where \(\displaystyle {C_\Lambda =-8\pi ^{2}\int _0^\infty k_\Lambda (t)\,t^2 \, \textrm{d}t}\).

Note that the identity (1.3) makes it clear that \(C_\Lambda \) is positive. Since Jordan curves are always weakly Ahlfors regular (see Lemma 3.1), we get the following result.

Theorem 1.4

Assume that \(\Lambda \) satisfies the conditions of Theorem 1.3. Then for any Jordan domain \({\mathcal {G}}\) with rectifiable boundary, we have the asymptotics

$$\begin{aligned} \textsf{Var}\left[ n_\Lambda (R{\mathcal {G}})\right] = \tfrac{1}{4}R\left( C_\Lambda +o(1)\right) |\partial {\mathcal {G}}| \end{aligned}$$
(1.6)

as \(R\rightarrow \infty \).

It should be mentioned that the zeroth-moment condition (1.5) has long been known to yield the suppressed charge fluctuations (1.6) for domains \({\mathcal {G}}\) with smooth boundaries; see [14, Sect. 1.6].

Remark 1.5

Our proof of Theorem 1.3 yields that the weak Ahlfors condition can be relaxed to the maximal function criterion for the pair of rectifiable curves \((\Gamma _1,\Gamma _2)\):

$$\begin{aligned} \sup _{0<\varepsilon \le 1}\varepsilon ^{-1}\int _0^\varepsilon \frac{1}{t} \left| \int _{\Gamma _1\cap {{\mathbb {D}}}(\zeta ,t)}{\textrm{d}z}\right| \textrm{d}t \in L^1(\Gamma _2,|\textrm{d}\zeta |) \end{aligned}$$

without essential modifications. We have chosen to work with the stronger condition in Definition 1.1 to make the proofs as transparent as possible, but by doing so we miss out on a few examples where the maximal function condition would be needed. For instance, this seems to be the case for the “nested squares” curve \(\Gamma _1=\Gamma _2=\bigcup _{k\ge 0}2^{-k}\partial Q\) where \(Q=[0,1]\times [0,1]\).

There ought to exist counterexamples to the asymptotics of Theorem 1.3 if we do not impose anything like weak regularity. At least this is the case for the Ginibre ensemble, for which we show that for any \(\varepsilon >0\), there exists a rectifiable Jordan arc \(\Gamma _\varepsilon \) for which

$$\begin{aligned} \textsf{Var}\big [{\mathcal {E}}_\Lambda (R\Gamma _\varepsilon )\big ]\gtrsim R^{2-\varepsilon }|\Gamma _\varepsilon | \end{aligned}$$

holds as \(R\rightarrow \infty \); see Sects. 6.1–6.2.

Our last result concerns the asymptotics of the work \({\mathcal {W}}_\Lambda (\Gamma )\), i.e. the real part of the complex action \({\mathcal {E}}_\Lambda (\Gamma )\) (defined in Sect. 1.2).

Theorem 1.6

Assume that the two-point function \(k_\Lambda \) of the invariant point process \(\Lambda \) satisfies \((1+t^3)k_\Lambda (t)\in L^1({{\mathbb {R}}}_{\ge 0}, \textrm{d}t)\) along with the zeroth moment condition (1.5), and that \(\Gamma \) is a weakly Ahlfors regular rectifiable curve with distinct start and end points. Then, as \(R\rightarrow \infty \),

$$\begin{aligned} \textsf{Var}\big [{\mathcal {W}}_\Lambda (R\Gamma )\big ] =\left( D_\Lambda +o(1)\right) \log R \end{aligned}$$

where \(\displaystyle {D_\Lambda =2\pi ^2 \int _0^\infty t^3 k_\Lambda (t)\textrm{d}t}\). As a consequence, we have that, as \(R\rightarrow \infty \),

$$\begin{aligned} \textsf{Var}\big [{\mathcal {E}}_\Lambda (R\Gamma )\big ] =\textsf{Var}\big [{\mathcal {F}}_\Lambda (R\Gamma )\big ] + O(\log R) \,. \end{aligned}$$

In other words, under these assumptions the quantity \({\mathcal {E}}_\Lambda (R\Gamma )\) really measures the asymptotic electric flux through \(R\Gamma \).

Remark 1.7

Denoting by \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) the probability space on which the stationary point process \(\Lambda \) is defined, let \({\mathcal {F}}_\textsf{inv} \subset {\mathcal {F}}\) be the sigma-algebra of translation invariant events. Throughout this paper, we make the simplifying assumption that the spectral measure \(\rho _\Lambda \) does not have an atom at the origin. As outlined in [Part I, Sect. 2.3], this is the same as saying that the conditional intensity of \(\Lambda \), defined by

$$\begin{aligned} {\mathfrak {c}}_\Lambda = \pi ^{-1} {\mathbb {E}}\big [n_\Lambda ({{\mathbb {D}}}) \mid {\mathcal {F}}_{\textsf{inv}}\big ] , \end{aligned}$$

is non-random and equals \(c_\Lambda \) almost surely. While this assumption simplifies the formulation of our results, our methods can be adapted to deal also with the case when \(\rho _\Lambda (\{0\})>0\). We will not dwell on the details, but mention only that in this setting, the stationary vector field should be defined as in Part I by

$$\begin{aligned} V_{\Lambda }(z) = \lim _{R\rightarrow \infty }\sum _{|\lambda |<R}\frac{1}{z-\lambda } -\pi {\mathfrak {c}}_\Lambda {\overline{z}} , \end{aligned}$$

and, in Theorem 1.4, one should replace the variance \(\textsf{Var}[n_\Lambda (R\,{\mathcal {G}})]\) with the re-centered variance \(\textsf{Var}\big [n_\Lambda (R\,{\mathcal {G}})-{\mathfrak {c}}_\Lambda m(R\,{\mathcal {G}})\big ]\). Since the spectral measure \(\rho _{V_\Lambda }\) and two-point function \(K_{V_\Lambda }\) do not depend on the size of the atom \(\rho _\Lambda (\{0\})\) ([Part I, Theorem 5.8]), only very minor modifications will be needed.

It is natural to ask about asymptotic normality of the renormalized electric flux \(R^{-1/2}{\mathcal {F}}_\Lambda (R\Gamma )\) (or, equivalently, of the renormalized electric action). For the zeroes of GEFs and \(C^1\)-smooth curves, this was proven in [3] using the method of moments. An alternative is to use the clustering property of k-point functions (cf. Malyshev [13], Martin-Yalcin [15], Nazarov-Sodin [18, Theorem 1.5]), which is easy to verify for the Ginibre ensemble using its determinantal structure, and which was proven in [18] for the zero set of GEFs. We will not pursue the details here.

1.4 Related Work

The study of charge fluctuations of stationary point processes is a classical topic in mathematical physics; see [6] for a recent survey. Important early contributions were the works of Martin-Yalcin [15], Lebowitz [11], and Jancovici-Lebowitz-Manificat [9] (see also Martin’s survey [14]). Recently there has been a resurgence of interest, due partly to the role of hyperuniform systems in materials science (see e.g. [22]). This has highlighted the relationship between charge fluctuations and properties of the spectral measure; see the work [1] of Adhikari-Ghosh-Lebowitz and references therein for recent mathematical developments.

If we do not impose any smoothness condition on the spectral measure, there exist examples where the asymptotics (1.6) cease to hold. To see this, simply consider the stationary point process obtained by randomly shifting the lattice points \({\mathbb {Z}}^2\subset {{\mathbb {R}}}^2 = {{\mathbb {C}}}\) by a random variable uniformly distributed on \([0,1]^2\). In this example, the spectral measure consists of unit point masses on \({\mathbb {Z}}^2\setminus \{0\}\). Denoting by \({\mathbb {B}}^p\subset {{\mathbb {R}}}^2\) the unit ball in the \(\ell ^p\)-metric, Kim and Torquato [10] showed that \(\textsf{Var}\big [n_\Lambda (R \, {\mathbb {B}}^p) \big ]\) grows like \(R^{\sigma (p)}\), where \(\sigma (p)\) varies continuously between 1 and 2 as p ranges from 2 to infinity. It is also worth mentioning that examples of this sort persist when considering independent Gaussian perturbations of the “randomly shifted” lattice, where the spectral measure is a mixture of an absolutely continuous part and a singular part; see Yakir [24]. Another related work is the recent study by Björklund and Hartnick [2], which investigated the hyperuniformity of various point processes that serve as models for quasicrystals. In this case the asymptotics of the number variance may depend on fine arithmetic properties of the quasicrystal.
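For the randomly shifted lattice, the number variance can be computed exactly from the spectral measure via the Plancherel-type identity recalled in Sect. 2.2 below, namely \(\textsf{Var}[n_\Lambda (R{\mathcal {B}})]=\sum _{m\in {\mathbb {Z}}^2\setminus \{0\}}\big |\widehat{{1\hspace{-2.5pt}\textrm{l}}_{R{\mathcal {B}}}}(m)\big |^2\). The following sketch (ours; truncating the lattice sum is the only approximation) contrasts the disk with the square \([-R,R]^2\) at non-integer radii, illustrating the two growth regimes.

```python
# A sketch (ours, not from the paper) of the number variance of the randomly
# shifted lattice, computed from its spectral measure (unit point masses on
# Z^2 \ {0}) via the Plancherel-type identity
#     Var[n(R B)] = sum_{m != 0} |FT of 1_{R B} evaluated at m|^2.
# Truncating the lattice sum at |m_i| <= M is the only approximation made.
import numpy as np
from scipy.special import j1

M = 800
m1, m2 = np.meshgrid(np.arange(-M, M + 1), np.arange(-M, M + 1))
mask = (m1 != 0) | (m2 != 0)
m1, m2 = m1[mask].astype(float), m2[mask].astype(float)

def var_disk(R):
    # Fourier transform of the indicator of a disk of radius R: R*J1(2*pi*R*|xi|)/|xi|
    norm = np.hypot(m1, m2)
    return np.sum((R * j1(2 * np.pi * R * norm) / norm) ** 2)

def box_ft(m, R):
    # Fourier transform of 1_{[-R,R]} at frequency m: sin(2*pi*R*m)/(pi*m), value 2R at m = 0
    safe = np.where(m == 0, 1.0, m)
    return np.where(m == 0, 2.0 * R, np.sin(2 * np.pi * R * m) / (np.pi * safe))

def var_square(R):
    return np.sum((box_ft(m1, R) * box_ft(m2, R)) ** 2)

# non-integer radii avoid the degenerate case in which the square [-R,R]^2
# contains a deterministic number of lattice points (zero variance)
for R in [5.25, 10.25, 20.25]:
    print(f"R={R:6.2f}   disk: Var/R = {var_disk(R) / R:6.3f}   square: Var/R^2 = {var_square(R) / R**2:6.3f}")
# Var/R remains of order one for the disk, while Var/R^2 remains of order one
# for the square, in line with the exponents sigma(2) = 1 and sigma(infinity) = 2.
```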

For the zeros of the Gaussian Entire Function F(z), the flux \({\mathcal {F}}_\Lambda (\Gamma )\) corresponds to the change in argument of F(z) along \(\Gamma \). This was introduced by Buckley-Sodin in [3], though the idea was present already in Lebowitz’s work [11] on charge fluctuations in Coulomb systems. Buckley and Sodin obtained the large R-asymptotics of \(\textsf{Cov}\left[ {\mathcal {F}}_\Lambda (R\Gamma _1),{\mathcal {F}}_\Lambda (R\Gamma _2)\right] \) for piecewise \(C^1\)-regular curves \(\Gamma _1\) and \(\Gamma _2\) with a proof which used the properties of that ensemble. It is not clear whether the proof given there extends to weakly Ahlfors regular curves. We mention that the signed length recently appeared in the work of Notarnicola-Peccati-Vidotto on the length of nodal lines in Berry's random planar wave model [19].

For determinantal point processes (including the infinite Ginibre ensemble) Lin [12] very recently obtained a stronger version of Theorem 1.4, which is valid for a more general class of domains \({\mathcal {G}}\) having a bounded perimeter. He also showed that for a very large class of domains \({\mathcal {G}}\), \(\textsf{Var}\big [n_\Lambda (R{\mathcal {G}})\big ]\) is comparable to \(R^{\alpha ({\mathcal {G}})}\), where \(\alpha ({\mathcal {G}})\) is the Minkowski dimension of \(\partial \mathcal G\). Lin’s approach is based on the study of asymptotics of the functional

$$\begin{aligned}&\iint _{{{\mathbb {R}}}^2\times {{\mathbb {R}}}^2} \big ({1\hspace{-2.5pt}\textrm{l}}_{{\mathcal {G}}}(x) - {1\hspace{-2.5pt}\textrm{l}}_{{\mathcal {G}}}(y)\big )^2 \rho \big (R|x-y|\big ) \, \textrm{d}m(x)\, \textrm{d}m(y) \\&\quad =\iint _{{\mathcal {G}}\times {\mathcal {G}}^c} \rho \big (R|x-y|\big ) \, \textrm{d}m(x)\, \textrm{d}m(y) \end{aligned}$$

as \(R\rightarrow \infty \), where \(\rho \) is a non-negative integrable function which decays sufficiently fast at infinity (cf. Dávila [4]). In our case \(\rho =-k_\Lambda \), the (truncated) two-point function of \(\Lambda \) taken with negative sign. Non-negativity of \(\rho \) seems to be an obstacle for using this technique for non-determinantal point processes. Nevertheless, Lin mentions [12, Remark 1.2.3] that his techniques can also be applied to the zeros of the Gaussian Entire Function.

There is also a resemblance between the topic of this paper and the study of “irregularities of distribution” e.g. in the work of Montgomery [17, Ch. 6].

1.5 Outline of the Paper

The starting point for the asymptotic analysis of \(\textsf{Var}\big [{\mathcal {E}}_\Lambda (R\Gamma )\big ]\) is the identity

$$\begin{aligned} \textsf{Var}\big [{\mathcal {E}}_\Lambda (R\Gamma )\big ] =\mathsf{p.v.}\iint _{R\Gamma \times R\Gamma } K(|x-y|) \, \textrm{d}x \textrm{d}{\bar{y}}, \end{aligned}$$
(1.7)

where K is the (singular) covariance kernel of \(V_\Lambda \) (see Sect. 4) and where the principal value integral is understood in the sense of the limit

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\iint _{R\Gamma \times R\Gamma } {1\hspace{-2.5pt}\textrm{l}}_{\{|x-y|>\varepsilon \}} K(|x-y|)\,\textrm{d}x\textrm{d}{\bar{y}}. \end{aligned}$$

The article is organized as follows. In Sect. 2 we recall various preliminaries, mainly on the second-order structure of stationary processes. Section 3 is devoted to geometric observations concerning the weak-Ahlfors condition. In Sect. 4 we establish the formula (1.7) for the variance and recall a convenient representation of the kernel K from Part I. The proofs of the main results are given in Sect. 5, and the existence of rectifiable but not weakly Ahlfors regular Jordan arcs with large charge fluctuations is discussed in Sect. 6 (this is made rigorous for the infinite Ginibre ensemble).

2 Preliminaries

2.1 Notation and Conventions

We will frequently use the following notation.

  • \({{\mathbb {C}}}\), \({{\mathbb {R}}}\), \({{\mathbb {R}}}_{\ge 0}\); the complex plane, the real line and the half line \({{\mathbb {R}}}_{\ge 0}=\{x\in {{\mathbb {R}}}:x\ge 0\}\)

  • \({{\mathbb {D}}}(x,R)\); the disk \(\{z\in {{\mathbb {C}}}:|z-x|<R\}\). We let \({{\mathbb {D}}}(0,1)={{\mathbb {D}}}\)

  • \(\partial =\partial _z\) and \({\bar{\partial }} =\partial _{{{\bar{z}}}}\); the Wirtinger derivatives

    $$\begin{aligned} \partial =\frac{1}{2}\left( \frac{\partial }{\partial x} -\textrm{i}\frac{\partial }{\partial y}\right) ,\qquad {\bar{\partial }}=\frac{1}{2}\left( \frac{\partial }{\partial x} +\textrm{i}\frac{\partial }{\partial y}\right) \end{aligned}$$
  • m; the Lebesgue measure on \({{\mathbb {C}}}\)

  • \({\widehat{f}}\); the Fourier transform, with the normalization

    $$\begin{aligned} {\widehat{f}}(\xi )=\int _{{{\mathbb {C}}}}\,e^{-2\pi \textrm{i}x\cdot \xi }f(x)\,{\textrm{d}}m(x) \end{aligned}$$
  • \({\mathfrak {D}}\), \({\mathcal {S}}\); the class of compactly supported \(C^\infty \)-smooth functions and the class of Schwartz functions, respectively

  • \({\mathbb {E}}\), \(\textsf{Cov}\), \(\textsf{Var}\); the expectation, covariance and variance with respect to the probability space \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) on which \(\Lambda \) is defined

  • K; the two-point function for the stationary field \(V_\Lambda \); see Lemma 4.1.

  • \(\rho _\Lambda \), \(\kappa _\Lambda \), \(\tau _\Lambda \); spectral measure, and the reduced and reduced truncated covariance measures for \(\Lambda \); see Sect. 2.2.

  • \(c_\Lambda \); the intensity of \(\Lambda \)

  • \(\tau (x)=\tau _\Gamma (x)\); the “net tangent” \(\tau (x)\overset{\textsf{def}}{=}\sum _{t\in \gamma ^{-1}(x)}\gamma '(t)\) for an oriented rectifiable curve \(\Gamma \); see (3.3)

  • \(n_\Lambda \); the random counting measure \(n_\Lambda =\sum _{\lambda \in \Lambda }\delta _\lambda \)

  • \(\mu (f)=(f,\mu )\); the distributional action of the measure \(\mu \); i.e. \(\mu (f)=\int f\,{\textrm{d}\mu }\)

  • \(\nu _\Gamma \); the current which acts on one-forms \(\omega \) by \(\nu _\Gamma (\omega )= \int _{\Gamma }\omega \). By a slight abuse of notation, we will write \(\nu _\Gamma (f)\) to mean \(\nu _\Gamma (f \, \textrm{d}z)\). We will identify \(\nu _\Gamma \) with a complex-valued finite measure by setting \(\nu _\Gamma (B)=\int _{\Gamma \cap B}\textrm{d}z\) for Borel sets \(B\subset {{\mathbb {C}}}\).

  • \(\phi _\varepsilon \); the Gaussian \(\phi _\varepsilon (z) =\frac{1}{\pi \varepsilon ^2}e^{-|z|^2/\varepsilon ^2}\).

We use the standard O-notation and the notation \(\lesssim \) with interchangeable meaning. If a limiting procedure involves an auxiliary parameter a, we write e.g. \(f_a(x)=O_a(g(x))\) to indicate the dependence of the implicit constant on the parameter.

2.2 The Covariance Structure of \(\Lambda \)

For a discussion of the spectral measure and various covariance measures of the point process \(\Lambda \), we refer to Sect. 2 of Part I. For the reader’s convenience, we recall the most central notions here.

We assume throughout that the point process has a finite second moment, i.e. that \({\mathbb {E}}\big [n_\Lambda (B)^2\big ]<\infty \) holds for any bounded Borel set B. Under this assumption there exists a measure \(\kappa _\Lambda \) such that

$$\begin{aligned} \textsf{Cov}\big [n_\Lambda (\varphi ),n_\Lambda (\psi )\big ] =\iint _{{{\mathbb {C}}}\times {{\mathbb {C}}}}\varphi (z)\overline{\psi (z')} \, \textrm{d}\kappa _\Lambda (z-z')\textrm{d}m(z). \end{aligned}$$

This is the so-called reduced covariance measure. It is often more convenient to write \(\kappa _\Lambda =\tau _\Lambda +c_\Lambda \delta _0\). Indeed, for the standard point processes that we have in mind (the Ginibre ensemble, and the zero set of the GEF) the measure \(\tau _\Lambda \) has a density \(k_\Lambda \) with respect to planar Lebesgue measure, called the (truncated) two-point function. For the Ginibre ensemble, the two-point function takes the particularly simple form \(k_\Lambda (t)=-\pi ^{-2}e^{-\pi \,t^2}\), while for the zeros of the GEF it is given by

$$\begin{aligned} k_\Lambda (t)=\frac{1}{2}\, \frac{\textrm{d}^2}{\textrm{d}t^2}\, t^2 (\coth t - 1). \end{aligned}$$

The spectral measure \(\rho _\Lambda \) is the Fourier transform of \(\kappa _\Lambda \), and it satisfies the Plancherel-type identity

$$\begin{aligned} \textsf{Cov}\left[ n_\Lambda (\varphi ), n_\Lambda (\psi )\right] = \int _{{{\mathbb {C}}}} {\widehat{\varphi }}(\xi ) \overline{\widehat{\psi }(\xi )}\,{\textrm{d}}\rho _{\Lambda }(\xi ) = \langle \widehat{\varphi }, \widehat{\psi } \rangle _{L^2(\rho _\Lambda )}, \end{aligned}$$

for test functions \(\varphi ,\psi \in {\mathfrak {D}}\). This relation extends to \(\varphi ,\psi \in {\mathcal {S}}\); see Remark 2.1 in Part I.

Under the spectral condition (1.1), we define the stationary random field \(V_\Lambda \) by (1.2) (see also Sect. 5.1, Part I). The spectral measure \(\rho _{V_\Lambda }\) of \(V_\Lambda \) is given by

$$\begin{aligned} \textrm{d}\rho _{V_\Lambda }(\xi ) = {1\hspace{-2.5pt}\textrm{l}}_{{{\mathbb {C}}}\setminus \{0\}}(\xi )\, \frac{\textrm{d}\rho _\Lambda (\xi )}{|\xi |^2}, \end{aligned}$$

see Theorem 5.8 in Part I.

2.3 Admissible Measures

For most of the article, it will be convenient to replace the current \(\nu _\Gamma (f\textrm{d}z)=\int _\Gamma f(z){\textrm{d}z}\) with the more general observables

$$\begin{aligned} \mu (f)\overset{\textsf{def}}{=}\int f\, {\textrm{d}\mu } \end{aligned}$$

where \(\mu \) is a complex-valued measure of finite total variation. The weak Ahlfors regularity condition then corresponds to the following notion.

Definition 2.1

(Admissible complex-valued measures)

We say that a compactly supported complex-valued Borel measure \(\mu \) on \({{\mathbb {C}}}\) of finite total variation is admissible if there exists a constant \(C=C_\mu \) such that

$$\begin{aligned} |\mu ({{\mathbb {D}}}(x,r))|\le C r \end{aligned}$$

for all \(x\in {{\mathbb {C}}}\) and \(r>0\).

Denote by \(\phi _\varepsilon \) the Gaussian

$$\begin{aligned} \phi _\varepsilon (z)=\frac{1}{\pi \varepsilon ^2}e^{-|z|^2/\varepsilon ^2}. \end{aligned}$$
(2.1)

Claim 2.2

Assume that \(\mu \) is admissible with admissibility constant \(C_\mu \), and define a regularized measure by \(\mu _\varepsilon \overset{\textsf{def}}{=}\phi _\varepsilon *\mu \). Then \(\mu _\varepsilon \) is admissible with the same constant \(C_\mu \).

Proof

From the definition of \(\mu _\varepsilon \), Fubini’s theorem, and a linear change of variables we get

$$\begin{aligned} \mu _\varepsilon ({{\mathbb {D}}}(x,r))&=\int _{{{\mathbb {D}}}(x,r)}\int _{{{\mathbb {C}}}}\phi _\varepsilon (z-w) \, {\textrm{d}\mu }(w)\textrm{d}m(z)\\&=\iint _{{{\mathbb {C}}}\times {{\mathbb {C}}}}{1\hspace{-2.5pt}\textrm{l}}_{{{\mathbb {D}}}(x,r)}(\zeta +w) \phi _{\varepsilon }(\zeta ) \, {\textrm{d}\mu }(w)\textrm{d}m(\zeta )\\&=\int _{{{\mathbb {C}}}}\mu \left( {{\mathbb {D}}}(x+\zeta ,r)\right) \phi _\varepsilon (\zeta ) \,\textrm{d}m(\zeta ). \end{aligned}$$

Since \(\phi _\varepsilon \ge 0\) with \(\Vert \phi _\varepsilon \Vert _{L^1({{\mathbb {C}}},\textrm{d}m)}=1\), this gives the upper bound

$$\begin{aligned} \frac{\left| \mu _\varepsilon ({{\mathbb {D}}}(x,r))\right| }{r}\le \int _{{{\mathbb {C}}}}\frac{|\mu \left( {{\mathbb {D}}}(x+\zeta ,r)\right) |}{r} \phi _\varepsilon (\zeta ) \, \textrm{d}m(\zeta )\le C_\mu \end{aligned}$$

as claimed. \(\square \)

3 Geometric Lemmas

3.1 Rectifiable Jordan Curves are Weakly Ahlfors Regular

We supply a simple proof that any rectifiable Jordan curve is weakly Ahlfors regular.

Lemma 3.1

Every rectifiable Jordan curve \(\Gamma \) is weakly Ahlfors regular. In fact, for any disk D we have the upper bound

$$\begin{aligned} \left| \int _{\Gamma \cap D}{\textrm{d}z}\right| \le |\partial D|. \end{aligned}$$
Fig. 2

Figure pertaining to the proof that Jordan curves are weakly Ahlfors regular (Lemma 3.1). The union \({\mathcal {C}}\) of circular arcs intersecting the Jordan domain \({\mathcal {G}}\) is shown in purple, and the domains enclosed by \((\Gamma \cap D)\cup {\mathcal {C}}\) are lightly shaded

Proof

Denote by \({\mathcal {G}}\) the Jordan domain enclosed by \(\Gamma \) and fix an open disk D. We first assume that \(\Gamma \) is analytic, which implies that either \(\Gamma =\partial D\), or it has finitely many intersections with \(\partial D\). Indeed, after a translation and a rescaling, we may assume that D is the unit disk. If \(\phi :{{\mathbb {D}}}\rightarrow {\mathcal {G}}\) is a conformal map, it extends conformally past the boundary, so that \(\phi \) is real-analytic on the circle. If \(\#(\Gamma \cap \partial D)\) were infinite, then the zeros of the real-analytic function \(f(s)\overset{\textsf{def}}{=}|\phi (e^{\textrm{i}s})|^2-1\) would have an accumulation point, which would imply that \(f\equiv 0\), so \(\Gamma \) coincides with a circle. Hence, \({\mathcal {G}}\cap D\) is a finite union of Jordan domains with piecewise analytic boundaries. Moreover, \({\mathcal {C}}\overset{\textsf{def}}{=}\overline{{\mathcal {G}}\cap \partial D}\) is a disjoint finite union of circular arcs. If these are given the positive orientation, the domain \({\mathcal {G}}\) always remains on the left-hand side as we traverse \({\mathcal {C}}\), and we find that the curve

$$\begin{aligned} \left( \Gamma \cap D\right) \cup {\mathcal {C}}\end{aligned}$$

is a finite union of piecewise smooth Jordan curves, each of which bounds a connected component of \({\mathcal {G}}\cap D\); see Fig. 2. Cauchy’s theorem then gives that

$$\begin{aligned} \int _{\Gamma \cap D}{\textrm{d}z}= -\int _{{\mathcal {C}}}{\textrm{d}z}, \end{aligned}$$
(3.1)

and we clearly have that

$$\begin{aligned} \left| \int _{{\mathcal {C}}}{\textrm{d}z}\right| \le \int _{{\mathcal {C}}}|\textrm{d}z|=|\partial D|. \end{aligned}$$
(3.2)

Hence, the proof is complete for analytic curves.

For a general rectifiable Jordan curve \(\Gamma \), we argue by approximation. By Carathéodory’s theorem, there exists a conformal mapping \(\phi \) of \({{\mathbb {D}}}\) onto \({\mathcal {G}}\), such that \(\phi \) extends to a homeomorphism \(\phi :\overline{{{\mathbb {D}}}}\rightarrow \overline{{\mathcal {G}}}\). We obtain the desired approximation by taking \(\Gamma _t\) to be the curve parametrized by \(\phi (t e^{\textrm{i}s})\), \(s\in [0,2\pi )\), for \(0<t<1\) and letting \(t\rightarrow 1\). Since \(\Gamma \) is rectifiable, a theorem of Riesz and Privalov (see [20, 6.8]) asserts that the derivative \(\phi '(z)\) belongs to the Hardy space \(H^1({{\mathbb {T}}})\). In particular, the radial limit \(\phi '=\lim _{t\rightarrow 1}\phi '_t\) exists a.e. on \({{\mathbb {T}}}\), and \(\phi _t'\rightarrow \phi '\) in \(L^1({{\mathbb {T}}})\). Since \(\phi (e^{\textrm{i}s})\) supplies an absolutely continuous parametrization of \(\Gamma \), the quantity of interest may be expressed as

$$\begin{aligned} \int _{\Gamma \cap D}\textrm{d}z=\int _0^{2\pi }{1\hspace{-2.5pt}\textrm{l}}_{D}\big (\phi (e^{\textrm{i}s})\big ) \partial _s\phi (e^{\textrm{i}s})\,\textrm{d}s=\int _0^{2\pi }{1\hspace{-2.5pt}\textrm{l}}_{D}\big (\phi (e^{\textrm{i}s})\big ) \textrm{i}e^{\textrm{i}s}\phi '(e^{\textrm{i}s})\,\textrm{d}s. \end{aligned}$$

Next, note that \({1\hspace{-2.5pt}\textrm{l}}_{D}\circ \phi _t\rightarrow {1\hspace{-2.5pt}\textrm{l}}_{D}\circ \phi \) holds everywhere on the circle. Indeed, since D is open, any point \(\phi (e^{\textrm{i}s})\in D\) has a neighborhood \(V\subset D\). But since \(\phi _t(e^{\textrm{i}s})\rightarrow \phi (e^{\textrm{i}s})\) for any s, we have \(\phi _t(e^{\textrm{i}s})\in V\) for t sufficiently close to 1.

As a consequence of these properties of the approximating parametrization we find that for any fixed disk D, we have

$$\begin{aligned} \lim _{t\rightarrow 1}\int _{0}^{2\pi }{1\hspace{-2.5pt}\textrm{l}}_{D}\big (\phi _t(e^{\textrm{i}s})\big ) \textrm{i}e^{\textrm{i}s}\phi _t'(e^{\textrm{i}s})\,\textrm{d}s= \int _{0}^{2\pi }{1\hspace{-2.5pt}\textrm{l}}_{D}\big (\phi (e^{\textrm{i}s})\big )\textrm{i}e^{\textrm{i}s} \phi '(e^{\textrm{i}s})\,\textrm{d}s, \end{aligned}$$

or, in other words,

$$\begin{aligned} \lim _{t\rightarrow 1 }\int _{\Gamma _t\cap D}\textrm{d}z=\int _{\Gamma \cap D}\textrm{d}z. \end{aligned}$$

Hence, for any given disk D and any fixed \(\varepsilon >0\), we let t be sufficiently close to 1 so that

$$\begin{aligned} \left| \int _{\Gamma \cap D}\textrm{d}z-\int _{\Gamma _t\cap D}\textrm{d}z\right| \le \varepsilon |\partial D|. \end{aligned}$$

But since \(\Gamma _t\) is analytic, it follows from (3.1) and (3.2) and the reverse triangle inequality that

$$\begin{aligned} \left| \int _{\Gamma \cap D}\textrm{d}z\right| \le (1+\varepsilon )|\partial D|. \end{aligned}$$

Since \(\varepsilon >0\) was arbitrary, the claim follows. \(\square \)

If a rectifiable Jordan arc \(\Gamma \) can be completed to a rectifiable Jordan curve by appending a weakly Ahlfors regular Jordan arc, then \(\Gamma \) is also weakly Ahlfors regular; cf. Fig. 3. The existence of such an arc would be guaranteed if, e.g., \(\Gamma \) does not wind too wildly near any of its endpoints. Hence, at least for rectifiable Jordan arcs, only the behavior near the endpoints matters for weak regularity.

3.2 The Density of \(\nu _{\Gamma }\)

We denote by \(\gamma :I\rightarrow \Gamma \) the arc-length parametrization of \(\Gamma \), and introduce the (“net tangent”) function

$$\begin{aligned} \tau (x)=\tau _\Gamma (x)\overset{\textsf{def}}{=}\sum _{t\in \gamma ^{-1}(x)}\gamma '(t). \end{aligned}$$
(3.3)

Let \(\nu _\Gamma \) be the complex-valued measure given by the integration current over \(\Gamma \). That is,

$$\begin{aligned} \int h\,\textrm{d}\nu _{\Gamma }=\int _{\Gamma }h(z)\,\textrm{d}z=\int _{I}h(\gamma (t)) \, \gamma '(t)\,\textrm{d}t. \end{aligned}$$
(3.4)

Then, for any Borel set B, \(\displaystyle \nu _\Gamma (B)=\int _{\Gamma \cap B}\textrm{d}z\).

Recall that \({\mathcal {H}}^1\) is the one-dimensional Hausdorff measure on \({{\mathbb {C}}}\).

Fig. 3

A Jordan arc \(\Gamma \) (black) which can be completed to a Jordan curve by appending a weakly Ahlfors regular arc (red), implying that \(\Gamma \) is weakly Ahlfors; cf. Lemma 3.1 (Color figure online)

Lemma 3.2

Assume that \(\Gamma \) is a weakly Ahlfors regular rectifiable curve. Then for \({\mathcal {H}}^1\)-a.e. x, it holds that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\frac{1}{\varepsilon }\int _{0}^\varepsilon \frac{\nu _{\Gamma }\left( {{\mathbb {D}}}(x,r)\right) }{2r}\textrm{d}r=\tau (x). \end{aligned}$$

Proof

From (3.4), it is evident that \(\nu _{\Gamma }\) is absolutely continuous with respect to arc-length measure on \(\Gamma \), which in turn is absolutely continuous with respect to \({\mathcal {H}}^1\big \vert _{\Gamma }\). In fact, by applying the change of variables formula in [5, Theorem 3.9] to the functions \(g=(h\circ \gamma )\,\gamma '\) and \(f=\gamma \) (so that the Jacobian satisfies \(Jf=1\) a.e.), we find that

$$\begin{aligned} \int _{I}h(\gamma (t))\gamma '(t)\, \textrm{d}t= \int _{\Gamma }h(x)\left( \sum _{t\in \gamma ^{-1}(x)} \gamma '(t)\right) \textrm{d}{\mathcal {H}}^1(x) =\int _{\Gamma }h(x)\tau (x)\, \textrm{d}{\mathcal {H}}^1(x), \end{aligned}$$

so that \(\textrm{d}\nu _{\Gamma }(x)=\tau (x)\, \textrm{d}{\mathcal {H}}^1\big \vert _{\Gamma }(x)\). In view of the upper bound

$$\begin{aligned} \int _{\Gamma }|\tau (x)|\,\textrm{d}{\mathcal {H}}^1(x)&\le \int _{\Gamma }\#\big \{t\in \gamma ^{-1}(x)\big \}\textrm{d}{\mathcal {H}}^1(x)\\&=\int _I |\gamma '(t)|\,\textrm{d}t =|\Gamma |, \end{aligned}$$

and of the rectifiability of \(\Gamma \), we have \(\tau \in L^1({\mathcal {H}}^1\vert _{\Gamma })\).

We next claim that

$$\begin{aligned} \frac{{\mathcal {H}}^1\vert _{\Gamma }({{\mathbb {D}}}(x,r))}{2r}\xrightarrow {r\rightarrow 0} 1. \end{aligned}$$
(3.5)

Indeed, the upper bound follows from the density bound [16, Theorem 6.2] for general rectifiable sets, and the lower bound is a consequence of the fact that \(\Gamma \) has a tangent at \({\mathcal {H}}^1\)-a.e. \(x\in \Gamma \).

Since \({\mathcal {H}}^1\vert _{\Gamma }\) is a Borel regular measure (see [16, p. 57]), we may apply the Lebesgue-Besicovitch differentiation theorem ([5, Theorem 1.32]) along with (3.5) to obtain

$$\begin{aligned} \lim _{r\rightarrow 0} \frac{\nu _{\Gamma }({{\mathbb {D}}}(x,r))}{2r}= \lim _{r\rightarrow 0} \frac{1}{{\mathcal {H}}^1\vert _{\Gamma }({{\mathbb {D}}}(x,r))} \int _{\Gamma \cap {{\mathbb {D}}}(x,r)}\tau (y)\, \textrm{d}{\mathcal {H}}^1(y) \cdot \frac{{\mathcal {H}}^1\vert _{\Gamma }({{\mathbb {D}}}(x,r))}{2r} =\tau (x)\nonumber \\ \end{aligned}$$
(3.6)

for \({\mathcal {H}}^1\)-a.e. \(x\in {{\mathbb {C}}}\).

Observe next that

$$\begin{aligned} \frac{1}{\varepsilon }\int _0^\varepsilon \frac{\nu _{\Gamma } \left( {{\mathbb {D}}}(x,r)\right) }{2r}\textrm{d}r=\int _0^1 \frac{\nu _{\Gamma }\left( {{\mathbb {D}}}(x,\varepsilon t)\right) }{2\varepsilon t}\textrm{d}t, \end{aligned}$$

and in view of the weak Ahlfors regularity of \(\Gamma \), the integrand on the right-hand side is bounded in modulus by \(\pi C_\Gamma \). Hence, the pointwise convergence (3.6) and the bounded convergence theorem together give that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\frac{1}{\varepsilon }\int _0^\varepsilon \frac{\nu _{\Gamma }\left( {{\mathbb {D}}}(x,r)\right) }{2r}\textrm{d}r =\int _0^1 \tau (x)\textrm{d}t=\tau (x) \end{aligned}$$

for \({\mathcal {H}}^1\)-a.e. \(x\in {{\mathbb {C}}}\). This completes the proof. \(\square \)

Remark 3.3

In view of Lemma 3.2, the net tangent \(\tau _\Gamma \) of the weakly Ahlfors regular rectifiable curve \(\Gamma \) (i.e., the density of \({\textrm{d}z}\) along \(\Gamma \) with respect to Hausdorff measure \({\mathcal {H}}^1\)) belongs to \(L^\infty ({{\mathbb {C}}},\textrm{d}{\mathcal {H}}^1)\).

Furthermore, since \(\Gamma _1\) and \(\Gamma _2\) are assumed to be rectifiable, the angle between the tangents is either 0 or \(\pi \) at \({\mathcal {H}}^1\)-a.e. point where the curves intersect. Indeed, the set of points in the intersection at which one of the curves fails to have a unimodular tangent has \({\mathcal {H}}^1\)-measure 0. Furthermore, for any point in \(\Gamma _1\cap \Gamma _2\) where both curves have a unimodular tangent and the angle of intersection is not 0 or \(\pi \), there exists a punctured neighborhood where the curves do not intersect. Therefore, the set of such points is at most countable.

Combining both observations with the definition (1.4) of \({\mathcal {L}}(\Gamma _1,\Gamma _2)\) and formula (3.3) for the net tangent \(\tau _\Gamma \), we arrive at the formula

$$\begin{aligned} {\mathcal {L}}(\Gamma _1,\Gamma _2)&= \int _{{{\mathbb {C}}}}\bigg (\sum _{s\in \gamma _1^{-1}(z),\; t\in \gamma _2^{-1}(z)} {\text {Re}}\big (\gamma _1^\prime (s) \,\overline{\gamma _2^\prime (t)}\big )\bigg )\,\textrm{d}{\mathcal {H}}^1(z) \\&=\int _{{{\mathbb {C}}}}\bigg (\sum _{s\in \gamma _1^{-1}(z),\; t\in \gamma _2^{-1}(z)} \gamma _1^\prime (s)\,\overline{\gamma _2^\prime (t)}\bigg )\,\textrm{d}{\mathcal {H}}^1(z)\\&=\int _{\Gamma _1\cap \Gamma _2}\tau _{\Gamma _1}(x) \overline{\tau _{\Gamma _2}(x)} \, \textrm{d}{\mathcal {H}}^1(x) \, . \end{aligned}$$

In particular, we see that the signed length \({\mathcal {L}}(\Gamma _1,\Gamma _2)\) is finite whenever \(\Gamma _1\) and \(\Gamma _2\) are weakly Ahlfors regular rectifiable curves.

4 The Covariance Structure of \({\mathcal {E}}_\Lambda (\Gamma )\)

The purpose of this section is to establish the basic formula (1.7) for the covariance of the action of \(V_\Lambda \) along two weakly Ahlfors regular rectifiable curves. The starting point is the following result taken from Proposition 5.9 and Remark 5.11 in Part I.

Lemma 4.1

Assume that \((1+t^2)k_\Lambda \in L^1({{\mathbb {R}}}_{\ge 0}, \textrm{d}t)\). Then for \(\varphi , \psi \in {\mathcal {S}}\), we have

$$\begin{aligned} \textsf{Cov}\left[ V_\Lambda (\varphi ),V_\Lambda (\psi )\right] =\iint _{{{\mathbb {C}}}\times {{\mathbb {C}}}}\varphi (x)\overline{\psi (y)}K(|x-y|) \, \textrm{d}m(x) \, \textrm{d}m(y), \end{aligned}$$
(4.1)

where the kernel \(K=K_{V_\Lambda }\) (the two-point function for \(V_\Lambda \)) is given by

$$\begin{aligned} K(z)=-4\pi ^2\int _0^\infty \log _+ \left( \frac{r}{|z|}\right) k_\Lambda (r)\,r \,\textrm{d}r, \qquad z\in {{\mathbb {C}}}\setminus \{0\}. \end{aligned}$$

Juxtaposing (4.1) with the formula for the covariance on the Fourier side:

$$\begin{aligned} \textsf{Cov}\left[ V_\Lambda (\varphi ),V_\Lambda (\psi )\right] = \int _{{{\mathbb {C}}}} \widehat{\varphi }(\xi ) \, \overline{\widehat{\psi }(\xi )} \, \frac{\textrm{d}\rho _{\Lambda }(\xi )}{|\xi |^2} \end{aligned}$$

(see Theorem 5.8, Part I), we conclude that

$$\begin{aligned} {\widehat{K}}(\xi ) = |\xi |^{-2}\, h_{\Lambda }(\xi )\,, \end{aligned}$$
(4.2)

where \(h_\Lambda \) is the (radial) density of \(\rho _\Lambda \). The Fourier transform in (4.2) should be understood in the sense of distributions, or alternatively in the sense of Fourier transform acting on \(L^2({{\mathbb {C}}},m)\) functions.
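For a concrete check of (4.2), one may again use the model kernel \(k_\Lambda (t)=-e^{-\pi t^2}\) with \(c_\Lambda =1\) and \(h_\Lambda (\tau )=1-e^{-\pi \tau ^2}\) (our own choice, not taken from the paper): computing K from the formula of Lemma 4.1 by quadrature and taking its radial Fourier transform numerically reproduces \(h_\Lambda (\tau )/\tau ^2\).

```python
# Numerical check (ours) of (4.2) for the model kernel k(t) = -exp(-pi t^2),
# c = 1, for which the radial spectral density is h(tau) = 1 - exp(-pi tau^2):
# the radial Fourier transform of the kernel K of Lemma 4.1 should equal
# h(tau)/tau^2.
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

k = lambda r: -np.exp(-np.pi * r**2)

def K(t):
    # K(t) = -4*pi^2 * int_t^inf log(r/t) k(r) r dr  (Lemma 4.1; log_+ vanishes for r < t)
    val, _ = quad(lambda r: np.log(r / t) * k(r) * r, t, np.inf)
    return -4 * np.pi**2 * val

def K_hat(tau):
    # radial Fourier transform 2*pi*int_0^inf K(t) J0(2*pi*tau*t) t dt;
    # K is negligible beyond t = 8, so the range is truncated there
    val, _ = quad(lambda t: K(t) * j0(2 * np.pi * tau * t) * t, 0, 8.0, limit=200)
    return 2 * np.pi * val

for tau in [0.5, 1.0, 1.5]:
    print(tau, K_hat(tau), (1 - np.exp(-np.pi * tau**2)) / tau**2)
    # the two columns agree (approximately 2.18, 0.96, 0.44)
```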

For the remainder of this section, we will work in somewhat greater generality with observables \(\mu (V_\Lambda )\), where \(\mu \) is an admissible measure (recall Definition 2.1 above). In order to show that the variance of \(\mu (V_\Lambda )\) is well defined, we will approximate \(\mu \) by \(\mu _\varepsilon =\mu *\phi _\varepsilon \), where \(\phi _\varepsilon (z)\) are the Gaussians from (2.1).

Lemma 4.2

Assume that \(\mu \) and \(\nu \) are admissible complex-valued measures. Then we have \(\mu (V_\Lambda ), \nu (V_\Lambda )\in L^2(\Omega ,{\mathbb {P}})\), and

$$\begin{aligned} \textsf{Cov}\left[ \int _{{{\mathbb {C}}}}V_\Lambda (z) \, {\textrm{d}\mu }(z), \int _{{{\mathbb {C}}}}V_\Lambda (z) \, \textrm{d}\nu (z)\right] =\lim _{\varepsilon \rightarrow 0}\iint _{{{\mathbb {C}}}\times {{\mathbb {C}}}}K(|x-y|) \, {\textrm{d}\mu }_\varepsilon (x) \, \textrm{d}\bar{\nu }_\varepsilon (y). \end{aligned}$$

Proof

By polarization, it will suffice to show that \(\mu (V_\Lambda )=\int V_\Lambda {\textrm{d}\mu }\in L^2(\Omega ,{\mathbb {P}})\) and that

$$\begin{aligned} \textsf{Var}\left[ \mu (V_\Lambda )\right] =\lim _{\varepsilon \rightarrow 0}\iint _{{{\mathbb {C}}}\times {{\mathbb {C}}}}K(|x-y|) \, \textrm{d}\mu _\varepsilon (x) \, \textrm{d}\bar{\mu }_\varepsilon (y). \end{aligned}$$

Since \(\mu _\varepsilon \) is absolutely continuous with a density in \({\mathcal {S}}\), by (4.2) we have

$$\begin{aligned} \textsf{Var}\left[ \mu _\varepsilon (V_\Lambda )\right] =\int _{{{\mathbb {C}}}}|\widehat{\mu }_\varepsilon (\xi )|^2 \, \frac{\textrm{d}\rho _\Lambda (\xi )}{|\xi |^2} =\int _{{{\mathbb {C}}}}e^{-\varepsilon ^2|\xi |^2}|\widehat{\mu }(\xi )|^2 \, \frac{\textrm{d}\rho _\Lambda (\xi )}{|\xi |^2}. \end{aligned}$$

Notice that the integrand is monotonically increasing as \(\varepsilon \) decreases to 0, so it will be sufficient to establish that

$$\begin{aligned} \sup _{\varepsilon >0}\int _{{{\mathbb {C}}}} e^{-\varepsilon ^2|\xi |^2}|\widehat{\mu }(\xi )|^2 \, \frac{\textrm{d}\rho _\Lambda (\xi )}{|\xi |^2}<\infty . \end{aligned}$$
(4.3)

Indeed, it will then follow from monotone convergence that \(\textsf{Var}\left[ \mu _\varepsilon (V_\Lambda )\right] \) converges to \(\Vert \widehat{\mu }\Vert _{L^2(|\xi |^{-2}\rho _\Lambda )}^2\) as \(\varepsilon \rightarrow 0\), which implies that

$$\begin{aligned} \Vert \widehat{\mu }_\varepsilon -\widehat{\mu }_\delta \Vert _{L^2({{\mathbb {C}}},|\xi |^{-2}\rho _\Lambda )}^2 =\int _{{{\mathbb {C}}}}\Big (e^{-\varepsilon ^2|\xi |^2}+e^{-\delta ^2|\xi |^2} -2e^{-\frac{\varepsilon ^2+\delta ^2}{2}|\xi |^2}\Big ) |\widehat{\mu }(\xi )|^2 \, \frac{\textrm{d}\rho _\Lambda (\xi )}{|\xi |^2}\rightarrow 0 \end{aligned}$$

as \(\varepsilon ,\delta \rightarrow 0\). Hence, we see that \(\displaystyle \mu (V_\Lambda )=\lim _{\varepsilon \rightarrow 0} \, \mu _\varepsilon (V_\Lambda )\) in \(L^2(\Omega ,{\mathbb {P}})\), and the variance may be expressed as

$$\begin{aligned} \textsf{Var}[\mu (V_\Lambda )]=\lim _{\varepsilon \rightarrow 0}\iint _{{{\mathbb {C}}}\times {{\mathbb {C}}}} K(|x-y|) \, {\textrm{d}\mu }_\varepsilon (x) \, \textrm{d}\bar{\mu }_\varepsilon (y). \end{aligned}$$

In order to see why (4.3) holds, notice that by Lemma 4.1 we have

$$\begin{aligned} \textsf{Var}\left[ \mu _\varepsilon (V_\Lambda )\right]&=\iint _{{{\mathbb {C}}}\times {{\mathbb {C}}}}K(|x-y|) \, {\textrm{d}\mu }_\varepsilon (x) \, \textrm{d}\bar{\mu }_\varepsilon (y)\nonumber \\&=-4\pi ^2 \iint _{{{\mathbb {C}}}\times {{\mathbb {C}}}} \left( \int _{0}^\infty \log _+\frac{t}{|x-y|}k_\Lambda (t)\,t\,\textrm{d}t\right) {\textrm{d}\mu }_\varepsilon (x) \, \textrm{d}\bar{\mu }_\varepsilon (y). \end{aligned}$$
(4.4)

Note that the measure \(|\mu _\varepsilon |\) is absolutely continuous with density in \({\mathcal {S}}\), so we may integrate against \(|\mu _\varepsilon |\times |\mu _\varepsilon |\) to get

$$\begin{aligned}{} & {} \iint _{{{\mathbb {C}}}\times {{\mathbb {C}}}}\int _0^\infty \log _+\frac{t}{|x-y|}|k_\Lambda (t)|\,t\,\textrm{d}|\mu _\varepsilon |(x) \, \textrm{d}|\mu _\varepsilon |(y) \, \textrm{dt}\\{} & {} \quad \le \iint _{{{\mathbb {C}}}\times {{\mathbb {C}}}}\int _0^\infty \left( \big |\log |x-y|\big |+\big |\log t\big |\right) |k_\Lambda (t)|\,t\,\textrm{d}|\mu _\varepsilon |(x) \, \textrm{d}|\mu _\varepsilon |(y) \, \textrm{dt}\\{} & {} \quad \lesssim _\varepsilon \int _0^\infty \big (1+|\log t|\big )|k_\Lambda (t)|\,t\,\textrm{d}t<\infty . \end{aligned}$$

We may thus apply Fubini’s theorem to the right-hand side of (4.4) to get

$$\begin{aligned} \textsf{Var}\left[ \mu _\varepsilon (V_\Lambda )\right] = -4\pi ^2\int _{{{\mathbb {C}}}}\int _0^\infty t^2k_\Lambda (t)I_\varepsilon (t,x)\,\textrm{d}t \, \textrm{d}\mu _\varepsilon (x), \end{aligned}$$

where

$$\begin{aligned} I_\varepsilon (t,x) =\frac{1}{t}\int _{{\mathbb {C}}}\log _+\frac{t}{|x-y|}\,\textrm{d}{\bar{\mu }}_\varepsilon (y) =\frac{1}{t}\int _0^\infty \bar{\mu }_\varepsilon \left( {{\mathbb {D}}}(x,te^{-s})\right) \, \textrm{d}s, \end{aligned}$$

the last equality being a consequence of the “layer cake formula”; that is, integration with respect to the distribution function. Hence, by the triangle inequality, we get that

$$\begin{aligned} \textsf{Var}\left[ \mu _\varepsilon (V_\Lambda )\right] \le 4\pi ^2\int _{{{\mathbb {C}}}}\int _0^\infty t^2\,|k_\Lambda (t)|\, |I_\varepsilon (t,x)| \, \textrm{d}t\,\textrm{d}|\mu _\varepsilon |(x). \end{aligned}$$

By Claim 2.2, the measure \(\mu _\varepsilon \) is admissible, with admissibility constant independent of \(\varepsilon \). Therefore,

$$\begin{aligned} |I_\varepsilon (t,x)|\le \frac{1}{t}\int _0^t \frac{|\mu _\varepsilon \left( {{\mathbb {D}}}(x,r)\right) |}{r}\textrm{d}r\le C_\mu , \end{aligned}$$

and, as a consequence,

$$\begin{aligned} \textsf{Var}\left[ \mu _\varepsilon (V_\Lambda )\right] \le 4\pi ^2\, C_\mu |\mu _\varepsilon |({{\mathbb {C}}})\int _0^\infty t^2\,|k_\Lambda (t)| \, \textrm{d}t, \end{aligned}$$

which is readily seen to be bounded above independently of \(\varepsilon \) by use of the trivial bound \(|\mu *\phi _\varepsilon |({{\mathbb {C}}}) \le |\mu |({{\mathbb {C}}})\int _{{\mathbb {C}}}\phi _\varepsilon (z)\textrm{d}m(z)=|\mu |({{\mathbb {C}}})\). \(\square \)

Since in the end we prefer to think in terms of the principal value integral, we should check that both regularizations of the integral \(\displaystyle {\iint _{{{\mathbb {C}}}\times {{\mathbb {C}}}}K(|x-y|) \, \textrm{d}\mu (x) \, \textrm{d}{\bar{\mu }}(y)}\) give the same result.

Lemma 4.3

For any two admissible measures \(\mu \) and \(\nu \), we have that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\iint _{{{\mathbb {C}}}\times {{\mathbb {C}}}} K(|x-y|) \, {\textrm{d}\mu }_\varepsilon (x) \, \textrm{d}\bar{\nu }_\varepsilon (y)&=\lim _{\varepsilon \rightarrow 0}\iint _{{{\mathbb {C}}}\times {{\mathbb {C}}}} {1\hspace{-2.5pt}\textrm{l}}_{\{|x-y|>\varepsilon \}} K(|x-y|)\, \textrm{d}\mu (x) \, \textrm{d}\bar{\nu }(y) \\&\overset{\textsf{def}}{=}\mathsf{p.v.}\iint _{{{\mathbb {C}}}\times {{\mathbb {C}}}}K(|x-y|) \, {\textrm{d}\mu }(x) \, \textrm{d}\bar{\nu }(y). \end{aligned}$$

Proof

Again, by polarization, it suffices to check the condition for \(\mu =\nu \). Recall that the convolution of two centered Gaussians is again a centered Gaussian;

$$\begin{aligned} \phi _\varepsilon *\phi _\varepsilon =\phi _{\sqrt{2}\varepsilon }. \end{aligned}$$
(4.5)

Thinking of K as a function in \({{\mathbb {C}}}\) by the identification \(K(z)=K(|z|)\), we rewrite the regularized variance as

$$\begin{aligned} \iint _{{{\mathbb {C}}}\times {{\mathbb {C}}}} K(|x-y|)\, {\textrm{d}\mu }_\varepsilon (x) \, \textrm{d}\bar{\mu }_\varepsilon (y)&= \int _{{\mathbb {C}}}[K*\phi _\varepsilon *{\bar{\mu }}]\,[\phi _\varepsilon *\mu ] \, \textrm{d}m\\&= \int _{{\mathbb {C}}}[K*\phi _\varepsilon *{\bar{\mu }}]*\phi _\varepsilon \,{\textrm{d}\mu } \end{aligned}$$

where we have used the general distributional identity

$$\begin{aligned} \left( f, g*h\right) =\left( f*g, h\right) \end{aligned}$$

applied to \(f=K*\phi _\varepsilon \), \(g=\phi _\varepsilon \) and \(h=\mu \) to arrive at the last equality. Using the associativity of convolution along with the identity (4.5), we recognize this as

$$\begin{aligned} \int _{{\mathbb {C}}}[K*\phi _\varepsilon *{\bar{\mu }}]*\phi _\varepsilon \,{\textrm{d}\mu } =\int _{{{\mathbb {C}}}} \left( \int _{{{\mathbb {C}}}}K(|x-y|) \, \textrm{d}{\bar{\mu }}_{\sqrt{2}\varepsilon }(y)\right) {\textrm{d}\mu }(x). \end{aligned}$$

The quantity of interest is thus

$$\begin{aligned}{} & {} \int _{{{\mathbb {C}}}}\bigg (\int _{{{\mathbb {C}}}}K(|x-y|) \, \textrm{d}\bar{\mu }_{\sqrt{2}\varepsilon }(y)-\int _{|x-y|>\varepsilon } K(|x-y|) \, \textrm{d}{\bar{\mu }}(y)\bigg ) {\textrm{d}\mu }(x)\\{} & {} \quad =-4\pi ^2\int _{{{\mathbb {C}}}}\int _0^\infty k_\Lambda (t)t^2 I^\Delta _\varepsilon (t,x)\, \textrm{d}t \, \textrm{d}\mu (x) \end{aligned}$$

where

$$\begin{aligned} I^\Delta _\varepsilon (t,x)&=\frac{1}{t}\int _0^\infty \left[ \mu _{\sqrt{2}\varepsilon }\left( {{\mathbb {D}}}(x,t e^{-s})\right) -\mu \left( {{\mathbb {D}}}(x,te^{-s})\setminus {{\mathbb {D}}}(x,\varepsilon )\right) \right] \textrm{d}s\\&=\frac{1}{t} \int _0^t\left( \frac{\mu _{\sqrt{2}\varepsilon }\left( {{\mathbb {D}}}(x,r)\right) }{r} -\frac{\mu \left( {{\mathbb {D}}}(x,r)\setminus {{\mathbb {D}}}(x,\varepsilon )\right) }{r}\right) \textrm{d}r. \end{aligned}$$

By the admissibility assumption and by the fact that convolution with \(\phi _\varepsilon \) preserves admissibility (Claim 2.2), the integrand on the right-hand side is uniformly bounded, as

$$\begin{aligned} \bigg |\frac{\mu _{\sqrt{2}\varepsilon }\left( {{\mathbb {D}}}(x,r)\right) }{r} -\frac{\mu \left( {{\mathbb {D}}}(x,r)\setminus {{\mathbb {D}}}(x,\varepsilon )\right) }{r}\bigg | \le \frac{|\mu _{\sqrt{2}\varepsilon }({{\mathbb {D}}}(x,r))|}{r} + \frac{|\mu ({{\mathbb {D}}}(x,r))|}{r} \le 2 \, C_\mu \,. \end{aligned}$$

Therefore, we can apply the bounded convergence theorem for each fixed \(t>0\) and get

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}I^\Delta _\varepsilon (t,x) = 0 \end{aligned}$$

for \({\mathcal {H}}^1\)-a.e. x. Hence, in view of the condition that \(t^2 k_\Lambda (t)\in L^1({{\mathbb {R}}}_{\ge 0},\textrm{d}t)\), the claim follows from the dominated convergence theorem. \(\square \)

5 Proof of the Main Results

5.1 The Asymptotic Covariance Structure of the Electric Action

Recall that Theorem 1.3 asserts that if the two-point function \(k_\Lambda \) is radial and satisfies \((1+t^2)k_\Lambda (t)\in L^1({{\mathbb {R}}}_{\ge 0},\textrm{d}t)\), then for any weakly Ahlfors regular rectifiable curves \(\Gamma _1\) and \(\Gamma _2\) we have

$$\begin{aligned} \textsf{Cov}\left[ \int _{R\Gamma _1} V_\Lambda (z)\,\textrm{d}z,\int _{R\Gamma _2} V_\Lambda (z)\,\textrm{d}z\right] = R\left( C_\Lambda +o(1)\right) {\mathcal {L}}(\Gamma _1,\Gamma _2) \end{aligned}$$

as \(R\rightarrow \infty \), where \(\displaystyle {C_\Lambda =-8\pi ^2\int _0^\infty k_\Lambda (t)\, t^2\,\textrm{d}t}\).

Proof of Theorem 1.3

We will show that for any two admissible measures \(\mu \) and \(\nu \) such that the limit

$$\begin{aligned} \tau _\nu (x)\overset{\textsf{def}}{=}\lim _{\varepsilon \rightarrow 0}\frac{1}{\varepsilon } \int _0^\varepsilon \frac{\nu ({{\mathbb {D}}}(x,r))}{2r}\textrm{d}r \end{aligned}$$
(5.1)

exists \({\mathcal {H}}^1\)-a.e., we have

$$\begin{aligned} \textsf{Cov}\left[ \int _{{{\mathbb {C}}}} V_\Lambda (Rz)\,{\textrm{d}\mu }(z), \int _{{{\mathbb {C}}}} V_\Lambda (Rz)\, \textrm{d}\nu (z)\right] =\left( C_\Lambda +o(1)\right) \,R^{-1}\int _{{{\mathbb {C}}}}\overline{\tau _\nu (z)}\,{\textrm{d}\mu }(z). \end{aligned}$$
(5.2)

Theorem 1.3 will follow by combining (5.2) with Lemma 3.2, which states that (5.1) holds for the complex-valued measure \(\nu _\Gamma \), which was defined by \(\nu _\Gamma (f)=\int _\Gamma f\,\textrm{d}z\). This measure is clearly admissible, since \(\Gamma \) is assumed to be weakly Ahlfors regular. We note that

$$\begin{aligned} \int _{\Gamma _1\cap \Gamma _2}\overline{\tau _{\Gamma _2}(x)}\,\textrm{d}\nu _{\Gamma _1}(x) = \int _{\Gamma _1\cap \Gamma _2} \tau _{\Gamma _1}(x) \, \overline{\tau _{\Gamma _2}(x)} \, \textrm{d} {\mathcal {H}}^1(x)={\mathcal {L}}(\Gamma _1,\Gamma _2), \end{aligned}$$

see Remark 3.3.

Putting together the two formulas from Lemmas 4.2 and 4.3 for the regularized covariance, we have

$$\begin{aligned} \textsf{Cov}\left[ \int _{{{\mathbb {C}}}} V_\Lambda (R z) \, {\textrm{d}\mu }(z), \int _{{{\mathbb {C}}}} V_\Lambda ( Rz) \, \textrm{d}\nu (z)\right]&=\mathsf{p.v.}\iint _{{{\mathbb {C}}}\times {{\mathbb {C}}}}K(R|x-y|) \, {\textrm{d}\mu }(x) \, \textrm{d}\bar{\nu }(y)\\&=\lim _{\varepsilon \rightarrow 0}\iint _{{{\mathbb {C}}}\times {{\mathbb {C}}}}{1\hspace{-2.5pt}\textrm{l}}_{\{|x-y|>\varepsilon \}} K(R|x-y|) \, {\textrm{d}\mu }(x)\, \textrm{d}\bar{\nu }(y). \end{aligned}$$

For any \(\varepsilon >0\), we may write the truncated covariance integral as

$$\begin{aligned}{} & {} \iint _{{{\mathbb {C}}}\times {{\mathbb {C}}}}{1\hspace{-2.5pt}\textrm{l}}_{\{|x-y|>\varepsilon \}} K(R|x-y|)\, {\textrm{d}\mu }(x) \, \textrm{d}\bar{\nu }(y)\\{} & {} \quad =-4 \pi ^2\int _{{{\mathbb {C}}}}\int _{|x-y|>\varepsilon } \int _0^\infty \log _+\frac{t/R}{|x-y|}k_\Lambda (t)\,t \, \textrm{d}t\, {\textrm{d}\mu }(x) \, \textrm{d}\bar{\nu }(y). \end{aligned}$$

This integral is absolutely convergent, and an application of Fubini’s theorem gives that

$$\begin{aligned} \iint _{{{\mathbb {C}}}\times {{\mathbb {C}}}}{1\hspace{-2.5pt}\textrm{l}}_{\{|x-y|>\varepsilon \}} K(R|x-y|) \, {\textrm{d}\mu }(x) \, \textrm{d}\bar{\nu }(y) =\frac{1}{R}\int _{{{\mathbb {C}}}}I_{R,\varepsilon }(x) \, {\textrm{d}\mu }(x) \end{aligned}$$

where

$$\begin{aligned} I_{R,\varepsilon }(x)&=-4\pi ^2\int _0^\infty k_\Lambda (t)\,t^2 \left( \frac{R}{t}\int _{|x-y|>\varepsilon } \log _+\frac{t/R}{|x-y|}\, \textrm{d}\bar{\nu }(y)\right) \, \textrm{d}t\nonumber \\&\overset{\textsf{def}}{=}-4\pi ^2\int _0^\infty k_\Lambda (t)\,t^2 J_{R,t,\varepsilon }(x) \textrm{d}t. \end{aligned}$$
(5.3)

Integrating with respect to the distribution function, we arrive at

$$\begin{aligned} J_{R,t,\varepsilon }(x)&= \frac{R}{t} \int _{|x-y|>\varepsilon }\log _+\frac{t/R}{|x-y|}\textrm{d}\bar{\nu }(y)\\&=\frac{1}{t/R}\int _{0}^\infty \bar{\nu } \left( \left\{ y: \varepsilon<|x-y|<\frac{t}{R}e^{-s}\right\} \right) \textrm{d}s \\ {}&=\frac{1}{t/R}\int _{\varepsilon }^{t/R} \frac{\bar{\nu }\left( {{\mathbb {D}}}(x,r)\setminus {{\mathbb {D}}}(x,\varepsilon )\right) }{r}\textrm{d}r. \end{aligned}$$
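
As a quick consistency check of the computation with the distribution function above, the following sketch (not part of the proof) evaluates both sides numerically for a toy choice of \(\nu \), namely arc length on the real segment \([-1,1]\); the values of R, t, \(\varepsilon \) and x are arbitrary test values.

import numpy as np

# Check of the layer-cake identity behind J_{R,t,eps}(x), with nu = arc length on [-1, 1].
R, t, eps, x = 5.0, 2.0, 0.03, 0.2
A = t / R                                      # outer radius t/R of the logarithmic kernel

s = np.linspace(-1.0, 1.0, 400_001)
ds = s[1] - s[0]
d = np.abs(s - x)

# left-hand side: (R/t) * int_{|s-x|>eps} log_+( (t/R) / |s-x| ) dnu(s)
vals = np.zeros_like(d)
m = d > eps
vals[m] = np.maximum(np.log(A / d[m]), 0.0)
lhs = (1.0 / A) * np.sum(vals) * ds

# right-hand side: (R/t) * int_eps^{t/R} nu(D(x,r) \ D(x,eps)) / r dr
d_sorted = np.sort(d)
r = np.linspace(eps, A, 20_001)
dr = r[1] - r[0]
annulus = (np.searchsorted(d_sorted, r) - np.searchsorted(d_sorted, eps)) * ds
rhs = (1.0 / A) * np.sum(annulus / r) * dr

print(lhs, rhs)    # the two values should agree up to discretization error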

By the definition of admissible measures, we have

$$\begin{aligned} |J_{R,t,\varepsilon }(x)|\le C \end{aligned}$$

uniformly in x, R, t and \(\varepsilon \). What is more, the existence of \(\displaystyle \lim _{\varepsilon \rightarrow 0}J_{R,t,\varepsilon }(x)=J_{R,t,0}(x)\) for all x follows from admissibility, while the assumption (5.1) together with admissibility of \(\nu \) ensures that for any fixed \(t>0\), we have

$$\begin{aligned} J_{R,t,0}(x)=\frac{1}{t/R}\int _{0}^{t/R} \frac{\bar{\nu }\left( {{\mathbb {D}}}(x,r)\right) }{r} \textrm{d}r \xrightarrow {R\rightarrow \infty } 2\overline{\tau _\nu (x)} \end{aligned}$$

for \({\mathcal {H}}^1\)-a.e. \(x \in {{\mathbb {C}}}\). Hence, we may apply the dominated convergence theorem to the integral (5.3) to find

$$\begin{aligned} \lim _{R\rightarrow \infty } \lim _{\varepsilon \rightarrow 0}I_{R,\varepsilon }(x) = -8\pi ^2 \int _0^\infty k_\Lambda (t)\,t^2 \textrm{d}t\;\overline{\tau _\nu (x)} \overset{\textsf{def}}{=}C_\Lambda \overline{\tau _\nu (x)} \,. \end{aligned}$$

We also have for free the bound

$$\begin{aligned} |I_{R,\varepsilon }(x)|\le \int _0^\infty t^2|k_\Lambda (t)| \, |J_{R,t,\varepsilon }(x)|\textrm{d}t\le C\int _0^\infty t^2 |k_\Lambda (t)|\textrm{d}t \end{aligned}$$

for \({\mathcal {H}}^1\)-a.e. \(x\in {{\mathbb {C}}}\). If we decompose \(\mu \) as \(\mu =\mu ^+_{1} -\mu ^{-}_{1} +\textrm{i} \, \mu _2^+-\textrm{i} \, \mu _2^{-}\) where each \(\mu _i^\pm \) is a finite positive measure, the bounded convergence theorem applied to each of the four integrals then gives

$$\begin{aligned} \mathsf{p.v.}\iint _{{{\mathbb {C}}}\times {{\mathbb {C}}}} K(R|x-y|){\textrm{d}\mu }(x)\textrm{d}\bar{\nu }(y)&=\lim _{\varepsilon \rightarrow 0} \frac{1}{R}\int _{{{\mathbb {C}}}} I_{R,\varepsilon }(x) \, {\textrm{d}\mu }(x)\\&= \left( C_\Lambda +o(1)\right) R^{-1} \int _{{{\mathbb {C}}}} \overline{\tau _\nu (x)} \, {\textrm{d}\mu }(x) \end{aligned}$$

as \(R\rightarrow \infty \), as claimed. \(\square \)

Remark 5.1

The constant \(C_\Lambda \) in Theorem 1.3 is given by

$$\begin{aligned} C_\Lambda =-8\pi ^2\int _0^\infty k_\Lambda (t)t^2\,\textrm{d}t. \end{aligned}$$

From this formula, it is not immediately clear that it is positive, but another representation clarifies matters. Under the conditions of the theorem, the spectral measure \(\rho _\Lambda \) has a radial density \(h_\Lambda \), and we have

$$\begin{aligned} h_\Lambda =c_\Lambda +{\widehat{k}}_\Lambda , \end{aligned}$$

where

$$\begin{aligned} {\widehat{k}}_\Lambda (\xi )=\int _{{{\mathbb {C}}}}e^{-2\pi \textrm{i}\xi \cdot z}k_\Lambda (|z|)\,\textrm{d}m(z). \end{aligned}$$

Being the density of a positive measure, \(h_\Lambda \) is certainly positive. Moreover, the zeroth moment condition (1.5) gives that \(h_\Lambda (0)=0\). Since \((1+|z|)\,k_\Lambda (|z|)\) belongs to \(L^1({{\mathbb {C}}},\textrm{d}m)\), we also have \(h_\Lambda (|z|)\in C^1({{\mathbb {C}}})\). Then, by positivity, we find that \(h'_\Lambda (0)=0\) as well. Since moreover \(\Vert h_\Lambda \Vert _{L^\infty }\le |c_\Lambda | +\Vert k_\Lambda (|\cdot |)\Vert _{L^1({{\mathbb {C}}})}\), we have

$$\begin{aligned} \lim _{\tau \rightarrow 0}\frac{1}{\tau }h_\Lambda (\tau )=\lim _{\tau \rightarrow \infty }\frac{1}{\tau }h_\Lambda (\tau )=0, \end{aligned}$$

so integrating by parts we find that

$$\begin{aligned} \int _0^\infty \frac{h_\Lambda (\tau )}{\tau ^2} \,\textrm{d}\tau&=\int _0^\infty \frac{1}{\tau }\partial _\tau \left( \int _0^\infty \int _0^{2\pi } e^{2\pi \textrm{i}r\tau \cos \vartheta }k_\Lambda (r)r\,\textrm{d}r\textrm{d}\vartheta \right) \textrm{d}\tau \\ {}&=2\pi \textrm{i}\int _0^\infty \frac{1}{\tau }\int _0^\infty \int _0^{2\pi } \cos \vartheta e^{2\pi \textrm{i}r\tau \cos \vartheta }r^2 k_\Lambda (r)\,\textrm{d}r\textrm{d}\vartheta \textrm{d}\tau \\&=-4\pi ^2\int _0^\infty \int _0^\infty \frac{1}{\tau }J_1(2\pi r\tau ) r^2 k_\Lambda (r)\,\textrm{d}r\textrm{d}\tau , \end{aligned}$$

where the last equality follows from the identity

$$\begin{aligned} \int _0^{2\pi }\cos (\vartheta )e^{2\pi \textrm{i}r\tau \cos \vartheta }\textrm{d}\vartheta =2\pi \textrm{i}\,J_1(2\pi r\tau ). \end{aligned}$$

But \(\displaystyle {\int _0^\infty \frac{J_1(2\pi r\tau )}{\tau }\textrm{d}\tau =1}\), so an application of Fubini’s theorem gives that

$$\begin{aligned} \int _0^\infty \frac{h_\Lambda (\tau )}{\tau ^2}\,\textrm{d}\tau =-4\pi ^2\int _0^\infty r^2 k_\Lambda (r)\,\textrm{d}r. \end{aligned}$$

Since the left-hand side is clearly positive, it follows that \(C_\Lambda \) is positive as well.
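
For a concrete illustration of the identity just derived (not part of the argument), one may take the toy radial kernel \(k(r)=-e^{-\pi r^2}\) with intensity \(c=1\), so that \(h(0)=c+{\widehat{k}}(0)=0\); both sides of the identity then equal \(\pi \). In the sketch below the closed form of the planar Fourier transform of the Gaussian is used for h, and is itself spot-checked via the Hankel-transform formula.

import numpy as np
from scipy import integrate, special

# Toy kernel k(r) = -exp(-pi r^2) with c = 1; then hat{k}(xi) = -exp(-pi |xi|^2) and
# h(tau) = 1 - exp(-pi tau^2).  Both integrals below should equal pi.
k = lambda r: -np.exp(-np.pi * r**2)
h = lambda tau: 1.0 - np.exp(-np.pi * tau**2)

# spot-check the closed form of hat{k} at one frequency via the Hankel-transform formula
tau0 = 0.5
khat_num = 2 * np.pi * integrate.quad(
    lambda r: special.j0(2 * np.pi * r * tau0) * k(r) * r, 0, np.inf)[0]
print("hat{k}(0.5):", khat_num, "  closed form:", -np.exp(-np.pi * tau0**2))

lhs = integrate.quad(lambda tau: h(tau) / tau**2, 0, np.inf, limit=200)[0]
rhs = -4 * np.pi**2 * integrate.quad(lambda r: r**2 * k(r), 0, np.inf)[0]
print("int h/tau^2 =", lhs, "  -4*pi^2*int r^2 k =", rhs, "  pi =", np.pi)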

5.2 The Asymptotic Charge Fluctuations in Rectifiable Jordan Domains

Proof of Theorem 1.4

We fix a Jordan domain \({\mathcal {G}}\) with rectifiable boundary \(\Gamma =\partial {\mathcal {G}}\). Recall that Lemma 3.1 asserts that any rectifiable Jordan curve \(\Gamma \) is weakly Ahlfors regular. Moreover, we have

$$\begin{aligned} \textsf{Var}\big [n_\Lambda (R{\mathcal {G}})\big ] =\textsf{Var}\big [\frac{1}{2\textrm{i}}\int _{\partial (R{\mathcal {G}})}V_\Lambda (z) \, \textrm{d}z\big ] =\frac{1}{4}\textsf{Var}\big [{\mathcal {E}}_\Lambda (\partial (R{\mathcal {G}}))\big ]. \end{aligned}$$

As a consequence, an application of Theorem 1.3 with \(\Gamma _1=\Gamma _2=\partial {\mathcal {G}}\) gives

$$\begin{aligned} \textsf{Var}\big [n_\Lambda (R{\mathcal {G}})\big ]=\frac{1}{4} R\big (C_\Lambda +o(1)\big ) \, |\partial {\mathcal {G}}| \end{aligned}$$

as \(R\rightarrow \infty \), where \(C_\Lambda \) is the constant \(\displaystyle {-8\pi ^2\int _0^\infty k_\Lambda (t)\,t^2\,\textrm{d}t}\). This completes the proof. \(\square \)
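
Although not needed for the proof, the linear growth of the charge fluctuations is easy to observe numerically for the infinite Ginibre ensemble by sampling disk counts via Kostlan's theorem (Theorem 6.1 below). The sketch only illustrates the linear trend of the variance in R; the value of the limiting constant depends on the chosen normalization of the ensemble and is not asserted here. The radii and sample size are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)

def ginibre_disk_count(R, n_samples):
    # Kostlan (Theorem 6.1 below): the squared moduli have the law of independent Gamma(j, 1)
    jmax = int(R**2 + 12 * R + 50)       # indices j >> R^2 land outside the disk w.h.p.
    j = np.arange(1, jmax + 1)
    g = rng.gamma(shape=j, scale=1.0, size=(n_samples, jmax))
    return np.sum(g <= R**2, axis=1)     # n_Lambda(R*D) for each sample

for R in [10, 20, 40, 60]:
    counts = ginibre_disk_count(R, 2000)
    v = counts.var(ddof=1)
    print(R, v, v / R)                   # the ratio v / R should stay roughly constant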

5.3 Logarithmic Asymptotics for the Work

The purpose of this section is to prove Theorem 1.6. That is, we need to obtain the asymptotics of the variance of the work

$$\begin{aligned} {\mathcal {W}}_\Lambda (R\Gamma )={\text {Re}}\int _{R\Gamma } V_\Lambda (z)\,{\textrm{d}z}, \end{aligned}$$

of \(V_\Lambda \) along \(R\Gamma \). We recall that for this result, we assume that \(\Lambda \) is an invariant point process subject to the stronger moment condition \((1+t^3)k_\Lambda (t)\in L^1({{\mathbb {R}}}_{\ge 0},\textrm{d}t)\) for the two-point function \(k_\Lambda \) of \(\Lambda \).

Below, we will express the work \({\mathcal {W}}_\Lambda (\Gamma )\) in terms of the increments

$$\begin{aligned} \mathsf{\Delta }_a\Pi _\Lambda (z){\mathop {=}\limits ^\textrm{def}} \Pi _\Lambda (z+a)-\Pi _\Lambda (z), \end{aligned}$$

where \(\Pi _\Lambda \) is the random potential for \(\Lambda \), i.e. the solution to \(\Delta \Pi _\Lambda =2\pi (n_\Lambda -c_\Lambda \hspace{1pt}m)\), defined in Sect. 6 in Part I. The potential is given by \(\Pi _\Lambda (z) {\mathop {=}\limits ^\textrm{def}} \log |F_\Lambda (z)| - \tfrac{1}{2}\, \pi c_\Lambda |z|^2\), where \(F_\Lambda \) is the random entire function

$$\begin{aligned} F_\Lambda (z) = \exp \bigl [ -\Psi _1(\infty )z - \frac{1}{2}\, \Psi _2(\infty )z^2 \bigr ]\, \prod _{|\lambda |<1} (\lambda -z) \prod _{|\lambda |\ge 1} \left( \frac{\lambda -z}{\lambda }\, \exp \Bigl [\,\frac{z}{\lambda }\, + \frac{z^2}{2\lambda ^2}\, \Bigr ]\right) , \end{aligned}$$

where for \(j=1,2\),

$$\begin{aligned} \Psi _j(\infty )=\lim _{R\rightarrow \infty }\sum _{1\le |\lambda |\le R} \frac{1}{\lambda ^j} \end{aligned}$$

with convergence in \(L^2(\Omega ,{\mathbb {P}})\). For \(j=1\), this limit exists under the spectral assumption (1.1); see Lemma 3.3 in Part I. It is worth mentioning that a straightforward computation (see Theorem 6.2 in Part I) shows that

$$\begin{aligned} \mathsf{\Delta }_a\Pi (0)=\lim _{R\rightarrow \infty }\sum _{|\lambda |\le R} \big (\log |a-\lambda |-\log |\lambda |\big )-\tfrac{1}{2} \pi c_\Lambda |a|^2, \end{aligned}$$

where the convergence is locally uniform in a.

Before we proceed, we derive a useful formula for the variance of \(\mathsf{\Delta }_a\Pi (z)\). By Theorem 6.2 in Part I, \(\Pi _\Lambda \) has stationary increments, so it suffices to analyze \(\textsf{Var}\big [\mathsf{\Delta }_a \Pi _\Lambda (0)\big ]\).

Claim 5.2

For any \(a\in {{\mathbb {C}}}\setminus \{0\}\), we have

$$\begin{aligned} \textsf{Var}\big [\mathsf{\Delta }_a\Pi _\Lambda (0)\big ] =\frac{1}{2\pi }\int _{{{\mathbb {C}}}}K(|s|)\Phi (a/s) \, {\textrm{d}}m(s) \end{aligned}$$

where

$$\begin{aligned} \Phi (a/s)\overset{\textsf{def}}{=}2\log |s|-\log |s-a|-\log |s+a|= -\log \left| 1-\frac{a^2}{s^2}\right| \end{aligned}$$
(5.4)

and where \(K(s)=K(|s|)\) is the same as above (defined in Lemma 4.1).

Proof

Let

$$\begin{aligned} \varphi _a(z)=\frac{1}{\pi }\Big (\frac{1}{{\bar{z}}-{\bar{a}}} -\frac{1}{{\bar{z}}}\Big ) \, \end{aligned}$$

and note that \(\partial \varphi _a=\delta _a-\delta _0\). As \(\partial \, \Pi _\Lambda = \frac{1}{2} V_\Lambda \), we get

$$\begin{aligned} \mathsf{\Delta }_a\Pi _\Lambda (0)=\frac{1}{2} V_\Lambda (\varphi _a) \,. \end{aligned}$$
(5.5)

By Remark 5.1, the spectral measure \(\rho _{\Lambda }\) is absolutely continuous with respect to Lebesgue measure and has a \(C^1\)-regular bounded (radial) density \(h_\Lambda \). Furthermore, a simple computation shows that

$$\begin{aligned} \widehat{\varphi }_a(\xi ) =\big (1-e^{-2\pi \textrm{i}a\cdot \xi }\big ) \, \frac{1}{\textrm{i}\overline{\xi }} \,, \end{aligned}$$

in the sense of distributions, and Theorem 5.8 from Part I implies that

$$\begin{aligned} \textsf{Var}\left[ V_\Lambda (\varphi _a)\right] = \int _{{{\mathbb {C}}}} \big |\widehat{\varphi }_a(\xi )\big |^2 \, \frac{{\textrm{d}}\rho _\Lambda (\xi )}{|\xi |^2} = \int _{{{\mathbb {C}}}} \big |\widehat{\varphi }_a(\xi )\big |^2 \, \frac{h_\Lambda (|\xi |)}{|\xi |^2} \,\textrm{d}m(\xi ) < \infty \,. \end{aligned}$$
(5.6)

In particular, by (5.5) we see that \(\mathsf{\Delta }_a\Pi _\Lambda (0) \in L^2(\Omega ,{\mathbb {P}})\) and it remains to prove the formula for \(\textsf{Var}\big [\mathsf{\Delta }_a\Pi _\Lambda (0)\big ]\). By rotational invariance of \(\Lambda \), we may assume that \(a>0\). Recall that the function K is the (distributional) Fourier transform of \(|\xi |^{-2} h_\Lambda \), the spectral measure of \(V_\Lambda \), see (4.2). As both \(|\widehat{\varphi }_a(\xi )|^2\) and \(|\xi |^{-2} h_\Lambda \) are in \(L^2({{\mathbb {C}}},m)\), we may apply the Plancherel identity to (5.6) and get that

$$\begin{aligned} \textsf{Var}\left[ V_\Lambda (\varphi _a)\right] = \int _{{{\mathbb {C}}}}\big (\varphi _a *\overline{\varphi }_a \big ) (s) \, K(|s|) \, {\textrm{d}}m(s) \,. \end{aligned}$$
(5.7)

To compute the convolution \(\varphi _a*{\overline{\varphi }}_a\), observe that

$$\begin{aligned} \big (\varphi _a *\overline{\varphi }_a \big ) (s) = \frac{1}{\pi ^2}\int _{{{\mathbb {C}}}} \Big (\frac{1}{{\bar{z}}-a}-\frac{1}{{\bar{z}}}\Big ) \Big (\frac{1}{z-s-a}-\frac{1}{z-s}\Big ) \, \textrm{d}m(z)\,, \end{aligned}$$

and by expanding the product we see that

$$\begin{aligned} \big (\varphi _a *\overline{\varphi }_a \big ) (s)= & {} \lim _{R\rightarrow \infty }\frac{1}{\pi ^2}\int _{|z|\le R}\Big (\frac{1}{({\bar{z}}-a)(z-a-s)} -\frac{1}{({\bar{z}}-a)(z-s)}\Big ) {\textrm{d}}m(z) \nonumber \\{} & {} +\lim _{R\rightarrow \infty } \frac{1}{\pi ^2}\int _{|z|\le R}\Big (\frac{1}{{\bar{z}}(z-s)} -\frac{1}{{\bar{z}} (z-a-s)}\Big ) {\textrm{d}}m(z) \,. \end{aligned}$$
(5.8)

Now, for any \(\alpha >0\) and \(\beta \in {{\mathbb {C}}}\) we have that

$$\begin{aligned} \int _{|z|\le R}\frac{{\textrm{d}}m(z)}{({\bar{z}}-\alpha )(z-\beta )}&=2\int _{|z|\le R}{\bar{\partial }}\log |z-\alpha | \, \frac{{\textrm{d}}m(z)}{z-\beta } \\&=2\pi \log |\beta -\alpha |-\frac{1}{\textrm{i}} \int _{|z|=R}\frac{\log |z-\alpha |}{z-\beta }\, {\textrm{d}}z\\&=2\pi \log |\beta -\alpha |-2\pi \log R+o(1) \end{aligned}$$

as \(R\rightarrow \infty \). Combining this with (5.8), we find that

$$\begin{aligned} \big (\varphi _a *\overline{\varphi }_a \big ) (s) = \frac{2}{\pi }\big (2\log |s|-\log |s-a|-\log |s+a|\big ) {\mathop {=}\limits ^{(5.4)}} \frac{2}{\pi } \Phi (a/s) \,. \end{aligned}$$

Plugging back into (5.7) gives

$$\begin{aligned} \textsf{Var}\left[ V_\Lambda (\varphi _a)\right] = \frac{2}{\pi }\int _{{{\mathbb {C}}}} K(|s|) \, \Phi (a/s) \, {\textrm{d}}m(s) \end{aligned}$$

which, together with (5.5), gives the claim. \(\square \)
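
The planar integral identity used in the middle of the proof can be probed by Monte Carlo; the sketch below (an illustration only) compares a sample average over the disk \(\{|z|\le R\}\) with \(2\pi \log |\beta -\alpha |-2\pi \log R\) for arbitrary test values of \(\alpha \), \(\beta \) and R. Since the integrand has two integrable singularities, only rough agreement should be expected.

import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo check of
#   int_{|z|<=R} dm(z) / ((conj(z) - alpha)(z - beta)) = 2*pi*log|beta - alpha| - 2*pi*log R + o(1)
alpha, beta, R, N = 1.0, 2.0 + 1.0j, 50.0, 4_000_000

u = rng.uniform(size=N)
z = R * np.sqrt(u) * np.exp(2j * np.pi * rng.uniform(size=N))   # uniform points in |z| <= R
f = 1.0 / ((np.conj(z) - alpha) * (z - beta))
estimate = np.pi * R**2 * f.mean()                               # area times average value
target = 2 * np.pi * (np.log(abs(beta - alpha)) - np.log(R))
print(estimate, target)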

Theorem 5.3

Assume that the truncated two-point function \(k_\Lambda \) satisfies \((1+t^3)k_\Lambda (t)\in L^1({{\mathbb {R}}}_{\ge 0}, {\textrm{d}}t)\). Then

$$\begin{aligned} \textsf{Var}\left[ \mathsf{\Delta }_a\Pi _\Lambda (0)\right] =\left( D_\Lambda +o(1)\right) \log |a|, \end{aligned}$$

as \(a\rightarrow \infty \), where

$$\begin{aligned} D_\Lambda =2\pi ^2\int _{0}^\infty t^3 k_\Lambda (t) \, {\textrm{d}}t. \end{aligned}$$

Proof

By Claim 5.2, we have

$$\begin{aligned} \textsf{Var}\left[ \mathsf{\Delta }_a\Pi _\Lambda (z)\right] =\frac{1}{2\pi }\int _{{{\mathbb {C}}}}K(|s|)\Phi (a/s) \, {\textrm{d}}m(s) \end{aligned}$$
(5.9)

where \(\Phi \) is defined by (5.4) and

$$\begin{aligned} K(s)=-4\pi ^2\int _{|s|}^\infty \log \Big (\frac{t}{|s|}\Big )k_\Lambda (t)\, t \, {\textrm{d}}t. \end{aligned}$$

We first note that

$$\begin{aligned} \frac{1}{2\pi }\int _0^{2\pi }\Phi \Big (\frac{a}{re^{\textrm{i}\theta }}\Big ){\textrm{d}}\theta&=2\log r - \frac{1}{2\pi }\int _0^{2\pi } \Big (\log |a-re^{\textrm{i}\theta }| +\log |a+re^{\textrm{i}\theta }|\Big ){\textrm{d}}\theta \\&=-2\log _+\Big (\frac{a}{r}\Big ). \end{aligned}$$
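
This angular average can be verified numerically; the following sketch compares the left-hand side, computed by quadrature, with \(-2\log _+(a/r)\) for a few test values of r (the value of a is an arbitrary choice).

import numpy as np
from scipy import integrate

# Check of (1/2pi) int_0^{2pi} Phi(a/(r e^{i*theta})) dtheta = -2*log_+(a/r),
# where Phi(a/s) = 2 log|s| - log|s - a| - log|s + a|.
def avg_phi(a, r):
    integrand = lambda th: (2 * np.log(r)
                            - np.log(abs(r * np.exp(1j * th) - a))
                            - np.log(abs(r * np.exp(1j * th) + a)))
    return integrate.quad(integrand, 0, 2 * np.pi, limit=200)[0] / (2 * np.pi)

a = 3.0
for r in [0.5, 1.0, 2.0, 3.5, 10.0]:
    print(r, avg_phi(a, r), -2 * max(np.log(a / r), 0.0))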

Incorporating this formula into (5.9), we get

$$\begin{aligned} \textsf{Var}\left[ \mathsf{\Delta }_a\Pi _\Lambda (0)\right]&=\int _0^\infty K(r) \left[ \frac{1}{2\pi }\int _0^{2\pi }\Phi \Big (\frac{a}{re^{\textrm{i}\theta }}\Big ){\textrm{d}}\theta \right] r \, {\textrm{d}}r\\&=-2\int _0^\infty K(r)\log _+\Big (\frac{a}{r}\Big )r \, {\textrm{d}}r\\&=8\pi ^2\int _0^\infty \int _0^\infty \left[ \int _{r}^\infty \log \Big (\frac{t}{r}\Big )k_\Lambda (t)t \, {\textrm{d}}t\right] \log _+\Big (\frac{a}{r}\Big )r \, {\textrm{d}}r\\&=8\pi ^2\int _0^\infty k_\Lambda (t)t \left[ \int _0^t\log \Big (\frac{t}{r}\Big ) \log _{+}\Big (\frac{a}{r}\Big )r \, {\textrm{d}}r\right] {\textrm{d}}t. \end{aligned}$$

The inner integral on the right-hand side simplifies to

$$\begin{aligned} \int _0^t\log \Big (\frac{t}{r}\Big )\log _+\Big (\frac{a}{r}\Big )r \, {\textrm{d}}r =t^2\int _0^1 \log \Big (\frac{1}{u}\Big )\log _+\Big (\frac{a}{tu}\Big )u \, {\textrm{d}}u, \end{aligned}$$

whence

$$\begin{aligned} \textsf{Var}\left[ \mathsf{\Delta }_a\Pi _\Lambda (0)\right]&= 8\pi ^2\int _0^\infty k_\Lambda (t)t^3 \left[ \int _0^1 \log \Big (\frac{1}{u}\Big ) \log _{+}\Big (\frac{a}{tu}\Big )u \, {\textrm{d}}u\right] {\textrm{d}}t\\&=8\pi ^2\left( \int _0^a +\int _a^\infty \right) k_\Lambda (t)t^3 \left[ \int _0^1 \log \Big (\frac{1}{u}\Big ) \log _{+}\Big (\frac{a}{tu}\Big )u \, {\textrm{d}}u\right] {\textrm{d}}t\\&\overset{\textsf{def}}{=}8\pi ^2\big (J_1(a)+J_2(a)\big ) \, . \end{aligned}$$

By our assumption \(\displaystyle {\int _0^\infty |k_\Lambda (t)|\,t^3 {\textrm{d}}t<\infty }\),

$$\begin{aligned} J_1&=\int _0^a k_\Lambda (t)t^3 \left[ \int _0^1 \log \Big (\frac{1}{u}\Big ) \left( \log \Big (\frac{a}{t}\Big )+\log \Big (\frac{1}{u}\Big )\right) u \, {\textrm{d}}u\right] {\textrm{d}}t\\&=C_1\int _0^\infty k_\Lambda (t)t^3\log \Big (\frac{a}{t}\Big ){\textrm{d}}t+C_2+o(1) \end{aligned}$$

as \(a\rightarrow \infty \), where \(\displaystyle {C_1=\int _0^1\log (1/u)u \, {\textrm{d}}u=\tfrac{1}{4}}\) and

$$\begin{aligned} C_2=\left( \int _0^\infty k_\Lambda (t)t^3\, {\textrm{d}}t\right) \cdot \left( \int _0^1\log ^2\Big (\frac{1}{u}\Big )u \, {\textrm{d}}u\right) . \end{aligned}$$
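
The elementary identities used here are easy to confirm numerically; the sketch below checks the substitution \(r=tu\) for arbitrary test values of t and a, as well as the constants \(C_1=\int _0^1 u\log (1/u)\,\textrm{d}u=\tfrac{1}{4}\) and \(\int _0^1 u\log ^2(1/u)\,\textrm{d}u=\tfrac{1}{4}\).

import numpy as np
from scipy import integrate

logp = lambda x: np.maximum(np.log(x), 0.0)

# (i) substitution r = t*u in the inner integral
t, a = 1.7, 25.0
lhs = integrate.quad(lambda r: np.log(t / r) * logp(a / r) * r, 0, t)[0]
rhs = t**2 * integrate.quad(lambda u: np.log(1 / u) * logp(a / (t * u)) * u, 0, 1)[0]
print(lhs, rhs)

# (ii) the two elementary constants
c1 = integrate.quad(lambda u: u * np.log(1 / u), 0, 1)[0]
c2 = integrate.quad(lambda u: u * np.log(1 / u) ** 2, 0, 1)[0]
print(c1, c2)        # both equal 1/4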

Moreover, we note that

$$\begin{aligned} \int _0^\infty k_\Lambda (t) t^3\log \Big (\frac{a}{t}\Big ){\textrm{d}}t =\log a\left[ \int _0^\infty k_\Lambda (t) \, t^3 \, \frac{\log a-\log t}{\log a} {1\hspace{-2.5pt}\textrm{l}}_{[0,a]}(t) \, {\textrm{d}}t\right] , \end{aligned}$$

and the expression in brackets converges to \(\displaystyle {\int _0^\infty k_\Lambda (t)t^3\, {\textrm{d}}t}\) as \(a\rightarrow \infty \) by the dominated convergence theorem. Hence, the integral \(J_1(a)\) satisfies

$$\begin{aligned} J_1(a) =\Big (\frac{1}{4}\int _0^\infty k_\Lambda (t)t^3 \, {\textrm{d}}t\Big )\log a + O(1) \end{aligned}$$

as \(a\rightarrow \infty \). For the second integral \(J_2(a)\), we have

$$\begin{aligned} J_2(a)&=\int _a^\infty k_\Lambda (t) \, t^3 \left[ \int _0^{a/t}\log \Big (\frac{1}{u}\Big )\cdot \log \Big (\frac{a}{tu}\Big )u \, {\textrm{d}}u\right] {\textrm{d}}t\\&=\int _a^\infty k_\Lambda (t)t \left[ a^2\int _0^{1}\log \Big (\frac{t}{av}\Big )\cdot \log \Big (\frac{1}{v}\Big )v \, {\textrm{d}}v\right] {\textrm{d}}t\\&=a^2\int _a^\infty k_\Lambda (t)t\left[ C_3\log \Big (\frac{t}{a}\Big ) +C_4\right] {\textrm{d}}t \, , \end{aligned}$$

for some explicit constants \(C_3\) and \(C_4\). Furthermore, we have the bound

$$\begin{aligned} \Big |a^2\int _a^\infty k_\Lambda (t) \, t \log \Big (\frac{a}{t}\Big ) \, {\textrm{d}}t\Big |&\le a^2\int _a^\infty |k_\Lambda (t)| \, t\, \log t \, {\textrm{d}}t\\&=a^2\int _a^\infty |k_\Lambda (t)| \, \frac{t^3\log t}{t^2} \, {\textrm{d}}t \lesssim \log a\int _a^\infty |k_\Lambda (t)| \, t^3\, {\textrm{d}}t = o\big (\log a\big ) \end{aligned}$$

as \(a\rightarrow \infty \). Combining the asymptotics for \(J_1(a)\) and \(J_2(a)\) with the identity \(\textsf{Var}\left[ \mathsf{\Delta }_a\Pi _\Lambda (0)\right] =8\pi ^2\big (J_1(a)+J_2(a)\big )\), we find that

$$\begin{aligned} \textsf{Var}\left[ \mathsf{\Delta }_a\Pi _\Lambda (0)\right] =\left( 2\pi ^2\int _0^\infty k_\Lambda (t)t^3 \, {\textrm{d}}t+o(1)\right) \log a \end{aligned}$$

as \(a\rightarrow \infty \), which is what we wanted. \(\square \)
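
The logarithmic rate obtained in the proof is also visible numerically. The sketch below (an illustration only) evaluates the double integral with the toy weight \(w(t)=e^{-t}\) standing in for \(k_\Lambda \) (chosen purely for integrability) and confirms that, after division by \(\log a\), it approaches \(\tfrac{1}{4}\int _0^\infty w(t)\,t^3\,\textrm{d}t\) as a grows.

import numpy as np
from scipy import integrate

w = lambda t: np.exp(-t)
logp = lambda x: np.maximum(np.log(x), 0.0)

def inner(a, t):
    return integrate.quad(lambda u: np.log(1 / u) * logp(a / (t * u)) * u, 0, 1)[0]

def F(a):
    return integrate.quad(lambda t: w(t) * t**3 * inner(a, t), 0, np.inf, limit=200)[0]

limit = 0.25 * integrate.quad(lambda t: w(t) * t**3, 0, np.inf)[0]   # = (1/4) * 3! = 1.5
for a in [10.0, 100.0, 1000.0, 10000.0]:
    print(a, F(a) / np.log(a), "   limiting constant:", limit)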

Proof of Theorem 1.6

Assume without loss of generality that \(\Gamma \) starts at the origin and ends at \(a>0\), and let \(\gamma :I\rightarrow \Gamma \) denote a parametrization of \(\Gamma \). Notice first that \({\mathbb {P}}\)-a.s., no point of \(\Lambda \) lies exactly on \(R\Gamma \). Hence, for each \(\lambda \in \Lambda \), we may choose a branch of the logarithm such that \(\log (\gamma (t)-\lambda )^{-1}\) is continuous on I, and we find that

$$\begin{aligned} {\text {Re}}\int _{R\Gamma }\frac{{\textrm{d}z}}{z-\lambda } =\log |Ra-\lambda |-\log |\lambda |. \end{aligned}$$

As a consequence, we find that (recall the formula for \(\mathsf{\Delta }_a\Pi _\Lambda (0)\) displayed above)

$$\begin{aligned} {\mathcal {W}}_\Lambda (R\Gamma )=\lim _{S\rightarrow \infty }\sum _{|\lambda |\le S} \left( \log |Ra-\lambda |-\log |\lambda |\right) -\tfrac{1}{2} \pi c_\Lambda (Ra)^2 = \mathsf{\Delta }_{Ra} \Pi _\Lambda (0), \end{aligned}$$

where the convergence is in the sense of \(L^2(\Omega ,{\mathbb {P}})\) (this was justified in Sect. 6, Part I). Hence, by applying Theorem 5.3, we find that

$$\begin{aligned} \textsf{Var}\left[ {\mathcal {W}}_\Lambda (R\Gamma )\right] =\textsf{Var}\left[ \mathsf{\Delta }_{Ra} \Pi _\Lambda (0)\right]&=\left( D_\Lambda +o(1)\right) \log (Ra)\\&=\left( D_\Lambda +o(1)\right) \log R. \end{aligned}$$

This completes the proof. \(\square \)

6 Rectifiable Jordan Arcs with Large Variance

6.1 Nested Disks with Large Charge Fluctuations

In this section, we specialize to the case when \(\Lambda \) is the infinite Ginibre point process. Denote by \(n_\Lambda \) the corresponding point count measure. We will show that for any fixed \(\varepsilon >0\) there exists a rectifiable curve \({\mathcal {C}}_\varepsilon \) such that

$$\begin{aligned} \textsf{Var}\left[ \frac{1}{2\pi \textrm{i}}\int _{R{\mathcal {C}}_\varepsilon } V_\Lambda (z)\,{\textrm{d}z}\right] \gtrsim R^{2-\varepsilon }. \end{aligned}$$
(6.1)

Let \(\ell _k = k^{-1-\varepsilon }\). To describe the desired curve \({\mathcal {C}}_\varepsilon \), we begin with a concatenation of circles (all with positive orientation) \(\{|z| = \ell _k\}\), and add segments \(\{\textrm{i} t \mid \ell _{k+1} \le t \le \ell _k \}\) which connect consecutive circles along the imaginary axis. The curve \({\mathcal {C}}_\varepsilon \) is neither simple nor closed (in the upcoming section, we will deform \({\mathcal {C}}_\varepsilon \) into a simple Jordan arc), but since \((\ell _k)\) is a summable sequence it is rectifiable. By the argument principle,

$$\begin{aligned} \frac{1}{2\pi \textrm{i}} \int _{R{\mathcal {C}}_\varepsilon } V_\Lambda (z)\,{\textrm{d}z} = \sum _{k=1}^{\infty } n_\Lambda ( R\ell _k \,{{\mathbb {D}}})-\int _{0}^R V_\Lambda (\textrm{i}w) \, \textrm{d}w. \end{aligned}$$

Since the variance of the last term on the right-hand side is of order O(R) (for instance, by Theorem 1.3), the lower bound (6.1) will follow once we show that

$$\begin{aligned} \textsf{Var}\left[ \sum _{k=1}^{\infty } n_\Lambda ( R\ell _k \, {{\mathbb {D}}})\right] \gtrsim R^{2-\varepsilon }, \end{aligned}$$
(6.2)

which we will do by applying Kostlan’s theorem, a result which is specific to radially symmetric determinantal point processes such as the infinite Ginibre ensemble.

Theorem 6.1

([7, Theorem 4.7.1]) Let \(\{\,|\lambda _j|: \lambda _j \in \Lambda \}\) be the set of absolute values of the Ginibre process ordered by non-decreasing modulus. Then \(|\lambda _j|\) are independent with

$$\begin{aligned} |\lambda _j|^2 \sim \textsf {Gamma} (j,1). \end{aligned}$$

Here, \(\textsf {Gamma} (j,1)\) denotes the standard Gamma distribution with density given by \(f_j(x)=\frac{1}{\Gamma (j)}x^{j-1}e^{-x}\), \(x>0\). We use Theorem 6.1 to prove (6.2). Indeed,

$$\begin{aligned} \sum _{k=1}^{\infty } n_\Lambda ( R\ell _k \, {{\mathbb {D}}}) = \sum _{k=1}^{\infty } \left( \sum _{j=1}^{\infty } {1\hspace{-2.5pt}\textrm{l}}_{\{|\lambda _j|^2 \le (R \ell _k)^2\}}\right) = \sum _{j=1}^{\infty }\left( \sum _{k=1}^{\infty } {1\hspace{-2.5pt}\textrm{l}}_{\{|\lambda _j|^2 \le (R \ell _k)^2 \}}\right) \end{aligned}$$

and since \(\{|\lambda _j|^2\}_{j\ge 1}\) are independent we get the lower bound

$$\begin{aligned} \textsf{Var}\left[ \sum _{k=1}^{\infty } n_\Lambda (R\ell _k \, {{\mathbb {D}}})\right] \ge \textsf{Var}\left[ \sum _{k=1}^{\infty } {1\hspace{-2.5pt}\textrm{l}}_{\{Z\le (R \ell _k)^2 \}}\right] \end{aligned}$$

where \(Z=|\lambda _1|^2 \sim \textsf {exp} (1)\overset{\textsf{def}}{=}\textsf{Gamma}(1,1)\). In view of the above, the lower bound (6.2) follows from the following simple claim.

Claim 6.2

If \(Z\sim \textsf {exp} (1)\) then

$$\begin{aligned} \textsf{Var}\left[ \sum _{k=1}^{\infty } {1\hspace{-2.5pt}\textrm{l}}_{\{Z \le (R \ell _k)^2 \} }\right] \gtrsim R^{2-\varepsilon }. \end{aligned}$$

Proof

Denote by

$$\begin{aligned} Y_R = \sum _{k=1}^{\infty } {1\hspace{-2.5pt}\textrm{l}}_{\{Z \le (R \ell _k)^2 \} }. \end{aligned}$$

Since \({\mathbb {P}}(Z\ge t) = \exp (-t)\), we can bound the expectation of \(Y_R\) as

$$\begin{aligned} {\mathbb {E}}[Y_R] = \sum _{k=1}^{\infty } {\mathbb {P}}\left( Z \le (R \ell _k)^2 \right)&= \sum _{k=1}^{\infty }(1-e^{-(R\ell _k)^2}) \\&\le M + R^2\sum _{k=M}^{\infty }\ell _k^2\le M +3R^2 M^{-1-2\varepsilon } \end{aligned}$$

for all \(M\ge 1\). By choosing \(M=\lfloor R^{1/(1+\varepsilon )}\rfloor \) we obtain

$$\begin{aligned} {\mathbb {E}}[Y_R] \le 5 R^{1/(1+\varepsilon )}. \end{aligned}$$

Furthermore, we have the inclusion of events

$$\begin{aligned} \left\{ Y_R \ge 6 R^{1/(1+\varepsilon )} \right\}&\supset \left\{ Z \le \left( R\ell _{\lfloor 6R^{1/(1+\varepsilon )}\rfloor }\right) ^2\right\} \supset \left\{ Z \le 6^{-2(1+\varepsilon )} \right\} \supset \left\{ Z \le 0.01\right\} \,. \end{aligned}$$

Since \(Z\sim \textsf {exp} (1)\), the latter event has strictly positive probability, and combining all together we get

$$\begin{aligned} \textsf{Var}[Y_R]&= {\mathbb {E}}[(Y_R - {\mathbb {E}}[Y_R])^2] \\&\gtrsim R^{2/(1+\varepsilon )} \, {\mathbb {P}}(Y_R \ge 6 R^{1/(1+\varepsilon )})\ge R^{2/(1+\varepsilon )} \, {\mathbb {P}}(Z \le 0.01) \end{aligned}$$

as desired. \(\square \)
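
Claim 6.2 is also easy to illustrate by simulation: since \(\ell _k=k^{-1-\varepsilon }\) is decreasing, the sum of indicators has the closed form \(Y_R=\lfloor (R/\sqrt{Z})^{1/(1+\varepsilon )}\rfloor \), and the sampled variance grows roughly like the power \(R^{2/(1+\varepsilon )}\) appearing in the proof. The values of \(\varepsilon \), R and the sample size below are arbitrary choices, and the estimates are somewhat noisy since \(Y_R\) is heavy-tailed.

import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo illustration of Claim 6.2 with Z ~ exp(1) and l_k = k^{-1-eps}.
eps, n_samples = 0.5, 1_000_000
power = 1.0 / (1.0 + eps)

for R in [10.0, 100.0, 1000.0]:
    Z = rng.exponential(size=n_samples)
    Y = np.floor((R / np.sqrt(Z)) ** power)   # Y_R = #{k : Z <= (R*l_k)^2}
    v = Y.var(ddof=1)
    print(R, v, v / R ** (2 * power))         # the last ratio should remain of order one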

6.2 Nested Disks to Spirals

We continue our construction of a rectifiable Jordan arc \(\Gamma _\varepsilon \) with superlinear growth of \(\textsf{Var}[{\mathcal {E}}_\Lambda (R\Gamma _\varepsilon )]\) when \(\Lambda \) is the infinite Ginibre ensemble. For a given \(\varepsilon >0\), we have found a curve \({\mathcal {C}}={\mathcal {C}}_\varepsilon \) of the form

$$\begin{aligned} {\mathcal {C}}=\textrm{i}[0,\ell _1]\cup \bigcup _{k\ge 1}\partial {{\mathbb {D}}}(0,\ell _k) \end{aligned}$$

such that

$$\begin{aligned} \textsf{Var}\left[ \int _{R{\mathcal {C}}}V_\Lambda (z) \, {\textrm{d}z}\right] \gtrsim R^{2-\varepsilon }. \end{aligned}$$
(6.3)

We want to turn \({\mathcal {C}}\) into a Jordan arc with the bound (6.3) preserved. We do so by forming a spiral \(\Gamma =\cup _{k\ge 1}\gamma _k\), where each revolution \(\gamma _k\) interpolates between the intersections of the consecutive circles \({\mathcal {C}}_k\) and \({\mathcal {C}}_{k+1}\) with the imaginary axis; i.e.

$$\begin{aligned} \gamma _k=\left\{ \textrm{i}\left( \ell _k(1-t) + \ell _{k+1}t\right) e^{2\pi \textrm{i} t}: 0\le t\le 1\right\} . \end{aligned}$$

With the help of (6.3), we prove that

$$\begin{aligned} \textsf{Var}\left[ \frac{1}{2\pi \textrm{i}}\int _{R\Gamma }V_\Lambda (z) \, {\textrm{d}z}\right] \gtrsim R^{2-\varepsilon } \end{aligned}$$

as well. To see why this is so, we first recall that for the Ginibre point process, the (radial) two-point function \(k_\Lambda (t)\) is given by

$$\begin{aligned} k_\Lambda (t) = -\pi ^{-2} e^{-\pi t^2}\,, \end{aligned}$$

see Sect. 2.4, Part I.

Fig. 4 Left: The sequence of seashell domains (shaded); each component is enclosed between a circle, one revolution of the spiral, and a vertical line segment. Right: A single seashell domain \(D_k\), with the arcs \(\gamma _k^\prime \), \(\gamma _k^\perp \) and \({\mathcal {C}}_k\) indicated.

Integration by parts yields

$$\begin{aligned} K(z) = 4 \int _{|z|}^{\infty } \log \Big (\frac{r}{|z|}\Big ) r e^{-\pi r^2} \, {\textrm{d}} r = \frac{2}{\pi } \int _{|z|}^{\infty } e^{-\pi r^2} \, \frac{\textrm{d}r}{r} \,, \end{aligned}$$

which implies that

$$\begin{aligned} {\bar{\partial }} K(z) = -\frac{e^{-\pi |z|^2}}{\pi {{\bar{z}}}} \in L^1({{\mathbb {C}}}, \textrm{d}m)\,. \end{aligned}$$
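
Both formulas are easy to confirm numerically; the sketch below (an illustration only) checks the integration by parts for a few radii and verifies that \(|{\bar{\partial }} K|\) integrates to 1 over the plane.

import numpy as np
from scipy import integrate

# Integration by parts:
#   4 * int_{rho}^inf log(r/rho) r exp(-pi r^2) dr = (2/pi) * int_{rho}^inf exp(-pi r^2) / r dr.
def K_left(rho):
    return 4 * integrate.quad(lambda r: np.log(r / rho) * r * np.exp(-np.pi * r**2), rho, np.inf)[0]

def K_right(rho):
    return (2 / np.pi) * integrate.quad(lambda r: np.exp(-np.pi * r**2) / r, rho, np.inf)[0]

for rho in [0.1, 0.5, 1.0, 2.0]:
    print(rho, K_left(rho), K_right(rho))

# L^1 norm of dbar K in polar coordinates: |dbar K| = exp(-pi r^2)/(pi r), and the factor r
# from dm cancels the 1/r, so the norm is 2*pi * int_0^inf exp(-pi r^2)/pi dr = 1.
l1_norm = 2 * np.pi * integrate.quad(lambda r: np.exp(-np.pi * r**2) / np.pi, 0, np.inf)[0]
print("L1 norm of dbar K:", l1_norm)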

We let \(\gamma '_k\) denote the short vertical segment \(\gamma '_k {\mathop {=}\limits ^{\textrm{def}}} \{\textrm{i} t: \ell _{k+1}\le t\le \ell _k\}\). Then we obtain a closed curve by taking the union \({\mathcal {C}}_k\cup \gamma _k^\perp \cup \gamma '_k\) (traversing \({\mathcal {C}}_k\) with its original orientation, and where \(\gamma _k^\perp \) is \(\gamma _k\) traversed backwards), which encloses a domain \(D_k\); cf. Fig. 4. All the domains \(D_k\) are disjoint, and we denote their union by D (the “seashell”).

By Green’s formula, we have for each \(k\ge 1\) that

$$\begin{aligned} \int _{R({\mathcal {C}}_k\cup \gamma _k^\perp \cup \gamma '_k)}K(|x-y|) \, \textrm{d}y=2\textrm{i}\int _{R D_k} {\bar{\partial }}_y K(|x-y|) \, \textrm{d}m(y), \end{aligned}$$

so that

$$\begin{aligned} \int _{R\gamma _k}K(|x-y|) \, \textrm{d}y&=-\int _{R\gamma _k^\perp }K(|x-y|) \, \textrm{d}y \\ {}&= \int _{R({\mathcal {C}}_k\cup \gamma _k')}K(|x-y|) \, \textrm{d}y -2\textrm{i}\int _{R D_k}{\bar{\partial }}_y K(|x-y|) \, \textrm{d}m(y). \end{aligned}$$

Adding up the contributions for \(k\ge 1\), we obtain

$$\begin{aligned} \int _{R\Gamma }K(|x-y|) \, \textrm{d}y&=\int _{R({\mathcal {C}}\cup i[0,\ell _1])}K(|x-y|) \, \textrm{d}y - 2\textrm{i}\int _{R D}{\bar{\partial }}_y K(|x-y|) \, \textrm{d}m(y) \\ {}&=\int _{R{\mathcal {C}}}K(|x-y|) \, \textrm{d}y + O(1), \end{aligned}$$

where we have used the fact that \({\bar{\partial }} K\in L^1({{\mathbb {C}}},\textrm{d}m)\). Integrating over \(R\Gamma \), we find that

$$\begin{aligned} \iint _{R\Gamma \times R\Gamma }K(|x-y|) \, \textrm{d}y\textrm{d}{\bar{x}}&=\int _{R{\mathcal {C}}}\left[ \int _{R\Gamma }K(|x-y|) \, \textrm{d}{\bar{x}}\right] \textrm{d}y+O(R) \\ {}&=\iint _{R\Gamma \times R{\mathcal {C}}}K(|x-y|) \, \textrm{d}y\textrm{d}{\bar{x}}+O(R). \end{aligned}$$

Repeating the above argument, we finally arrive at

$$\begin{aligned} \iint _{R\Gamma \times R\Gamma }K(|x-y|) \, \textrm{d}y\textrm{d}{\bar{x}} =\iint _{R{\mathcal {C}}\times R{\mathcal {C}}}K(|x-y|) \, \textrm{d}y\textrm{d}{\bar{x}}+O(R), \end{aligned}$$

from which the claim follows.