An elementary approach to Gaussian multiplicative chaos

A completely elementary and self-contained proof of convergence of Gaussian multiplicative chaos is given. The argument further shows that the limiting random measure is nontrivial in the entire subcritical phase $\gamma < \sqrt{2d}$ and that the limit is universal (i.e., the limiting measure is independent of the regularisation of the underlying field).


Introduction
Gaussian multiplicative chaos is a theory initiated by Kahane [9], whose goal (in a slightly updated language) is the definition and study of random measures of the form
$\mu(dz) = e^{\gamma h(z)}\,\sigma(dz), \qquad (1.1)$
where h is a centred Gaussian generalised random field subject to certain assumptions, γ > 0, and σ is a given reference measure. Since h is not defined pointwise but exists only as a distribution, it is not a priori clear what meaning to give to (1.1). In fact, some regularisation of the field and a suitable renormalisation have to be performed in order to construct µ. The theory has generated considerable renewed interest, notably because of its connection with two-dimensional Liouville quantum gravity and the KPZ relations. This is the particular case of the theory where d = 2 and h is the massless Gaussian free field or GFF (see [13]) with appropriate boundary conditions. The paper by Duplantier and Sheffield [3] constructed the volume measure µ in this particular case using arguments restricted to the case of the GFF, such as the domain Markov property, and obtained a version of the KPZ relations. Simultaneously, Rhodes and Vargas [10], among other things, showed that Gaussian multiplicative chaos can be used directly to construct the same object. They also gave a simpler and more general proof of a slightly stronger version of the KPZ relation. We refer the reader to [5] for an excellent survey of relatively recent mathematical developments and general background in this area. Kahane's original work assumes that the covariance kernel K of h is σ-positive, meaning that K(x, y) may be written as the pointwise sum $K(x, y) = \sum_k K_k(x, y)$, where the summands are continuous, symmetric, positive semidefinite functions and, crucially, are pointwise nonnegative.
Under this assumption (which is somewhat restrictive as it is hard to check in practice), he was able to show that a truncation of h associated with the σ-positive decomposition of K gives rise to a well-defined measure µ as in (1.1) and characterised the values of γ for which it is nontrivial for a given reference measure σ. He also studied fine properties of the resulting random measure µ and showed that its law does not depend on the decomposition of the σ-positive kernel K into positive summands.
Much more recently, Robert and Vargas [11] (motivated by applications to three-dimensional turbulence) obtained a significant generalisation of this theory. They were able to show that, without assuming σ-positivity, regularising the field with a general mollifier function θ gives rise to a sequence of measures $\mu_\varepsilon$ such that $\mu_\varepsilon(S)$ converges in law and the law of the limit does not depend on the regularising function θ. Even more recently, Shamov [12] showed in a very general setting that convergence holds in probability and the limit does not depend on the regularisation. In particular, the measure µ is measurable with respect to h. (Conversely, h is measurable with respect to µ, at least in the case of the two-dimensional GFF, by a result of [2].) This was also the subject of a recent preprint by Junnila and Saksman [8], whose results, remarkably, cover the critical case.
The purpose of this short note is to provide an elementary and completely self-contained proof of Kahane's theory together with some of the important developments above. Eventually we are able to reprove the convergence in probability (and in $L^1$) result and show nontriviality in the entire subcritical phase $\gamma < \sqrt{2d}$, together with the universality result showing uniqueness of the limit (independence with respect to the regularisation function θ). While the setup is slightly less general than Shamov [12], we feel that the result and its proof are nevertheless interesting because of the completely elementary nature of the arguments, and the fact that they cover the most interesting cases without significant assumptions on the covariance kernel K (in particular, no σ-positivity assumption is made).

Assumptions. Let h be a Gaussian generalised function on a domain $D \subset \mathbf{R}^k$. Let σ be a Radon measure on D of dimension at least d ≤ k, i.e., such that for all ε > 0 there exists $C = C(\varepsilon)$ with
$\sup_{x \in D} \sigma(B(x, r)) \le C r^{d - \varepsilon} \quad \text{for all } r \in (0, 1]. \qquad (1.2)$
We assume that the covariance of h satisfies
$K(x, y) = \log\frac{1}{|x - y|} + g(x, y), \qquad (1.3)$
where g is smooth over $\bar D \times \bar D$, in the sense that for any smooth test function f supported in D,
$\operatorname{Var}\big((h, f)\big) = \iint K(x, y)\, f(x) f(y)\,dx\,dy.$
Note that this covers the case of a Gaussian free field in two dimensions with, say, Dirichlet boundary conditions (but also the case of free boundary conditions, by changing if necessary γ into 2γ). Assume without loss of generality that D contains the ball of radius 10 around the origin and let S be the unit cube $(0, 1)^k$. Let $\theta : \mathbf{R}^k \to [0, 1]$ be a fixed smooth function of compact support such that $\int \theta(x)\,dx = 1$. Set $\theta_\varepsilon$ to be the approximation of identity based on θ, i.e. $\theta_\varepsilon(x) = \varepsilon^{-k}\theta(x/\varepsilon)$. We then consider an ε-regularisation of the field h by setting
$h_\varepsilon(x) = h \star \theta_\varepsilon(x). \qquad (1.4)$

Remark 1.1. In certain situations one may wish to relax the assumption of smoothness on θ. In fact this is not used at any point in the proof, except to guarantee that $h \star \theta_\varepsilon$ is well defined as a random variable. If this fact is known for other reasons (e.g. if h is the two-dimensional GFF and θ is the uniform distribution on the unit circle, in which case $h_\varepsilon$ is simply the usual circle average), then the theorems and arguments below go through without any change. Let
$\mu_\varepsilon(dx) = e^{\gamma h_\varepsilon(x) - \frac{\gamma^2}{2}\operatorname{Var}(h_\varepsilon(x))}\,\sigma(dx).$
In the case of 2d Liouville quantum gravity, the natural choice for σ is often $\sigma(dz) = R(z, D)^{\gamma^2/2}\,dz$, where R(z, D) denotes the conformal radius of the point z in D. However, the case where σ is the occupation measure of an (independent) complex Brownian motion is also of interest, as it can be used for defining the Liouville Brownian motion [6, 1], the canonical diffusion process in Liouville quantum gravity. This is an example where σ is singular with respect to Lebesgue measure but is nevertheless of dimension two in the sense of (1.2).
Theorem 1.2. Suppose $\gamma < \sqrt{2d}$. Then $\mu_\varepsilon(S)$ converges in probability and in $L^1$ to a limit µ(S). Furthermore, µ(S) does not depend on the choice of the regularising kernel θ subject to the above assumptions.
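As a purely illustrative aside (not part of the proof, and not the setting of the theorem), the normalisation appearing in the theorem can be observed numerically on a toy σ-positive log-correlated field in dimension one, built from independent dyadic layers as in Kahane's original setting; all names and parameter choices below are illustration choices.

```python
import numpy as np

# Toy sigma-positive construction on [0,1]: layer k assigns an independent
# N(0, log 2) variable to each dyadic interval of length 2^{-k}, so after
# n layers Var h_n(x) = n log 2 = log(1/eps_n) with eps_n = 2^{-n}.

rng = np.random.default_rng(0)

def gmc_mass(gamma, n_layers, n_points=2**12, rng=rng):
    """Total mass of the normalised measure e^{gamma h_n - (gamma^2/2) Var h_n} dx."""
    x = (np.arange(n_points) + 0.5) / n_points
    h = np.zeros(n_points)
    var = 0.0
    for k in range(1, n_layers + 1):
        cells = np.floor(x * 2**k).astype(int)            # dyadic cell of each point
        layer = rng.normal(0.0, np.sqrt(np.log(2)), 2**k)  # independent layer values
        h += layer[cells]
        var += np.log(2)
    weights = np.exp(gamma * h - 0.5 * gamma**2 * var)
    return weights.mean()                                  # approximates mu_n([0,1])

# The normalisation makes E mu_n([0,1]) = 1 at every level; in the subcritical
# phase gamma < sqrt(2d) (here d = 1) the limit is nontrivial.
masses = [gmc_mass(0.5, n) for n in (4, 8, 12)]
print(masses)
```

The point of the sketch is only that the renormalised total mass stays of order one as the approximation is refined, in contrast with the unnormalised mass $\int e^{\gamma h_n}$, which diverges.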

Main idea
It is well known and relatively easy to see that for γ sufficiently small (namely $\gamma < \sqrt{d}$), the measures $\mu_\varepsilon$ are uniformly integrable: indeed the quantity $\mu_\varepsilon(S)$ is then bounded in $L^2$, hence any limit must be nontrivial.
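For completeness, the standard second-moment computation behind this claim is the following (a sketch, using the covariance asymptotics (1.3) and the dimension assumption on σ):

```latex
\mathbb{E}\big(\mu_\varepsilon(S)^2\big)
  = \int_{S\times S} e^{\gamma^2 \operatorname{Cov}(h_\varepsilon(x),\,h_\varepsilon(y))}\,
     \sigma(dx)\,\sigma(dy)
  \lesssim \int_{S\times S} |x-y|^{-\gamma^2}\,\sigma(dx)\,\sigma(dy),
```

and since σ has dimension at least d, the last integral behaves like $\int_0^1 r^{-\gamma^2}\, r^{d-1}\,dr$, which is finite precisely when $\gamma^2 < d$.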
Therefore difficulties mainly arise in the phase $\gamma \in [\sqrt{d}, \sqrt{2d})$. (In Liouville quantum gravity, this is the phase of principal interest, as these are precisely the measures thought to arise as scaling limits of FK-weighted planar maps.) The main idea of this work is the following very elementary observation. Any limiting measure µ must be supported on the so-called γ-thick points of the field h: that is, on points x such that
$\lim_{\varepsilon \to 0} \frac{h_\varepsilon(x)}{\log(1/\varepsilon)} = \gamma.$
Such points were studied in detail in the case of the two-dimensional Gaussian free field by [7], but a related notion was already apparent in the early work of Kahane [9], who pointed to its importance. That any limiting measure would have to be supported on γ-thick points is apparent from the definition of $\mu_\varepsilon$ and Girsanov's lemma, which implies that when biasing the law of the field by a factor proportional to $e^{\gamma h_\varepsilon(x)}$, the mean value of $h_\varepsilon(x)$ is shifted from 0 to $\gamma \operatorname{Var}(h_\varepsilon(x)) = \gamma \log(1/\varepsilon) + O(1)$.
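In one-dimensional form, this Girsanov (Cameron-Martin) tilting is the following exact identity for a centred Gaussian variable, applied here to $X = h_\varepsilon(x)$ (a standard computation, stated for any bounded measurable f):

```latex
\frac{\mathbb{E}\big[e^{\gamma h_\varepsilon(x)}\, f(h_\varepsilon(x))\big]}
     {\mathbb{E}\big[e^{\gamma h_\varepsilon(x)}\big]}
  = \mathbb{E}\big[f\big(h_\varepsilon(x) + \gamma \operatorname{Var} h_\varepsilon(x)\big)\big]
  = \mathbb{E}\big[f\big(h_\varepsilon(x) + \gamma \log(1/\varepsilon) + O(1)\big)\big].
```

In other words, under the tilted law the field at x looks like the original field plus a deterministic shift of size $\gamma \log(1/\varepsilon)$, which is exactly γ-thickness.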
Therefore, one can pick α > γ and call a point x bad if its thickness is greater than α, and good otherwise. We then consider the normalised measure $e^{\gamma h_\varepsilon(x)}\,dx$, but restricted to good points. As it turns out, the $L^1$ contribution of the bad points is easily shown to be negligible (essentially by the above Girsanov observation), while the remaining part is shown to remain bounded in $L^2$. We will see that a suitable definition of good points (i.e., not too thick) allows one to make the relevant $L^2$ computation very simple.
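Schematically, writing $G^\alpha_\varepsilon(x)$ for the good event introduced in the next section, the decomposition described above reads (a sketch of the strategy, not a new estimate):

```latex
I_\varepsilon
 = \underbrace{\int_S e^{\gamma h_\varepsilon(x) - \frac{\gamma^2}{2}\operatorname{Var} h_\varepsilon(x)}
      \,\mathbf{1}_{G^\alpha_\varepsilon(x)}\,\sigma(dx)}_{\text{good part: bounded in } L^2}
 \;+\;
   \underbrace{\int_S e^{\gamma h_\varepsilon(x) - \frac{\gamma^2}{2}\operatorname{Var} h_\varepsilon(x)}
      \,\mathbf{1}_{G^\alpha_\varepsilon(x)^c}\,\sigma(dx)}_{\text{bad part: small in } L^1}.
```

Uniform integrability of $I_\varepsilon$ then follows from the two bracketed claims.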
Convergence is then shown to be a consequence of the $L^2$ boundedness (roughly, the good part is a Cauchy sequence in $L^2$). Uniqueness comes from the fact that, once uniform integrability is established, it also follows for another martingale approximation of the measure, arising from the Karhunen-Loeve expansion of h. We then show that the two measures must agree, thereby deducing uniqueness.

Uniform Integrability
The goal of this section will be to prove:

Proposition 3.1. The family $(\mu_\varepsilon(S))_{\varepsilon \in (0,1]}$ is uniformly integrable.

Proof. Let α > 0 be fixed (it will be chosen > γ and very close to γ soon). We define a good event
$G^\alpha_\varepsilon(x) = \big\{ h_{\varepsilon'}(x) \le \alpha \log(1/\varepsilon') \ \text{for all } \varepsilon' \in [\varepsilon, \varepsilon_0] \big\},$
with $\varepsilon_0 \le 1$ for instance. This is the good event that the point x is never too thick up to scale ε.
Lemma 3.2 (Ordinary points are not thick). For any α > 0 we have, uniformly over x ∈ S, $\mathbb{P}(G^\alpha_\varepsilon(x)) \ge 1 - p(\varepsilon_0)$, where the function p may depend on α and, for a fixed α > γ, $p(\varepsilon_0) \to 0$ as $\varepsilon_0 \to 0$.

Proof. Set $X_t = h_\varepsilon(x)$ with $t = \log(1/\varepsilon)$. Then a direct computation from (1.3) shows that $\operatorname{Cov}(X_s, X_t)$ is a smooth function of s and t, and moreover
$\operatorname{Cov}(X_s, X_t) = s \wedge t + O(1), \qquad (3.1)$
where the implicit constant is uniform. In particular, $\operatorname{Var}(X_t) = t + O(1)$.
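Since by (3.1) the process $(X_t)$ has, up to bounded corrections, the covariance of a standard Brownian motion, the estimate behind Lemma 3.2 can be completed by a comparison with a Brownian motion B (a sketch with unoptimised constants; $t_0 = \log(1/\varepsilon_0)$):

```latex
\mathbb{P}\big(\exists\, s \ge t_0 : B_s \ge \alpha s\big)
 \le \mathbb{P}\big(B_{t_0} \ge \tfrac{\alpha t_0}{2}\big)
   + \mathbb{P}\Big(\sup_{u \ge 0}\big(B_{t_0+u} - B_{t_0} - \alpha u\big) \ge \tfrac{\alpha t_0}{2}\Big)
 \le e^{-\alpha^2 t_0/8} + e^{-\alpha^2 t_0},
```

using the Gaussian tail bound for the first term and the classical identity $\mathbb{P}(\sup_{u \ge 0}(B_u - \alpha u) \ge x) = e^{-2\alpha x}$ for the second. Both terms vanish as $t_0 = \log(1/\varepsilon_0) \to \infty$, uniformly in ε, which is the shape of the bound $p(\varepsilon_0)$.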
Lemma 3.3 (Thick points do not contribute). For a fixed α > γ, uniformly over x ∈ S and $\varepsilon \le \varepsilon_0$,
$\mathbb{E}\big(e^{\gamma h_\varepsilon(x) - \frac{\gamma^2}{2}\operatorname{Var} h_\varepsilon(x)}\,\mathbf{1}_{G^\alpha_\varepsilon(x)^c}\big) \le p(\varepsilon_0),$
where $p(\varepsilon_0) \to 0$ as $\varepsilon_0 \to 0$.

Proof. Note that
$\mathbb{E}\big(e^{\gamma h_\varepsilon(x) - \frac{\gamma^2}{2}\operatorname{Var} h_\varepsilon(x)}\,\mathbf{1}_{G^\alpha_\varepsilon(x)^c}\big) = \tilde{\mathbb{P}}\big(G^\alpha_\varepsilon(x)^c\big), \quad \text{where } \frac{d\tilde{\mathbb{P}}}{d\mathbb{P}} = e^{\gamma h_\varepsilon(x) - \frac{\gamma^2}{2}\operatorname{Var} h_\varepsilon(x)}.$
By Girsanov's lemma, under $\tilde{\mathbb{P}}$ the process $X_s$ has the same covariance structure as under $\mathbb{P}$ and its mean is now $\gamma \operatorname{Cov}(X_s, X_t) = \gamma s + O(1)$ for s ≤ t. Hence $\tilde{\mathbb{P}}(G^\alpha_\varepsilon(x)^c) \to 0$ as $\varepsilon_0 \to 0$, uniformly in ε, by Lemma 3.2, since α > γ.
We therefore see that points which are more than γ-thick do not contribute significantly to $I_\varepsilon$ in expectation and can therefore be safely removed. We therefore fix α > γ and introduce
$J_\varepsilon = \int_S e^{\gamma h_\varepsilon(x) - \frac{\gamma^2}{2}\operatorname{Var} h_\varepsilon(x)}\,\mathbf{1}_{G_\varepsilon(x)}\,\sigma(dx),$
with $G_\varepsilon(x) = G^\alpha_\varepsilon(x)$. We will show that $J_\varepsilon$ is bounded in $L^2$, from which the result follows.

Lemma 3.4 ($L^2$ bound). For a fixed $\varepsilon_0$ and $\gamma < \sqrt{2d}$, $\sup_{\varepsilon \le \varepsilon_0} \mathbb{E}(J_\varepsilon^2) < \infty$.

Proof. We have
$\mathbb{E}(J_\varepsilon^2) = \int_{S \times S} e^{\gamma^2 \operatorname{Cov}(h_\varepsilon(x), h_\varepsilon(y))}\,\tilde{\mathbb{P}}\big(G_\varepsilon(x) \cap G_\varepsilon(y)\big)\,\sigma(dx)\,\sigma(dy),$
where $\tilde{\mathbb{P}}$ is a new probability measure obtained via the Radon-Nikodym derivative
$\frac{d\tilde{\mathbb{P}}}{d\mathbb{P}} = \frac{e^{\gamma h_\varepsilon(x) + \gamma h_\varepsilon(y)}}{\mathbb{E}\big(e^{\gamma h_\varepsilon(x) + \gamma h_\varepsilon(y)}\big)}.$
Observe that for $|x - y| \le 3\varepsilon$, say, by Cauchy-Schwarz and (3.1),
$\operatorname{Cov}(h_\varepsilon(x), h_\varepsilon(y)) \le \log(1/\varepsilon) + g(x, x) + o(1).$
On the other hand, for $|x - y| \ge 3\varepsilon$,
$\operatorname{Cov}(h_\varepsilon(x), h_\varepsilon(y)) \le \log\frac{1}{|x - y|} + O(1).$
Also, if $|x - y| \le \varepsilon_0/3$ (else we bound this probability by one), we have, with $\varepsilon' = \max(|x - y|/3, \varepsilon)$,
$\tilde{\mathbb{P}}\big(G_\varepsilon(x) \cap G_\varepsilon(y)\big) \le \tilde{\mathbb{P}}\big(h_{\varepsilon'}(x) \le \alpha \log(1/\varepsilon')\big).$
Furthermore, by Girsanov, under $\tilde{\mathbb{P}}$ we have that $h_{\varepsilon'}(x)$ has the same variance as before (therefore $\log(1/\varepsilon') + O(1)$) and a mean given by
$\gamma \operatorname{Cov}(h_{\varepsilon'}(x), h_\varepsilon(x)) + \gamma \operatorname{Cov}(h_{\varepsilon'}(x), h_\varepsilon(y)) = 2\gamma \log(1/\varepsilon') + O(1). \qquad (3.5)$
To see why (3.5) holds, observe that the mean is the sum of the two covariance terms on the left. The first term gives us $\gamma \log(1/\varepsilon') + O(1)$ by (3.1). For the second term, we only consider the case $|x - y| \ge \varepsilon$ and $\varepsilon' = |x - y|/3$ (as the desired bound in the case $|x - y| \le \varepsilon$ follows directly from Cauchy-Schwarz). In that case, note that for every $w \in B_{\varepsilon'}(x)$ and every $z \in B_\varepsilon(y)$ we have $|w - z| \asymp |x - y|$, so that $K(w, z) = \log(1/\varepsilon') + O(1)$. From this (3.5) follows. Hence
$\tilde{\mathbb{P}}\big(h_{\varepsilon'}(x) \le \alpha \log(1/\varepsilon')\big) \le \mathbb{P}\big(\mathcal{N}(0, 1) \le (\alpha - 2\gamma)\sqrt{\log(1/\varepsilon')} + O(1)\big) \le C (\varepsilon')^{(2\gamma - \alpha)^2/2},$
since α < 2γ. We deduce
$\mathbb{E}(J_\varepsilon^2) \le C \int_{S \times S} |x - y|^{-\gamma^2 + (2\gamma - \alpha)^2/2}\,\sigma(dx)\,\sigma(dy)$
(we will get a better approximation in the next section). Clearly this is bounded if $\gamma^2 - (2\gamma - \alpha)^2/2 < d$, and since α can be chosen arbitrarily close to γ, this is possible if $\gamma < \sqrt{2d}$. This proves the lemma.
To finish the proof of Proposition 3.1, observe that by Lemma 3.3 the contribution of the bad points to $I_\varepsilon$ is arbitrarily small in $L^1$ (uniformly in ε) as $\varepsilon_0 \to 0$, while by Lemma 3.4, for a fixed $\varepsilon_0$, $J_\varepsilon$ is bounded in $L^2$ (uniformly in ε). Hence $I_\varepsilon$ is uniformly integrable.

Convergence
As before, since $\mathbb{E}(I_\varepsilon - J_\varepsilon)$ can be made arbitrarily small by choosing $\varepsilon_0$ sufficiently small, it suffices to show that $J_\varepsilon$ converges in probability and in $L^1$. In fact we will show that it converges in $L^2$, from which convergence will follow. To do this we will show that $(J_\varepsilon)$ forms a Cauchy sequence in $L^2$. To approach this question, we write
$\mathbb{E}\big((J_\varepsilon - J_\delta)^2\big) = \mathbb{E}(J_\varepsilon^2) + \mathbb{E}(J_\delta^2) - 2\,\mathbb{E}(J_\varepsilon J_\delta). \qquad (4.1)$
Our basic approach is thus to estimate $\mathbb{E}(J_\varepsilon^2)$ from above, better than before, and $\mathbb{E}(J_\varepsilon J_\delta)$ from below. Essentially, the idea is that for x, y which are at a small but macroscopic distance, we can identify the limiting distribution of $(h_{\varepsilon'}(x), h_{\varepsilon'}(y))_{\varepsilon' \le \varepsilon_0}$ under the distribution $\mathbb{P}$ biased by $e^{\gamma h_\varepsilon(x) + \gamma h_\delta(y)}$. On the other hand, when x, y are closer than that, we know from the previous section that their contribution is negligible.
Proof. In fact, the proof is almost exactly the same as that of Lemma 4.1, except that $\tilde{\mathbb{P}}$ is now obtained by weighting with $e^{\gamma h_\varepsilon(x) + \gamma h_\delta(y)}$ instead of $e^{\gamma h_\varepsilon(x) + \gamma h_\varepsilon(y)}$. But this changes nothing in the argument leading up to (4.4), and hence (4.5) still holds. Since we obtain a lower bound by restricting ourselves to $|x - y| \ge \eta$, we deduce the corresponding lower bound on $\mathbb{E}(J_\varepsilon J_\delta)$ immediately. Since η is arbitrary, the result follows.
Proof of convergence in Theorem 1.2. Using (4.1) together with Lemmas 4.1 and 4.2, we see that $J_\varepsilon$ is a Cauchy sequence in $L^2$ for any $\varepsilon_0 > 0$. Combining with Lemma 3.3, it therefore follows that $I_\varepsilon$ is a Cauchy sequence in $L^1$ and hence converges in $L^1$ (and also in probability) to a limit $I = \mu(S)$.

Remark 4.3. Note that $\lim_{\varepsilon \to 0} \mathbb{E}(J_\varepsilon^2)$ depends on the regularisation θ, even though, as we will see next, $\lim_{\varepsilon \to 0} I_\varepsilon$ does not.

Uniqueness of the limit
For the proof of independence of the limit with respect to the regularising kernel θ we may assume without loss of generality that D is bounded.
Lemma 5.1. We may write $h = \sum_{i=0}^{\infty} h_i$, where the $h_i$ are independent continuous Gaussian fields, in the sense that for an arbitrary fixed function $f \in L^2(D, dx)$, $\sum_n (h_n, f)$ converges almost surely and the limit agrees with $(h, f)$ almost surely.
Proof. This is basically the Karhunen-Loeve expansion of h (see [14]). Since h is only a generalised function, we do it carefully. By restricting to a smaller domain D′ we may assume without loss of generality that g(x, y) is smooth on $\bar D$. Introduce the Fredholm integral operator
$Tf(x) = \int_D K(x, y)\, f(y)\,dy.$
Note that T is well defined on $L^2(D)$ and maps $L^2(D)$ into continuous functions on $\bar D$ by Lebesgue's dominated convergence theorem. Since D is bounded, we deduce that $T : L^2(D) \to L^2(D)$. Note further that since K(x, y) = K(y, x) by assumption, T is symmetric with respect to the (Lebesgue) inner product on $L^2(D)$. Observe also that since $K \in L^2(D \times D)$, T is a compact operator on $L^2(D)$ (this follows from equicontinuity and the Arzela-Ascoli theorem). By the spectral theorem for compact symmetric operators, we deduce that there exists an orthonormal basis of $L^2(D)$ consisting of eigenfunctions $\{f_k\}_{k \ge 0}$ of T (Theorem 7 in Appendix D.5 of [4]). Let $\lambda_k$ denote the corresponding eigenvalue. We have that $\lambda_k \ge 0$ and $\lambda_k \to 0$ as $k \to \infty$ by the same theorem. Observe that $f_k$ must be continuous whenever $\lambda_k > 0$, since $T f_k = \lambda_k f_k$ and $f_k \in L^2$, so $T f_k$ is continuous.
We observe that we can write
$K(x, y) = \sum_{k=0}^{\infty} \lambda_k f_k(x) f_k(y).$
Indeed, for an arbitrary $f \in L^2$, $Tf = \sum_k \lambda_k (f, f_k) f_k$, which for a fixed x identifies K(x, ·) as an $L^2$ function. Now consider our field h. Observe that the variables $\xi_k = (h, f_k)/\sqrt{\lambda_k}$ (defined when $\lambda_k > 0$) must be i.i.d. standard Gaussian random variables. (Formally, this is the same as saying that $\operatorname{Cov}((h, f_j), (h, f_k)) = \lambda_k \delta_{jk}$.) Set $h_k(z) = \sqrt{\lambda_k}\,\xi_k f_k(z)$, which is an a.s. continuous function and in $L^2(D)$. Observe that for an arbitrary test function f, $\sum_{k=0}^{n} (h_k, f)$ defines a martingale in the filtration generated by $(\xi_0, \xi_1, \ldots)$ and has a variance bounded by $\sum_{k=0}^{\infty} \lambda_k (f, f_k)^2 = \operatorname{Var}((h, f))$. Hence the martingale converges a.s. and in $L^2(\mathbb{P})$. Moreover, the same calculation shows that in fact $(h, f) - \sum_{k=0}^{N} (h_k, f)$ converges to 0 in $L^2(\mathbb{P})$. Hence the limit of the martingale agrees with (h, f). Now define $h^n(z) = \sum_{k=0}^{n} h_k(z)$ and set
$\mu^n(S) = \int_S \exp\Big(\gamma h^n(z) - \frac{\gamma^2}{2}\operatorname{Var}(h^n(z))\Big)\,\sigma(dz).$
Then $\mu^n(S)$ is a nonnegative martingale with respect to the filtration $\mathcal{F}_n = \sigma(\xi_0, \ldots, \xi_n)$, and so it converges almost surely to a limit, which we will call µ′(S).
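As an illustrative aside, the Karhunen-Loeve step above can be mimicked in finite dimensions: discretise a kernel of the form (1.3) on a grid, diagonalise the resulting symmetric matrix, and build the truncated field $h^n = \sum_{k \le n} \sqrt{\lambda_k}\,\xi_k f_k$. The grid size, the regularisation of the logarithm on the diagonal, and all names below are illustration choices, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 200
x = (np.arange(m) + 0.5) / m
dx = 1.0 / m

# Kernel K(x,y) = log(1/|x-y|) + g(x,y) with g = 0 for the illustration;
# the log is capped at the grid scale on the diagonal.
diff = np.abs(x[:, None] - x[None, :])
K = np.log(1.0 / np.maximum(diff, dx))

# Symmetric discretisation of T f(x) = int K(x,y) f(y) dy, then spectral
# decomposition; eigh returns eigenvalues in ascending order.
evals, evecs = np.linalg.eigh(K * dx)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

def field_truncation(n):
    """h^n = sum_{k<n} sqrt(lambda_k) xi_k f_k on the grid."""
    lam = np.clip(evals[:n], 0.0, None)   # discretisation may produce tiny negatives
    xi = rng.normal(size=n)
    # evecs columns are orthonormal in R^m; divide by sqrt(dx) to make
    # them orthonormal in L^2([0,1]).
    return (evecs[:, :n] * np.sqrt(lam) * xi).sum(axis=1) / np.sqrt(dx)

h20 = field_truncation(20)
print(h20.shape)
```

Each call adds nothing beyond linear algebra, but it makes the martingale structure concrete: refining n only appends new independent terms $\sqrt{\lambda_k}\,\xi_k f_k$ without changing the earlier ones.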
Lemma 5.2 (Uniform integrability of $\mu_\varepsilon$ implies uniform integrability of $\mu^n$). The sequence of random variables $(\mu^n(S))_{n \ge 0}$ is also uniformly integrable.
Proof. Observe that $\mathbb{E}(\mu_\varepsilon(S) \mid \mathcal{F}_n) = \mu^n_\varepsilon(S)$, where $\mu^n_\varepsilon$ is defined as $\mu^n$ except with $h^n$ replaced by its regularisation $h^n_\varepsilon = h^n \star \theta_\varepsilon$. (This follows from writing $h = h^n + X$ where X is independent of $h^n$.) When n is finite and ε → 0 there is no problem in making the right-hand side converge to $\mu^n(S)$, by continuity of $h^n$. Hence the left-hand side also converges to some limit as ε → 0, and we have
$\mu^n(S) = \lim_{\varepsilon \to 0} \mathbb{E}(\mu_\varepsilon(S) \mid \mathcal{F}_n). \qquad (5.1)$
Recall that $I_\varepsilon = \mu_\varepsilon(S)$ is uniformly integrable by Proposition 3.1, so by de la Vallee Poussin's criterion there exists a nonnegative, increasing, convex function φ with $\varphi(x)/x \to \infty$ as $x \to \infty$ such that $\mathbb{E}(\varphi(I_\varepsilon)) \le C$ uniformly in ε. Hence (by continuity of φ, conditional Jensen's inequality and Fatou's lemma),
$\varphi(\mu^n(S)) = \lim_{\varepsilon \to 0} \varphi\big(\mathbb{E}(I_\varepsilon \mid \mathcal{F}_n)\big) \le \liminf_{\varepsilon \to 0} \mathbb{E}\big(\varphi(I_\varepsilon) \mid \mathcal{F}_n\big).$
Taking expectations, $\mathbb{E}\big(\varphi(\mu^n(S))\big) \le C$ uniformly in n, and the uniform integrability of $(\mu^n(S))_{n \ge 0}$ follows.
Proof of Theorem 1.2. It suffices to show that µ(S) = µ′(S); this will show that µ(S) does not depend on the regularisation kernel. From (5.1) and Fatou's lemma, we see that $\mu^n(S) \ge \mathbb{E}(\mu(S) \mid \mathcal{F}_n)$.