Stochastic complex Ginzburg-Landau equation with space-time white noise

We study the stochastic cubic complex Ginzburg-Landau equation with complex-valued space-time white noise on the three-dimensional torus. This nonlinear equation is so singular that it can only be understood in a renormalized sense. In the first half of this paper we prove local well-posedness of this equation in the framework of regularity structure theory. In the latter half we prove local well-posedness in the framework of paracontrolled distribution theory.

There are also many papers on its stochastic version, the CGL with a noise term ([BS04a, BS04b, KS04, Oda06, PG11, Yan04], to name but a few). In these preceding works, however, the noise is either non-white or multiplicative. Except for the case of space dimension d = 1 in [Hai02], the stochastic cubic CGL with additive space-time white noise had not been solved.
The difficulty in the case d ≥ 2 is as follows. Since space-time white noise is so rough, a solution u_t(x) = u(t, x) would be a Schwartz distribution in x, not a function, even if it existed. Consequently, the cubic nonlinear term |u_t|^2 u_t does not make sense in the usual way. For this reason, the well-definedness of the equation itself was unclear, and the cubic CGL with space-time white noise was considered too singular when d ≥ 2.
However, two new theories emerged recently which can deal with quite singular stochastic PDEs of this kind. One is regularity structure theory [Hai14] and the other is paracontrolled distribution theory [GIP15]. They are both descendants of rough path theory and their deterministic part looks somewhat similar to the counterpart in rough path theory, at least in spirit. However, their probabilistic part is more complicated than the counterpart in rough path theory, since a non-trivial renormalization of the noise has to be done. (There is another theory based on the theory of renormalization groups [Kup16], which will not be discussed in this paper, however.) Although they are clearly different theories, the examples of stochastic PDEs they can deal with are very similar. A partial list of singular stochastic PDEs which have been solved (locally in time) by these theories is as follows: the parabolic Anderson model (d = 2, 3) [GIP15, Hai14, BBF15], the KPZ equation and its variants (d = 1) [FH14, GP17, Hos16, FH17], the dynamic Φ^4_3-model (d = 3) [Hai14, CC13], the Navier-Stokes equation with space-time white noise (d = 3) [ZZ15], and the FitzHugh-Nagumo equation with space-time white noise (d = 3) [BK16].
The main objective of this paper is to prove local well-posedness of the stochastic cubic complex Ginzburg-Landau equation (1.1) on the three-dimensional torus T^3 = (R/Z)^3 by using these two theories. Here δ denotes the Dirac delta function appearing in the covariance of the driving white noise ξ.
We replace ξ by a smeared noise ξ ǫ with a parameter 0 < ǫ < 1, chosen so that ξ ǫ → ξ as ǫ ↓ 0 in an appropriate topology, and consider a renormalized equation (1.2), where C ǫ is a suitably chosen complex constant (specified later) which diverges as ǫ ↓ 0. We show that the solution to (1.2) converges to some process in an appropriate topology. To this end, we use the theory of regularity structures by Hairer [Hai14] and the theory of paracontrolled distributions by Gubinelli-Imkeller-Perkowski [GIP15]. In the two main results (Theorems 2.1 and 4.1), we use different approximations of ξ. However, we can choose the same approximation ξ ǫ in both theories; see Remark 4.2. Consequently, the solutions obtained in these two theories "essentially coincide", even though the ideas behind the theories are quite different. (It should be noted, however, that we do not have a rigorous proof of the exact coincidence of the two solutions. To prove it, a further investigation of the renormalization constants is needed, which could be an interesting future task.)

We now make a comment on the space dimension. When d ≥ 4, CGL (1.1) is not subcritical in the sense of [Hai14], and therefore the equation cannot be solved (or does not even make sense) by any existing method. Though we do not give a proof in this paper, we believe that the case d = 2 is actually much easier than our case d = 3.

This paper is organized as follows. In Sections 2 and 3, following [Hai14], we apply the theory of regularity structures to the stochastic CGL (1.1). At the beginning of Section 2 we present our main result (Theorem 2.1) in a precise form. Then we construct a regularity structure for (1.1) and prove local well-posedness of (1.1) in a deterministic way. Section 3 is devoted to the probabilistic step, in particular the renormalization procedure.
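For the reader's orientation, the equation and its renormalized approximation referred to above as (1.1) and (1.2) take the standard form of the stochastic cubic CGL. The following is a hedged sketch: the coefficients μ, ν and the exact placement of the constant C_ǫ are our assumptions, not a quotation of the lost displays.

```latex
% schematic form of the stochastic cubic CGL on (0,\infty)\times\mathbf{T}^3:
\partial_t u = (\mathrm{i}+\mu)\triangle u - \nu|u|^2u + \xi,
\qquad u(0,\cdot)=u_0,
% and of its renormalized approximation with smeared noise \xi_\epsilon,
% with the counterterm C_\epsilon u_\epsilon (placement assumed):
\partial_t u_\epsilon = (\mathrm{i}+\mu)\triangle u_\epsilon
  - \nu|u_\epsilon|^2u_\epsilon + C_\epsilon u_\epsilon + \xi_\epsilon,
\qquad u_\epsilon(0,\cdot)=u_0 .
```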
In Sections 4 and 5, we apply the paracontrolled calculus to (1.1). In Section 4, we precisely state our main result (Theorem 4.1) and deterministically solve (1.1) locally in time in a way similar to [MW16]. We prove the probabilistic part in Section 5 using a new method developed in [GP17].
Section A is an appendix, in which we recall the definition of complex multiple Itô-Wiener integrals. The product formula for them is frequently used in Sections 3 and 5.
Notations: We use the following notations. For two functions f and g, we write f ≲ g if there exists a positive constant C such that f(x) ≤ C g(x) for any x. We write f(x) ≈ g(x) if both f(x) ≲ g(x) and g(x) ≲ f(x) hold. To indicate the argument x of a function f, we use both symbols f(x) and f_x.

CGL by the theory of regularity structures
In this and the next sections, we study CGL equation by the theory of regularity structures. We begin by presenting the main result in Theorem 2.1 below.
Theorem 2.1. Let η ∈ (−2/3, −1/2). Then for every u_0 ∈ C^η, the sequence {u ǫ} converges to a limit u in probability as ǫ ↓ 0. Precisely speaking, this means that there exists an a.s. strictly positive random time T, depending on u_0 and ξ, such that u and u ǫ for every ǫ > 0 belong to the space C([0, T], C^η) and ‖u ǫ − u‖_{C([0,T],C^η)} → 0 in probability. Furthermore, u is independent of the choice of ρ.
• For ϕ ∈ C(R^4, C) and δ > 0, we define the space-time scaling of ϕ around z = (t, x_1, x_2, x_3) ∈ R^4. We then define the parabolic Hölder-Besov space C^α_s on R^4 for α ∈ R. At this stage, we do not impose periodicity on elements of C^α_s.
• For α > 0, we denote by C^α_s the space of complex-valued functions ϕ on R^4 such that the Hölder estimate (2.1) holds locally in z, z′ ∈ R^4 and for every k with |k|_s < α.
• Denote by C^0_s = L^∞_loc(R^4, C) the space of locally bounded functions.
• For r > 0, let B^r be the set of complex-valued smooth functions ϕ on R^4 supported in the ball B_s(0, 1) = {z ; ‖z‖_s ≤ 1} and such that their derivatives of order up to r are bounded by 1. Let α < 0 and r = ⌈−α⌉.
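These definitions follow the parabolic scaling conventions of [Hai14]. Written out, they can be sketched as follows; the notation S^δ_{s,z} for the scaled test function is our assumption, and the last line is the standard characterization of negative-regularity spaces.

```latex
% parabolic scaling s=(2,1,1,1) on \mathbf{R}^4=\mathbf{R}_t\times\mathbf{R}^3_x:
|k|_s = 2k_0+k_1+k_2+k_3, \qquad
\|z\|_s = |t|^{1/2}+\textstyle\sum_{i=1}^3 |x_i| ,
% scaling of a test function around z=(t,x):
(\mathcal{S}^{\delta}_{s,z}\varphi)(t',x')
  = \delta^{-5}\,\varphi\bigl(\delta^{-2}(t'-t),\,\delta^{-1}(x'-x)\bigr),
% Hölder-Besov space of negative regularity \alpha<0:
\xi\in\mathcal{C}^{\alpha}_{s} \iff
\bigl|\langle \xi,\ \mathcal{S}^{\delta}_{s,z}\varphi\rangle\bigr|
  \lesssim \delta^{\alpha}
\quad\text{locally uniformly in } z,\ \text{for all }\varphi\in\mathcal{B}^r .
```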

Results on regularity structures
First we recall basic concepts from the theory of regularity structures [Hai14].
Definition 2.2. We say that a triplet T = (A, T, G) is a regularity structure with index set A, model space T and structure group G, if
• A is a locally finite set of real numbers bounded from below and 0 ∈ A.
• T = ⊕_{α∈A} T_α is a graded vector space, where each T_α is a Banach space, and T_0 contains a distinguished element 1.
• G is a subgroup of L(T), the set of continuous linear operators on T, such that, for every Γ ∈ G, α ∈ A, and τ ∈ T_α, we have Γτ − τ ∈ ⊕_{β<α} T_β. Furthermore, Γ1 = 1 for every Γ ∈ G.
Definition 2.3. Let T be a regularity structure. We say that a subspace V = ⊕_{β∈A} V_β with V_β ⊂ T_β is a sector of regularity α ≤ 0 if V is invariant under G (i.e. ΓV ⊂ V for every Γ ∈ G) and α is the minimal index such that V_α ≠ {0}. A sector of regularity 0 is called function-like.
Remark 2.7 ([Hai14, Lemma 6.7]). The reconstruction operator R is local in the sense that the behavior of Rf on a compact set K ⊂ R^4 is uniquely determined by the values of f and Π on an arbitrary neighborhood of K.
Next we introduce specific symbols and operators to describe (1.1) by regularity structure: the polynomial structure, product, integration against Green's function, and the complex conjugate.
We have the regularity structure T_poly given by all polynomials in the symbols X_0, X_1, X_2, X_3, which denote the time and space directions, respectively. Denote X^k = ∏_{i=0}^{3} X_i^{k_i} for a multi-index k ∈ Z_+^4, and 1 = X^{(0,0,0,0)}. We endow these with the parabolic degrees |X^k|_s = |k|_s. Now we define the model space T^poly = ⊕_{n∈Z_+} T^poly_n, where T^poly_n = ⟨X^k ; |k|_s = n⟩.
The group G = R^4 acts on T^poly via Γ_h X^k = (X − h1)^k for every h ∈ R^4. Now we have the regularity structure T_poly = (Z_+, T^poly, R^4). Furthermore, we have the canonical model (Π, Γ) on T^poly given by (2.3). Throughout this section, the regularity structure T = (A, T, G) contains T_poly, i.e. T^poly is contained as a sector and the restriction of G to T^poly coincides with {Γ_h ; h ∈ R^4}. The model (Π, Γ) acts on T^poly by (2.3). Furthermore, we assume that T_n = T^poly_n for every n ∈ Z_+.

Proposition 2.8 ([Hai14, Proposition 3.28]). Let V be a function-like sector which contains T^poly and such that V ⊂ T^poly + T^+_α for some α > 0. Then for every f ∈ D^{γ,η}_P(V), Rf coincides with the component of f in V_0 = ⟨1⟩ and belongs to C^α_s((0, ∞) × R^3), the space of functions ϕ such that the estimate (2.1) holds uniformly over z, z′ ∈ K for every compact set K ⊂ (0, ∞) × R^3.
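The canonical model on the polynomial structure, referred to as (2.3), is the standard one from [Hai14]; a sketch of its defining identities:

```latex
% canonical polynomial model:
(\Pi_z X^k)(z') = (z'-z)^k, \qquad
\Gamma_{z,z'} = \Gamma_{z'-z}\ \text{with}\ \Gamma_h X^k = (X-h\mathbf{1})^k,
% algebraic consistency required of every model:
\Pi_z \,\Gamma_{z,z'} = \Pi_{z'} .
```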
For a pair of sectors (V, W), a product ∗ : V × W → T is a continuous bilinear map such that τ ∗ σ ∈ T_{α+β} for every τ ∈ V_α and σ ∈ W_β. The canonical product on T^poly is given by X^k ∗ X^l = X^{k+l}.
We say that a function K : R^4 \ {0} → C is a regularizing kernel (of order 2) if it can be written as K = Σ_{n≥0} K_n, where {K_n} satisfies the following assumptions.
• K n : R 4 → C is smooth and supported in a ball B s (0, 2 −n ).
• There exists r > 0 such that ∫_{R^4} K_n(z) z^k dz = 0 for every n ≥ 0 and every k with |k|_s ≤ r.
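The list above omits the quantitative bound on {K_n}; in the standard formulation of [Hai14, Assumption 5.1] (a sketch, with parabolic dimension |s| = 5 and order 2) it reads:

```latex
% scaling bound for the dyadic pieces of a regularizing kernel of order 2:
\sup_{z\in\mathbf{R}^4} \bigl|D^k K_n(z)\bigr| \lesssim 2^{(3+|k|_s)\,n}
\qquad \text{for all } n\ge 0 \text{ and all multi-indices } k,
% summing over n recovers the pointwise singularity
% |D^k K(z)| \lesssim \|z\|_s^{-3-|k|_s}.
```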
For a sector V, an abstract integration map I : V → T is a continuous linear map such that
• IV_α ⊂ T_{α+2} for every α ∈ A such that α + 2 ∈ A,
• Iτ = 0 for every τ ∈ V ∩ T^poly.
Given a sector V and an abstract integration map I, we say that a model (Π, Γ) realizes a regularizing kernel K for I if the corresponding identity holds for every α ∈ A, τ ∈ V_α and z ∈ R^4. It is a consequence of Assumption 2.10 that (∂^k K ∗ Π_z τ)(z) is well defined for all k with |k|_s < α + 2.
For such V, the set V̄ = {τ̄ ; τ ∈ V} is also a sector. We assume that the model (Π, Γ) is compatible with the complex conjugate, i.e.
Π_z τ̄ equals the complex conjugate of Π_z τ for every z ∈ R^4 and τ ∈ T. Then we can see that the map D^{γ,η}_P(V) ∋ f ↦ f̄ ∈ D^{γ,η}_P(V̄) is continuous and antilinear, and R f̄ coincides with the complex conjugate of Rf.

Regularity structures associated with CGL and admissible models
For smooth ξ, the CGL equation (1.1) is equivalent to the mild form (2.4), where ∗ denotes the space-time convolution, G is the fundamental solution of the linear equation (2.5) with initial condition G(0, ·) = δ_0, extended to a function G : R^4 \ {0} → C by G(t, ·) ≡ 0 if t ≤ 0, and Gu_0 denotes the solution of (2.5) with initial condition u_0.
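Under these conventions, the mild form (2.4) and the linear equation (2.5) can be sketched as follows; the sign and coefficient of the cubic nonlinearity are our assumption.

```latex
% (2.4) mild formulation:
u = G * \mathbf{1}_{t>0}\bigl(-\nu|u|^2u + \xi\bigr) + Gu_0 ,
% (2.5) linear equation solved by the kernel G:
\partial_t G = (\mathrm{i}+\mu)\triangle G, \qquad G(0,\cdot)=\delta_0 .
```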
We construct a regularity structure associated with (2.4) by following [Hai14, Section 8.1]. We have assumed that the polynomials {X^k} are contained in our regularity structure. Additionally we have a symbol Ξ (noise), an abstract integration map I (space-time convolution with G), and the complex conjugate. Inspired by (2.4), we define F̂ as the smallest set of symbols containing {Ξ, X^k} and closed under the operations:
• If τ, τ′ ∈ F̂, then ττ′ = τ′τ ∈ F̂.
• If τ ∈ F̂, then τ̄ ∈ F̂, where we set X̄^k = X^k.
For a fixed number α < −5/2, we define the homogeneity of each symbol. However, F̂ is too big. More precisely, we consider the subsets U and W, defined as the smallest sets such that {Ξ, X^k} ⊂ W and {X^k} ⊂ U, and
τ ∈ W ⇒ Iτ ∈ U, τ_1, τ_2, τ_3 ∈ U ⇒ τ_1 τ_2 τ̄_3 ∈ W.
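The homogeneity assignment on F̂ follows the standard pattern of [Hai14, Section 8.1]; a sketch (the value of α is fixed later in (−3, −5/2)):

```latex
% homogeneities of the generating symbols, extended multiplicatively:
|\Xi|_s = \alpha, \qquad |X^k|_s = |k|_s, \qquad
|\mathcal{I}\tau|_s = |\tau|_s + 2,
\qquad |\bar\tau|_s = |\tau|_s, \qquad
|\tau\tau'|_s = |\tau|_s + |\tau'|_s .
```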
We set F = U ∪ W and define the model space T as the complex linear span of F, graded by homogeneity. We can see that T contains all polynomials T^poly, and furthermore the abstract integration I, the complex conjugate, and the product U × U × Ū → W are well defined. Here Ū = {τ̄ ; τ ∈ U}.
In order to define T as the model space of a regularity structure, the set {|τ|_s ; τ ∈ F} must be bounded from below. A nonlinear SPDE is called subcritical if the nonlinear terms formally disappear under some scaling which keeps the linear part and the noise term invariant. This is equivalent to the property that all symbols defined as above, except Ξ, have homogeneities strictly greater than |Ξ|_s ([Hai14, Assumption 8.3]). In the present case, this is equivalent to |(IΞ)^2 \overline{IΞ}|_s = 3(2 + α) > α, i.e. α > −3.
We need to define the structure group acting on T. Let T^+ be the complex free commutative algebra generated by the symbols {X_i} and {J_k τ ; τ ∈ F}, with homogeneities |J_k τ|_s = |τ|_s + 2 − |k|_s. In the following, we view J_k as a map from T to T^+, by defining J_k τ = 0 if τ is a polynomial X^l or if |τ|_s + 2 − |k|_s ≤ 0, and extending it linearly to all τ ∈ T.
We construct two linear maps ∆ and ∆^+ recursively: the linear map ∆ : T → T ⊗ T^+ and the linear map ∆^+ : T^+ → T^+ ⊗ T^+. Then by [Hai14, Theorem 8.16], the pair (T^+, ∆^+) is a Hopf algebra; that is, ∆^+ satisfies the coassociativity identity (Id_{T^+} ⊗ ∆^+)∆^+ = (∆^+ ⊗ Id_{T^+})∆^+, the algebra homomorphism 1^* : T^+ → C defined by 1^*(1) = 1 and 1^*(τ) = 0 for τ ∈ F^+ \ {1} is a counit, and the algebra homomorphism A : T^+ → T^+, defined recursively, is an antipode. The pair (T, ∆) is a comodule over T^+, i.e. ∆ satisfies (Id_T ⊗ ∆^+)∆ = (∆ ⊗ Id_{T^+})∆.

We denote by G the set of algebra homomorphisms g : T^+ → C such that g(τ̄) is the complex conjugate of g(τ) for every τ ∈ T^+. Then G is a group with the product • defined by (g • h)(τ) = (g ⊗ h)(∆^+ τ). The inverse of g ∈ G is given by g^{-1} = gA. Each g ∈ G acts on T as the operator Γ_g ∈ L(T) defined by Γ_g τ = (Id ⊗ g)(∆τ). The following theorem is a modification of [Hai14, Theorem 8.24].
Theorem 2.14. Let α ∈ (−3, −5/2) and A = {|τ|_s ; τ ∈ F}. Then T_cgl := (A, T, G) is a regularity structure which contains the polynomial structure T_poly and possesses the complex conjugate on U, the abstract integration map I : W → T, and the products required for the cubic nonlinearity.

We introduce a class of suitable models associated with T_cgl. Let K be a regularizing kernel satisfying Assumption 2.10 with r > 0. We denote by T^(r)_cgl the regularity structure obtained by setting T_γ = {0} for γ > r.
We assume that the model is periodic in the space direction. For n ∈ Z 3 and z = (t, x) ∈ R 4 , we write S n z = (t, x + n).
Definition 2.16. We say that a model (Π, Γ) on T^(r)_cgl is periodic if Π and Γ are invariant under the spatial translations S^n, for every z, z′ ∈ R^4 and n ∈ Z^3.
Lemma 2.17 ([Hai14, Lemma 7.7]). There exist a regularizing kernel K and a smooth function R with compact support such that holds for every periodic function u supported in R + × R 3 and z ∈ (−∞, 1] × R 3 . Furthermore, K and R are supported in R + × R 3 , and K satisfies Assumption 2.10 with arbitrary fixed r > 0.
For a periodic distribution ξ ∈ S′, we define the associated modelled distribution. Now we reformulate (2.4) as a fixed point problem in D^{γ,η}_P. First, for every periodic initial condition u_0 ∈ C^η with η < 0 and η ∉ Z, the function Gu_0 is canonically lifted to an element of D^{γ,η}_P for every γ > η, by defining (Gu_0)(z) = Σ_{|k|_s<γ} (1/k!) X^k ∂^k Gu_0(z) ([Hai14, Lemma 7.5]). Second, note that by Proposition 2.9, the map u ↦ u^2 ū is locally Lipschitz continuous from D^{γ,η}_P(U) to D^{γ+2α+4, 3η}_P, provided γ > |2α + 4| and η ≤ α + 2. Therefore we can consider the problem for u ∈ D^{γ,η}_P. However, F(u) takes values in a sector of regularity α = |Ξ|_s < −5/2 < −2, so that Theorem 2.6 is not sufficient to define RF(u). In order to overcome this problem, we impose the following assumption, uniformly over all compact sets K ⊂ R^4: we assume that ξ = ΠΞ belongs to C̄^α_s for α = |Ξ|_s.
Furthermore, the solution u and the survival time T depend on (u_0, Z, ξ, K ∗ ξ) locally uniformly continuously and locally uniformly lower semicontinuously, respectively, in the topology of C^η × M × C̄^α_s × C(R, C^{α+2}).
To glue local solutions up to the maximal time of existence, note that Ru belongs to the space C((0, T), C^η), even though η < 0. Indeed, the solution can be written as u = IΞ + u^+, where u^+ takes values in the function-like sector U^+. As in Proposition 2.8, Ru^+ is Hölder continuous. By Assumption 2.18-(2), RIΞ = K ∗ ξ belongs to C(R, C^η). For s ∈ (0, T), we start from u_s ∈ C^η and consider the problem u = G_γ(1_{t>s}(Ξ + F^+(u))) + Gu_s, which is well-posed by defining R(1_{t>s}Ξ) := 1_{t>s}ξ. This extends the time interval on which the local solution exists, following [Hai14, Proposition 7.11]. The existence of the maximal solution and its continuity with respect to (u_0, Z, ξ, K ∗ ξ) are obtained by standard arguments from PDE theory.

Renormalization
For each ǫ > 0, the noise ξ ǫ defined at the beginning of Section 2 can be lifted to an admissible and periodic model Z ǫ = (Π ǫ, Γ ǫ) on T^(r)_cgl, by defining the linear map Π ǫ : T → C^∞(R^4) with the canonical additional assumptions. Furthermore, Z ǫ has the property that Π_z τ is a smooth function for every τ ∈ T and z ∈ R^4; as a consequence, R ǫ f is also smooth for every modelled distribution f ([Hai14, Remark 3.15]).

We introduce a renormalization of Z ǫ following [Hai14, Section 8.3]. Let F_0 ⊂ F be a subset such that {τ ∈ F ; |τ|_s ≤ 0} ⊂ F_0, and such that there exists a subset F_* ⊂ F_0 with ∆F_0 ⊂ ⟨F_0⟩ ⊗ T^+_0, where T^+_0 is the complex free commutative algebra generated by the symbols associated with F_*. Let M : ⟨F_0⟩ → ⟨F_0⟩ be a linear map; two further linear maps M̂ and M̂^+ are then associated with M.

Theorem 2.20 ([Hai14, Theorem 8.44]). Consider F_0 and M as above, and assume that for every τ ∈ F_0 and τ̂ ∈ T^+_0 the corresponding algebraic compatibility conditions hold. Then for every admissible model (Π, f) on T^(r)_cgl, the renormalized pair is again an admissible model.

Now we give a renormalization map M in a concrete form. In order to simplify notation, we introduce a graphical notation for the elements of F. First, we draw a circle to represent Ξ. For an element Iτ, we draw a downward black line starting at the root of τ. For a product ττ′, we join these trees at their roots. The complex conjugate τ̄ is denoted by exchanging the colors black and white. With this notation, the elements of negative homogeneity are as follows:
• homogeneity α: Ξ;
• homogeneities 3(α + 2), 2(α + 2), 5α + 12, α + 2 and 4α + 10: the trees built from Ξ by I, the product, and the conjugate;
• homogeneity 2α + 5: the X_i-decorated trees (i = 1, 2, 3);
• homogeneity 0: 1.
Since α > −18/7, the element of homogeneity 7α + 18 > 0 does not appear here.
Considering chaos expansions of Gaussian models as in Section 2.5, we can define the renormalization map M = M(C_1, C_{2,1}, C_{2,2}) for some constants C_1, C_{2,1} and C_{2,2}. Since M must leave the space ⟨F_0⟩ invariant, we choose F_0 accordingly. It then turns out that we can take F_* to be a suitable five-element set of trees. From now on, the subscript i of X_i runs over {1, 2, 3}.
Lemma 2.21. The linear map M satisfies the conditions of Theorem 2.20. Furthermore, the identity holds for every τ ∈ F 0 and z ∈ R 4 .
Proof. Calculations of M̂, ∆M and ∆̂M are completely parallel to those in [Hai14, Section 9.2], so here we only state the results. (Here and in what follows, summation over the repeated index i is omitted.) Therefore, M satisfies the conditions of Theorem 2.20. The relation (2.7) is obtained from these identities and the fact that Π_z(X_i τ)(z) = 0 for every τ with X_i τ ∈ F.
Proposition 2.22. Let Z ǫ = (Π ǫ, Γ ǫ) be the model canonically lifted from a continuous function ξ ǫ. Let S : (u_0, Z) ↦ u be the solution map given by Theorem 2.19. Given constants C_1, C_{2,1} and C_{2,2}, denote by Ẑ ǫ = (Π̂ ǫ, Γ̂ ǫ) the renormalized model given by Theorem 2.20. Then for every periodic u_0 ∈ C^η, û ǫ = RS(u_0, Ẑ ǫ) solves the renormalized equation (2.8).

Proof. Since the fixed point problem (2.6) can be written as u = IF(u) plus terms with values in T^poly, we can find functions ϕ and {ϕ_i}_{i=1}^3 such that the solution u ∈ D^{γ,η}_P of (2.6) with γ = 1+ (greater than, but sufficiently close to, 1) admits an explicit expansion. On the other hand, by Proposition 2.11, û ǫ = R^M u satisfies the corresponding equation. Hence it suffices to show that R^M F(u) coincides with the driving terms of (2.8).
Since R^M u = R(Mu) follows from (2.7), the claim follows. This completes the proof.

Convergence of Gaussian models
Our goal is to show the following renormalization result. We give its proof in the next section, since it is long.
Proposition 2.23. If we choose C^ǫ_1, C^ǫ_{2,1} and C^ǫ_{2,2} as in (3.4), then there exists a random model Ẑ, independent of the choice of ρ, such that for every θ ∈ (0, −5/2 − α), γ < r, p > 1 and every compact set K ⊂ R^4, we have the bounds (2.9) and (2.10). Furthermore, for every T > 0 we have (2.11).

Combining Propositions 2.22 and 2.23, we obtain Theorem 2.1 with a suitable choice of the constant C ǫ.

Proof of convergence of renormalized models

In this section, we give a proof of Proposition 2.23. Since the estimates (2.10) and (2.11) are obtained in [Hai14, Proposition 9.5], we focus on the estimate (2.9). By [Hai14, Theorem 10.7], it suffices to show that there exist κ, θ > 0 such that, for every τ ∈ F with |τ|_s < 0, every test function ϕ ∈ B^r and every z ∈ R^4, there exists a random variable ⟨Π̂_z τ, ϕ⟩ satisfying the moment bounds (3.1). We fix z ∈ R^4 throughout this section; the estimates in this section are uniform in z.
This section is organized as follows. In Section 3.1, we recall the Wiener chaos decomposition of the random variable Π̂_z τ and introduce graphical notation to describe its kernel. In Section 3.2, we give some useful estimates for proving (3.1). In Section 3.3, we show the required estimate (3.1) for each symbol τ. In Section 3.4, we derive the explicit forms of the renormalization constants and their divergence orders.

Wiener chaos decomposition
The driving noise ξ is space-time white noise on R × T^3, extended periodically to R^4. More precisely, we are given the complex multiple Wiener integral J_{p,q} on (E, m) = (R × T^3, dt dx_1 dx_2 dx_3) (see Section A), and the random distribution ξ is defined by ⟨ξ, ϕ⟩ = J_{1,0}(πϕ), where ϕ is a compactly supported smooth function and πϕ = Σ_{n∈Z^3} S^n ϕ is its periodic extension. We note that z ∈ R^4 is fixed. We write several kernels by combining these notations.
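In terms of the complex multiple Wiener integrals, the pairing with the complex-valued white noise can be sketched as follows; the L^2 normalization of the covariance is our assumption.

```latex
% complex space-time white noise via the (1,0)-th Wiener integral:
\langle\xi,\varphi\rangle = J_{1,0}(\pi\varphi),
% circularly symmetric covariance structure:
\mathbf{E}\bigl[\langle\xi,\varphi\rangle\,
  \overline{\langle\xi,\psi\rangle}\bigr]
  = \langle\pi\varphi,\ \pi\psi\rangle_{L^2(E,m)},
\qquad
\mathbf{E}\bigl[\langle\xi,\varphi\rangle\,\langle\xi,\psi\rangle\bigr]=0 .
```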

Estimates of singularity of kernels
From the scaling property of K = Σ_n K_n, we can see that |K(z)| ≲ ‖z‖_s^{−3}. It is useful to quantify the singularity of kernels of this type, and for α, β ≥ 0 we introduce a dedicated notation for such classes of kernels. In the basic estimate, it suffices to show that the remainder term R is bounded.
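The kernel classes implicit here can be sketched as follows; the symbol S^α and the convolution rule are consistent with their later usage (e.g. K, K̄ ∈ S^3 and Q ∈ S^1 in the proof of Proposition 3.4).

```latex
% class of kernels with parabolic singularity of order \alpha at 0:
K\in\mathcal{S}^{\alpha} \iff |K(z)| \lesssim \|z\|_s^{-\alpha},
% since the parabolic dimension of \mathbf{R}^4 is |s|=5:
K_1\in\mathcal{S}^{\alpha},\ K_2\in\mathcal{S}^{\beta},\ \alpha+\beta>5
\ \Longrightarrow\ K_1 * K_2 \in \mathcal{S}^{\alpha+\beta-5}.
```

For instance, Q = K ∗ K̄ with K, K̄ ∈ S^3 lands in S^{3+3−5} = S^1, matching the proof of Proposition 3.4.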

Ξ, , , , ,
For τ = Ξ and the first-order trees, the required estimates follow from [Hai14, Proposition 9.5]. We now treat the remaining second-order trees. By definition and by applying the product formula, if we choose C^ǫ_1 = ∫ |K ǫ(z)|^2 dz, the required estimates (2.9) easily follow. In the subsequent computations, the estimates of Π̂^ǫ_z τ − Π̂_z τ are obtained by arguments similar to those for Π̂^ǫ_z τ, using the bound on K ǫ − K; we therefore show only the uniform boundedness and not the convergence estimates explicitly. For detailed proofs, see [Hos16, Section 4.8].

X
and 0 > 2|X_i ·|_s = 2(2α + 5). The case of the conjugate tree τ = X_i · is similar. Now we turn to the fourth-order trees. In particular, we consider the renormalizations of the two trees whose chaos decompositions contain zeroth-order terms, since those of the two other elements do not. By applying the product formula (Theorem A.1) and choosing the constants appropriately, we obtain the required bounds. Indeed, since kernels belonging to the same chaos order have the same graphs, up to the difference between K and K̄, it suffices to show the bounds for one of these kernels in each chaos order. For the remaining zeroth-order terms, we have the bounds for arbitrarily small κ > 0. For the second-order terms, the bounds follow from Lemma 3.3, and similarly for the fourth-order terms with small κ > 0. As a consequence, we obtain (2.9) for arbitrarily small κ > 0. The remaining cases are similar.

, , , ,
We first treat one representative tree; two further cases are similar. By definition (the summation over i = 1, 2, 3 is again omitted), the fourth-order term is bounded by Lemma 3.3. For the second-order term, we decompose it into two parts; by Schwarz's inequality, it suffices to bound each part separately. The first part is bounded for small κ > 0, and the second part is bounded by Lemma 3.2 for small κ > 0. As a consequence, we obtain the required bound. Finally we treat the fifth-order tree; the other one is similar. The fifth-order term is bounded directly. For the third-order terms, the required bounds are obtained by multiplying (3.3) by ‖z′‖_s ‖z″‖_s. For the first-order terms, we need to introduce a further renormalization, after which the remaining term is bounded. As a consequence, we obtain the required estimate.
Proof. For (1), in the decomposition we see that the last three terms are bounded by ‖z‖_s^{5−α−β}, using [Hos16, Lemma 4.14]. Hence it suffices to estimate the leading convolution term A ∗ B.
The assertion (2) is similarly obtained.
Proof of Proposition 3.4. First we show the estimate of C^ǫ_1. Since K, K̄ ∈ S^3, we have Q ∈ S^1, and the estimate follows; the last equality follows from the scaling property of Q and the boundedness of Q^−. Next we show the estimate of C^ǫ_{2,1}. It suffices to consider ∫_{‖z‖_s>ǫ} R(z) dz, where R = K̄Q^2 ∈ S^5. However, we replace R by a function S^+ defined below. Let ϕ be a smooth, nonnegative function with compact support. By the scaling property of R, we obtain the desired estimate. The estimate of C^ǫ_{2,2} is similar.

CGL by the theory of paracontrolled distributions
In Sections 4 and 5, we study well-posedness of CGL (1.1) by using the paracontrolled distribution theory introduced in [GIP15]. In that paper, the authors studied problems such as differential equations driven by fractional Brownian motion, a Burgers-type stochastic PDE, and a nonlinear version of the parabolic Anderson model. Later, Catellier-Chouk [CC13] showed local well-posedness of the three-dimensional stochastic quantization equation (the dynamic Φ^4_3 model), which is an R-valued analogue of CGL. Our proof of the local well-posedness of CGL consists of two parts: a deterministic part and a probabilistic part.
In Section 4, we deal with a deterministic version of CGL and construct a solution map from a space of driving vectors to a space of solutions. We also see that the solution map is continuous. In this section, ξ is a deterministic distribution which takes values in the Hölder-Besov space C^{−5/2−κ} for any κ > 0 small enough. To construct the solution map, we rely on the method introduced by Mourrat-Weber [MW16]. We state the precise assertion concerning the well-posedness in Theorem 4.27. In Theorem 4.30, we see that the solution obtained in Theorem 4.27 solves the renormalized equation (1.2) in the usual sense.
Section 5 is the probabilistic part and is devoted to constructing a driving vector X associated with the space-time white noise ξ defined on R × T^3. We follow the approach of [GP17] and obtain the driving vector in Theorem 5.9. Here we explain how to mollify the white noise ξ. Let χ be a smooth real-valued function defined on R^3 such that (1) supp χ ⊂ B(0, 1), where B(x, r) denotes the open ball of radius r > 0 and center x ∈ R^3, and (2) χ(0) = 1. We set χ ǫ(k) = χ(ǫk) for every k ∈ Z^3. Define e_k(x) = e^{2πik·x} for every k ∈ Z^3 and x ∈ T^3, where the dot · denotes the usual inner product. We define ξ ǫ by smearing ξ in the spatial direction via the Fourier multiplier χ ǫ. Here, {ξ̂(k)}_{k∈Z^3} denotes the spatial Fourier transform of ξ; its components are independent copies of complex white noise on R. We see that ξ ǫ → ξ in an appropriate topology.

For the smeared noise ξ ǫ, we define a family of processes {X ǫ}_{0<ǫ<1}. In this definition of X ǫ, we use the dyadic partition of unity {ρ_m}_{m=−1}^∞ via the resonant products, and the renormalization constants c^ǫ_1, c^ǫ_{2,1} and c^ǫ_{2,2}; see Section 4.1 for the definition of {ρ_m}_{m=−1}^∞ and (5.5) for the renormalization constants. We obtain the driving vector X as a limit of {X ǫ}_{0<ǫ<1}. Setting c ǫ = 2(c^ǫ_1 − νc^ǫ_{2,1} − 2νc^ǫ_{2,2}), we have |c ǫ| → ∞ as ǫ → 0. By combining Theorems 4.27, 4.30 and 5.9, we obtain the following main theorem of Sections 4 and 5.

Theorem 4.1. Let 0 < κ′ < 1/18 and u_0 ∈ C^{−2/3+κ′}. Consider the renormalized equation (1.2) with C ǫ = c ǫ. Then, for every 0 < ǫ < 1, there exist a unique process u ǫ and a random time T^ǫ_* ∈ (0, 1] such that
• T^ǫ_* converges to some a.s. positive random time T_* in probability,
• u ǫ converges to some process u defined on [0, T_*) × T^3 in an appropriate sense.
Furthermore, u is independent of the choice of {ρ_m}_{m=−1}^∞ and χ.
Here we make some comments on this theorem. Note that the processes u ǫ and u are obtained by substituting X ǫ and X into the solution map, respectively. Since X ǫ converges to X and the solution map is continuous, we see that u ǫ converges to u. In addition, u ǫ solves (1.2) in the usual sense; hence the theorem follows. We need to pay attention to the assertion that u is independent of the choice of ξ ǫ. Recall that X ǫ depends on {ρ_m}_{m=−1}^∞, so u ǫ may, too. However, it does not: in fact, in Proposition 5.21 we obtain an expression for the renormalization constant c ǫ which does not depend on {ρ_m}_{m=−1}^∞. Hence (1.2) is independent of {ρ_m}_{m=−1}^∞, and so is the solution u ǫ. As a consequence, the limit u is independent of {ρ_m}_{m=−1}^∞. In addition, the limit u is independent of χ because the driving vector X is independent of χ (Theorem 5.9). Hence the solution depends on neither {ρ_m}_{m=−1}^∞ nor χ.
Remark 4.2. As stated in Section 1, we can choose a common approximating noise for the renormalized equation (1.2) to obtain the solutions in Theorems 2.1 and 4.1. In this sense, the solutions in Theorems 2.1 and 4.1 "essentially coincide," or at least look very similar. In Theorem 4.1, the noise ξ is smeared only in the spatial direction. However, we can also consider the case where the noise is smeared in both the temporal and spatial directions. For a non-negative Schwartz function ̺ on R^4 with total integral 1, we consider the scaling ̺ ǫ(t, x) = ǫ^{−5}̺(ǫ^{−2}t, ǫ^{−1}x), which is the mollifier considered in Theorem 2.1, and replace ξ by the smooth noise ξ̃ ǫ.
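Concretely, the spatially smeared noise of Theorem 4.1 and the space-time mollification of this remark can be sketched as follows (the formula for ξ_ǫ is our reading of the Fourier-multiplier construction above):

```latex
% spatial smearing via the Fourier multiplier \chi(\epsilon k):
\xi_\epsilon(t,x) = \sum_{k\in\mathbf{Z}^3} \chi(\epsilon k)\,
  \hat\xi(t,k)\, e_k(x),
% space-time smearing via the parabolic mollifier of Theorem 2.1:
\tilde\xi_\epsilon = \varrho_\epsilon * \xi, \qquad
\varrho_\epsilon(t,x) = \epsilon^{-5}\,
  \varrho\bigl(\epsilon^{-2}t,\ \epsilon^{-1}x\bigr).
```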

Besov-Hölder spaces and paradifferential calculus
In this section, we introduce the Besov-Hölder spaces and paradifferential calculus. The results in this section can be found in [GIP15, BCD11] or follow easily from them.

Besov spaces
We introduce the Besov spaces and recall their basic properties. Let D ≡ D(T^3, C) be the space of all smooth C-valued functions on T^3 and D′ its dual. We set e_k(x) = e^{2πik·x} for every k ∈ Z^3 and x ∈ T^3. The Fourier transform Ff of f ∈ D is defined by Ff(k) = ∫_{T^3} e_{−k}(x)f(x) dx, and its inverse F^{−1}g for a rapidly decreasing sequence {g(k)}_{k∈Z^3} is defined by F^{−1}g(x) = Σ_{k∈Z^3} g(k)e_k(x). We denote by {ρ_m}_{m=−1}^∞ a dyadic partition of unity. We are now ready to define the Besov space C^α = C^α(T^3, C) for α ∈ R: it is the completion of D under the norm ‖·‖_{C^α}. The next proposition collects frequently used results on Besov spaces, including the interpolation inequality: if α, α_1, α_2 ∈ R satisfy α = (1 − θ)α_1 + θα_2 for some 0 < θ < 1, then ‖f‖_{C^α} ≲ ‖f‖_{C^{α_1}}^{1−θ} ‖f‖_{C^{α_2}}^θ.
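With the dyadic partition of unity {ρ_m}, the Littlewood-Paley blocks and the Besov norm take the standard form; a sketch consistent with [BCD11]:

```latex
% Littlewood-Paley blocks on \mathbf{T}^3:
\Delta_m f = \mathcal{F}^{-1}\bigl(\rho_m\,\mathcal{F}f\bigr),
\qquad m \ge -1,
% Hölder-Besov norm defining \mathcal{C}^\alpha = B^\alpha_{\infty,\infty}:
\|f\|_{\mathcal{C}^{\alpha}} = \sup_{m\ge -1}
  2^{m\alpha}\,\|\Delta_m f\|_{L^\infty(\mathbf{T}^3)} .
```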

Paraproducts and Commutator estimates
For every f ∈ C^α and g ∈ C^β, we define the paraproducts f ≺ g, f ≻ g and the resonant product f ∘ g. The following are properties of the paraproduct. (1) For every β ∈ R, ‖f ≺ g‖_{C^β} ≲ ‖f‖_{L^∞} ‖g‖_{C^β}.
Then the product fg is well-defined as an element of C^α.
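The decomposition behind these statements is Bony's paraproduct decomposition; sketched in the notation of [GIP15]:

```latex
% Bony decomposition of a product into para- and resonant parts:
f \prec g = \sum_{m<n-1} \Delta_m f\,\Delta_n g, \qquad
f \circ g = \sum_{|m-n|\le 1} \Delta_m f\,\Delta_n g, \qquad
f \succ g = g \prec f,
fg = f \prec g \;+\; f \circ g \;+\; f \succ g,
% f \prec g is always defined; the resonant part f \circ g maps
% \mathcal{C}^\alpha\times\mathcal{C}^\beta\to\mathcal{C}^{\alpha+\beta}
% provided \alpha+\beta>0.
```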

Regularity of C α -valued functions
Here we consider C^α-valued functions and introduce several classes of them. Let 0 < δ ≤ 1 and η ≥ 0, and define these classes as follows:
• C_T C^α is the space of all continuous functions from [0, T] to C^α, equipped with the supremum norm.
• C^δ_T C^α is the space of all δ-Hölder continuous functions from [0, T] to C^α, equipped with the δ-Hölder seminorm.
Remark 4.7. We introduced the norms on the spaces E^η_T C^α and E^{η,δ}_T C^α in order to control the explosion at t = 0. The definition of L^{α,δ}_T is natural in view of the time-space scaling of CGL.
We present results on the smoothing effect of the semigroup {P^1_t}_{t≥0}. Then we have the corresponding Schauder-type estimates.
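The smoothing effect referred to here is the standard Schauder estimate for the CGL semigroup; a sketch, where the exact generator (we take (i+μ)△ − 1, suggested by the superscript 1) is our assumption:

```latex
% semigroup with mass term and its parabolic smoothing on Besov spaces:
P^1_t = e^{t\left((\mathrm{i}+\mu)\triangle - 1\right)}, \qquad
\|P^1_t f\|_{\mathcal{C}^{\alpha+2\theta}}
  \lesssim t^{-\theta}\,\|f\|_{\mathcal{C}^{\alpha}}
\quad\text{for } t\in(0,T],\ \theta\ge 0,\ \alpha\in\mathbf{R}.
```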

Definitions of driving vectors and solutions
First of all, we give the definition of a driving vector. Let 0 < κ < κ′ < 1/18 and T > 0. The following is the definition of a driving vector.
Definition 4.11. We call a vector of space-time distributions X, with components indexed by the tree symbols and satisfying the stated consistency relations for the integrated components, a driving vector of CGL. We denote by X^κ_T the set of all driving vectors, and we define the norm ‖·‖_{X^κ_T} as the sum of the norms of the components.
Note that we assume Hölder continuity in time for the relevant component, namely that it belongs to $\mathcal{L}^{\frac{1}{2}-\kappa,\,\frac{1}{4}-\frac{\kappa}{2}}_T$. We easily see that the space $\mathcal{X}^\kappa_T$ is a closed subset of the product Banach space. Next we define the space of solutions and give the notion of a solution.
We describe the tree-like symbols appearing in the definition. A dot denotes the white noise and a line the operation $I$; hence the single-edge tree represents $I(\xi) = Z$. The mirrored symbols stand for the complex conjugate $\bar{Z}$ and the product $Z\bar{Z}$, respectively, so the three-branch tree means $I(Z^2\bar{Z})$. Finally, the last symbol denotes the resonance term of $I(Z^2\bar{Z})$ and $Z$.
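Spelled out in formulas, the objects represented by these trees are:

```latex
% dot = white noise \xi, edge = integration map I
\xi \;\mapsto\; Z = I(\xi), \qquad
\bar{Z} = \overline{I(\xi)}, \qquad
Z\bar{Z}, \qquad
I(Z^2\bar{Z}), \qquad
I(Z^2\bar{Z}) \circ Z \quad \text{(resonance term)} .
```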
Definition 4.12. Next, we fix $X \in \mathcal{X}^\kappa_T$ and set $Z$ and $W$ to be the corresponding components of $X$. We then define $F$ and $G$; here $G_1(v,w), \dots, G_8(v,w)$ will be defined shortly.
Since $Z_t \in \mathcal{C}^{-\frac{1}{2}-\kappa}$ and $W_t \in \mathcal{C}^{\frac{1}{2}-\kappa}$, the products $W_t Z_t$ and $W_t \bar{Z}_t$ are not defined a priori; we define them through the components of the driving vector. The products $W^2\bar{Z}$ and $W\bar{W}Z$ are defined in the same way. It follows from Proposition 4.4 that $(WZ)_t$ and the related products have the expected regularity. In order to define $G_6(v,w)$, we use $\mathrm{com}(v,w)$, defined as follows. For every $v_0 \in \mathcal{C}^{-\frac{2}{3}+\kappa'}$ and $(v,w) \in \mathcal{D}^{\kappa,\kappa'}_T$, we set $\mathrm{com}(v,w)$ by the formula (4.7). From Lemma 4.23, we see that the pairings of $\mathrm{com}(v,w)$ with the components of $X$ are well-defined. Roughly speaking, $\mathrm{com}(v,w)_t$ is a commutator-type correction term. We are now in a position to define $G_1, \dots, G_8$. We write $u_2 = v + w$ and set, among other terms, $-2\nu R(u_2, X, X) - \nu R(u_2, X, X)$, for every $(v_0, w_0) \in \mathcal{C}^{-\frac{2}{3}+\kappa'} \times \mathcal{C}^{-\frac{1}{2}-2\kappa}$. We will use Proposition 4.9 to check that the map is a well-defined map from $\mathcal{D}^{\kappa,\kappa'}_T$ to itself and has good properties.
Definition 4.13. For every $(v_0, w_0) \in \mathcal{C}^{-\frac{2}{3}+\kappa'} \times \mathcal{C}^{-\frac{1}{2}-2\kappa}$ and $X \in \mathcal{X}^\kappa_T$, we interpret (4.10) as a fixed point problem for the map $\mathcal{M} : (v,w) \mapsto (\mathcal{M}_1(v,w), \mathcal{M}_2(v,w))$. We show that the map $\mathcal{M}$ is well-defined and a contraction in Section 4.3. Section 4.4 is devoted to the construction and uniqueness of the solution. We show that the solution to (1.1) satisfies a renormalized equation in Section 4.5.
There, we also verify the validity of our notion of solution to CGL.
Before starting our discussion, we will remark on the function spaces we have just introduced.

Properties of M 1
Let us start our discussion with F .
where C is a positive constant depending only on κ, κ ′ , µ and ν.
Combining this with $X_t \in \mathcal{C}^{-1-\kappa}$ and using Proposition 4.4, we see $\Phi_t X_t \in \mathcal{C}^{-1-\kappa}$, with the corresponding estimate. The conjugate term admits a similar bound. From the definition of $F(v,w)$, we obtain the assertion.
Next we show that M 1 is Lipschitz continuous.
Proposition 4.19. For any $(v^{(1)}, w^{(1)}), (v^{(2)}, w^{(2)}) \in \mathcal{D}^{\kappa,\kappa'}_T$, we have the corresponding Lipschitz estimate. Here, $C_3$ and $C_4$ are positive constants depending only on $\kappa$, $\kappa'$, $\mu$, $\nu$ and the relevant norms; in particular, they are given by at most first-order polynomials in those norms. Proof. The assertion follows from Lemma 4.18 and a similar argument to the proof of Proposition 4.17.

Properties of M 2
Here, we consider properties of $\mathcal{M}_2$. Let $0 < T \le 1$. We fix $X \in \mathcal{X}^\kappa_1$ and write $Z$ and $W$ for the corresponding components of $X$. We denote by $\delta_{st}$ the difference operator, that is, $\delta_{st} f = f_t - f_s$. First of all, we study $\mathrm{com}(v,w)$ defined by (4.7). Let $v_0 \in \mathcal{C}^{-\frac{2}{3}+\kappa'}$ and $(v,w) \in \mathcal{D}^{\kappa,\kappa'}_T$. For notational simplicity, we set $\Phi_1 = -2\nu(-\nu W + v + w)$; the companions $\Phi_2$, $\Psi_1$ and $\Psi_2$ are defined analogously. Remark 4.20. The implicit constants appearing in Lemmas 4.21, 4.22 and 4.23 depend only on $\kappa$, $\kappa'$, $\mu$, $\nu$ and $\|X\|_{\mathcal{X}^\kappa_1}$; in particular, they are given by at most first-order polynomials in $\|X\|_{\mathcal{X}^\kappa_1}$. Lemma 4.21. For every $0 < t \le T$, we have the following: (1) the stated identity holds; (2) $U_t \in \mathcal{C}^{1+\kappa'}$, with the corresponding estimate. Proof. We show the first assertion. For $(\Phi, \Psi) = (\Phi_1, \Psi_1)$, Proposition 4.10 yields the claimed identity. Substituting $\int_0^t P^1_{t-s} \Psi_s\,ds = X_t - P^1_t X_0$ into the first term above, we obtain the claim. Since a similar equality holds for $(\Phi, \Psi) = (\Phi_2, \Psi_2)$, we have verified the first assertion.
Lastly, we estimate the third term. We consider the contributions of $W$, $v$ and $w$ separately. In the proof, we use Proposition 4.10; we also use $\|\Psi_s\|_{\mathcal{C}^{-1-\kappa}} \lesssim 1$. For $W$, we obtain the stated bound, and similarly for $v$ and $w$. The proof is completed.
Proof. We estimate each term in the upper bound of $\|\mathrm{com}(v,w)(t)\|_{\mathcal{C}^{1+\kappa'}}$ in Lemma 4.22 by using Remark 4.15. The first three terms are estimated directly. To estimate the other terms, we use the fact that the relevant inequality holds for $0 < \theta_1, \theta_2 < 1$ and $t > 0$. From Remark 4.15, we obtain the remaining bounds. Combining them, we obtain the estimate of $\|\mathrm{com}(v,w)(t)\|_{\mathcal{C}^{1+\kappa'}}$.
Lemma 4.24. For any $(v,w) \in \mathcal{D}^{\kappa,\kappa'}_T$ and $0 < t \le T$, we have the following estimates. Here, $C$ is a positive constant depending only on $\kappa$, $\kappa'$, $\mu$, $\nu$ and $\|X\|_{\mathcal{X}^\kappa_1}$, and it is given by a third-order polynomial in $\|X\|_{\mathcal{X}^\kappa_1}$.
The estimates of $G_4(v,w)(t)$ and $G_5(v,w)(t)$ are obtained easily. The terms which admit the lowest regularity in the definition of $G_4(v,w)(t)$ are $W_t X$ and $\bar{W}_t X$, whose regularity is $-\frac{1}{2}-2\kappa$; this yields the first bound. From Proposition 4.6, we obtain the next estimate. From the definition of $G_6(v,w)(t)$, we obtain the corresponding bound; in the last line, we used Lemma 4.23. For the remaining symbols $\tau$, we argue similarly, using Remark 4.15. The proof is completed.
Here, $C_1$ and $C_2$ are positive constants depending only on $\kappa$, $\kappa'$, $\mu$, $\nu$ and $\|X\|_{\mathcal{X}^\kappa_1}$. They are given by at most third-order polynomials in $\|X\|_{\mathcal{X}^\kappa_1}$.
Proof. Recall (4.9). It follows from Proposition 4.9 that Combining these, we have shown the assertion.
We also have local Lipschitz continuity of M 2 .

Proposition 4.26. For any $(v^{(1)}, w^{(1)}), (v^{(2)}, w^{(2)}) \in \mathcal{D}^{\kappa,\kappa'}_T$, we have the corresponding local Lipschitz estimate. Here, $C_3$ and $C_4$ are positive constants depending only on $\kappa$, $\kappa'$, $\mu$, $\nu$, $\|X^{(i)}\|_{\mathcal{X}^\kappa_1}$ and $\|(v^{(i)}, w^{(i)})\|_{\mathcal{D}^{\kappa,\kappa'}_T}$; in particular, they are given by at most second-order polynomials in these quantities. Proof. The assertion can be shown in a similar way to Proposition 4.25.

Local existence and uniqueness
We show local well-posedness of CGL (1.1). This is the most important theorem in this section.
We show the last assertion. From (4.13) and (4.14), we see that $T_*$ depends continuously on the initial condition $(v_0, w_0)$ and the driving vector $X$. Since $C_2$ depends continuously on the driving vector $X$, the map $M_*$ depends continuously on $(v_0, w_0)$ and $X$. From this fact and the continuity of $X \mapsto \|X\|_{\mathcal{X}^\kappa_1}$, we see the continuity of $T_*$; hence $T^{(n)}_* \to T_*$. Without loss of generality, for fixed $t < T_*$, we may assume that $T^{(n)}_* > t$ for every $n$. From (4.15) and the continuity of $M_*$ with respect to $(v_0, w_0)$ and $X$, we see $\sup_n \|(v^{(n)}, w^{(n)})\|_{\mathcal{D}^{\kappa,\kappa'}_T} < \infty$. From this fact and (4.12), we can choose constants $C'_3$ and $C'_4$ uniformly in $n$. Iterating this argument, we obtain the convergence on $[0,t]$. The proof is completed.
The lower semi-continuity of $T_{\mathrm{sur}}$ follows from the continuity of $T_*$. For any fixed $t < T_{\mathrm{sur}}$, we can construct a unique solution on $[0,t]$ by gluing a finite number of local solutions as above. In this procedure, the length of each time interval converges, so that the solution $(v^{(n)}, w^{(n)})$ exists on $[0,t]$ for sufficiently large $n$. This implies $t < \liminf_{n\to\infty} T^{(n)}_{\mathrm{sur}}$. For the last assertion, we can start from $(v,w)(T_{\mathrm{sur}} - \delta) \in \mathcal{C}^{-\frac{2}{3}+\kappa'} \times \mathcal{C}^{-\frac{1}{2}-2\kappa}$ for small $\delta > 0$ and construct a solution on $[0, T_*]$, where $T_*$ is uniform over $\delta$. This implies that for sufficiently small $\delta > 0$, we can construct a solution on $[T_{\mathrm{sur}} - \frac{\delta}{2}, T_{\mathrm{sur}} + \frac{\delta}{2}]$ without explosion at the starting time. This is a contradiction, so we obtain existence and uniqueness up to the survival time with respect to the weaker norms.

Renormalized equation
In this subsection, we show that, for a driving vector constructed from a driving force $\xi \in C_T\mathcal{C}^\beta$ with $\beta > -2$ and from renormalization constants, a solution in the sense of Theorem 4.27 solves the renormalized equation.
We fix complex constants $c_1$, $c_{2,1}$ and $c_{2,2}$, define functions $X^\tau$ as in Table 1 for every graphical symbol $\tau$, and construct the driving vector $X = (X^\tau)_\tau$. The Y/N in the Driver column of Table 1 indicates whether the term $X^\tau$ is included in the definition of a driving vector or not; the terms marked N are used to define other terms. For the definition of $I(*, \bullet)$, see (4.3). Note that we can interpret the products in Table 1 in the usual sense, because $X^\tau$ is a $\mathbb{C}$-valued continuous function under the assumption $\xi \in C_T\mathcal{C}^\beta$ for $\beta > -2$. The number in the Regularity column denotes the exponent $\alpha_\tau$ of the Hölder-Besov space $\mathcal{C}^{\alpha_\tau}$ in which the term $X^\tau$ lives. Precisely, $\alpha_\tau$ means $\alpha_\tau - \kappa$ for any sufficiently small $\kappa > 0$.
For the rest of this proof, we prove (4.19) and (4.20). To show (4.19), we use the definition of $X^\tau$ and Proposition 4.6, from which we obtain the required identities. Applying these identities and the definitions of the components of $X$, we obtain part of (4.19); a similar argument yields the remaining part. Combining them, we see (4.19). From the definition of $\mathrm{com}(v,w)$, we obtain (4.20). The proof is completed.

Proof of convergence of driving vectors
This section is the probabilistic part of the proof of Theorem 4.1. In this section, we construct a driving vector $X \in \mathcal{X}^\kappa_T$ associated to the white noise $\xi$ (Theorem 5.9). After that, we derive the expression of the renormalization constants $c^\epsilon_1$, $c^\epsilon_{2,1}$ and $c^\epsilon_{2,2}$ used in the construction of $X$ (Proposition 5.21) and obtain their divergence rates (Proposition 5.22). First of all, we define the Ornstein-Uhlenbeck-like process $Z = Z(t,x)$, which is the seed of the driving vector. The process $Z$ is defined as the stationary solution to the linear equation, and has the formal expression $Z = I(\xi)$; here $I$ is defined by (4.4). Since $Z$ is a distribution-valued process, we cannot define processes such as $Z^2$ and $Z^2\bar{Z}$ a priori. To define such processes, we consider an approximation $\{Z^\epsilon\}_{0<\epsilon<1}$ of $Z$ and define $Z^2$ and $Z^2\bar{Z}$ as renormalized limits of $(Z^\epsilon)^2$ and $(Z^\epsilon)^2\bar{Z}^\epsilon$ in an appropriate topology, respectively. To this end, we recall that the smeared noise $\{\xi^\epsilon\}_{0<\epsilon<1}$ defined by (4.1) approximates the white noise $\xi$, and we use this approximation to define $Z^\epsilon$. We recall that the Fourier transform $\{\hat{\xi}(k)\}_{k \in \mathbb{Z}^3}$ of $\xi$ has the same law as the white noise associated to $(E, \mathcal{B}, dm)$. Here, $E = \mathbb{R} \times \mathbb{Z}^3$, $\mathcal{B}$ is the product $\sigma$-field of $\mathcal{B}(\mathbb{R})$ and $2^{\mathbb{Z}^3}$, and $dm = ds\,dk$, where $ds$ and $dk$ are the Lebesgue measure on $\mathbb{R}$ and the counting measure on $\mathbb{Z}^3$, respectively. We denote by $\mathcal{B}_*$ the set of all elements $A \in \mathcal{B}$ such that $m(A) < \infty$. We can then define complex multiple Itô-Wiener integrals $J_{p,q}$ and use them to calculate $(Z^\epsilon)^2$ and $(Z^\epsilon)^2\bar{Z}^\epsilon$; see Section A. By using them, we show their convergence after renormalization and construct the driving vector $X$.
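The stationary Ornstein-Uhlenbeck-like structure of $Z$ can be illustrated mode-by-mode: each Fourier mode solves a complex linear SDE driven by an independent complex Brownian motion. The sketch below is our own illustration, not the paper's exact equation: the drift coefficient $\lambda = 1 + (2\pi|k|)^2$ and the noise normalization are simplifying assumptions, and `simulate_ou_mode` is a hypothetical helper. It simulates one mode with Euler-Maruyama, started from the stationary law, and estimates the stationary second moment $E|Z|^2 = 1/(2\lambda)$.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ou_mode(lam, dt=1e-3, n_steps=100_000):
    """Euler-Maruyama for the complex OU equation dZ = -lam*Z dt + dW,
    where W is a complex Brownian motion with E|W_t|^2 = t.  Returns the
    time-averaged second moment, which should approximate 1/(2*lam)."""
    # start from the stationary law: each component has variance 1/(4*lam)
    z = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(4 * lam)
    total = 0.0
    for _ in range(n_steps):
        dw = np.sqrt(dt / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
        z = z - lam * z * dt + dw
        total += abs(z) ** 2
    return total / n_steps

lam = 1.0 + (2 * np.pi) ** 2        # massive-Laplacian eigenvalue for mode |k| = 1
est = simulate_ou_mode(lam)         # should be close to 1/(2*lam)
```

The full process $Z^\epsilon$ is then a (smeared) superposition of such modes, which is exactly what the Itô-Wiener integral representation in this section encodes.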
Throughout this section, we use the notations in Section A and the following: • We use $m = (s,k)$, $n = (t,l)$, $\mu = (\sigma, k)$ and $\nu = (\tau, l)$ to denote generic elements of $E$.

Convergence criteria
In this subsection, we establish convergence criteria of Itô-Wiener integrals.

C α -valued random variables
We want to define a random field of the form $X(x) = J_{p,q}(f(x))$. Here $L^\infty_{p,q}$ is the space of essentially bounded measurable functions defined on $E^{p+q}$. Assume now that the pairing with $f$ is well-defined for every $\phi \in \mathcal{D}$, and define the family of random variables $X(\phi)$. If there exists a $\mathcal{D}'$-valued random variable $\tilde{X}$ such that $\langle \tilde{X}, \phi \rangle = X(\phi)$, then we write $\tilde{X}(x) = J_{p,q}(f(x))$. Now we define $X_j(x) = X((\mathcal{F}^{-1}\rho_j)(x - \cdot))$. If $X = \sum_{j \ge -1} X_j$ converges in $\mathcal{D}'$, it satisfies $\langle X, \phi \rangle = X(\phi)$ for every $\phi \in \mathcal{D}$, so we can write $X(x) = J_{p,q}(f(x))$.
Proposition 5.1. Let $\alpha \in \mathbb{R}$ and $p \in (1, \infty)$, and assume the stated bound on $f$. Proof. By the assumption on $f$, the support of $\mathcal{F}X_j$ is contained in an annulus. Hence we can apply [BCD11, Lemma 2.69] to $X$. By a similar argument to [Hos17a, Lemma 5.3], we see the assertion. (There, the following well-known property of Gaussian measures is used: on each fixed inhomogeneous Wiener chaos, all the $L^p$-norms, $1 < p < \infty$, are equivalent.)

Definitions of driving vectors
Since $Z$ is a distribution-valued process, we cannot define processes such as $Z^2$, $Z\bar{Z}$ and $Z^2\bar{Z}$ a priori. To define such processes, we consider an approximation $\{Z^\epsilon\}_{0<\epsilon<1}$ of $Z$ and define $Z^2$, $Z\bar{Z}$ and $Z^2\bar{Z}$ as renormalized limits of $(Z^\epsilon)^2$, $Z^\epsilon\bar{Z}^\epsilon$ and $(Z^\epsilon)^2\bar{Z}^\epsilon$.
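Schematically, writing $c^\epsilon_1 = E[|Z^\epsilon_t|^2]$ (our abbreviation; the paper's constants are specified precisely in Proposition 5.21), the renormalized limits take the Wick form:

```latex
Z\bar{Z} := \lim_{\epsilon \downarrow 0}\bigl( Z^{\epsilon}\bar{Z}^{\epsilon} - c^{\epsilon}_{1} \bigr),
\qquad
Z^{2} := \lim_{\epsilon \downarrow 0} (Z^{\epsilon})^{2},
\qquad
Z^{2}\bar{Z} := \lim_{\epsilon \downarrow 0}\bigl( (Z^{\epsilon})^{2}\bar{Z}^{\epsilon} - 2\,c^{\epsilon}_{1}\, Z^{\epsilon} \bigr).
```

Note that no counterterm is needed for $(Z^\epsilon)^2$, since $E[(Z^\epsilon)^2] = 0$ for an isotropic complex Gaussian.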

Ornstein-Uhlenbeck like process and its approximations
We give an expression of Z ǫ defined by (5.1) in terms of Itô-Wiener integral.
The constants $c^\epsilon_1$, $c^\epsilon_{2,1}$ and $c^\epsilon_{2,2}$ appear to depend on $(t,x)$ and on the dyadic partition of unity $\{\rho_m\}_{m=-1}^\infty$. However, we will show in Proposition 5.21 that they do not.
In the above, we regard $H^\tau_t$ as a function of $n_1, \dots, n_q$ alone when $p = 0$, and of $m_1, \dots, m_p$ alone when $q = 0$. In particular, $H^\tau_t$ is a constant when $p = q = 0$. We use the same convention for $Q^\tau_0$.
Proof. The assertion follows from Proposition A.1.
Remark 5.10. The limit process $X$ in Theorem 5.9 is given explicitly by generalized Itô-Wiener integrals. Since the expressions of the kernels are independent of $\chi$, so is $X$.
The proof of this theorem will be given in the next section.

Proof of convergence of driving vectors
In this section, we show the convergence $X^{\epsilon,\tau} \to X^\tau$ for all $\tau$. As stated above, these processes have well-behaved kernels; hence, by Proposition 5.4, it suffices to estimate $Q^\tau_0$ and $Q^\tau_0 - Q^{\epsilon,\tau}_0$.

Useful estimates
In order to estimate $Q^\tau_0$ and $Q^\tau_0 - Q^{\epsilon,\tau}_0$, we use the following lemmas many times.
Proof of Proposition 5.18. We focus on τ = .
Since the estimate (5.13) follows easily from (5.10), we show (5.14) and (5.15) for the rest of this subsection.
The assertion is verified. We can show the assertion for 1/(α 2 + iβ 2 ) in the same way.

A Complex multiple Itô-Wiener integral
We recall some notation and properties of complex multiple Wiener integrals from [Itô52]. A complex random variable $Z$ is called isotropic complex normal if $\Re Z$ and $\Im Z$ are independent, have the same law with mean $0$, and $(\Re Z, \Im Z)$ is jointly normal. A system of complex random variables $\{Z_\lambda\}$ is called jointly isotropic complex normal if $\sum_{i=1}^n c_i Z_{\lambda_i}$ is isotropic complex normal for any $n$, any $c_1, \dots, c_n \in \mathbb{C}$, and any indices $\lambda_1, \dots, \lambda_n$. Note that a jointly isotropic complex normal system $\{Z_\lambda\}$ satisfies $E[Z_\lambda Z_\mu] = 0 = E[\bar{Z}_\mu \bar{Z}_\nu]$. The distribution of a jointly isotropic complex normal system $\{Z_\lambda\}$ is uniquely determined by the positive-definite matrix $V_{\lambda\mu} = E[Z_\lambda \bar{Z}_\mu]$ ([Itô52, Theorem 2.3]).
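The defining property $E[Z_\lambda Z_\mu] = 0$ is easy to check empirically for the basic example $Z = (\xi_1 + i\xi_2)/\sqrt{2}$ with $\xi_1, \xi_2$ independent standard normals (our illustrative construction, not one from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
# Isotropic complex normal: independent standard normal real and
# imaginary parts, scaled so that E[Z Zbar] = 1.
z = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

pseudo_cov = np.mean(z * z)        # estimates E[Z Z]    -- vanishes by isotropy
cov = np.mean(z * np.conj(z))      # estimates E[Z Zbar] -- equals 1 here
```

The vanishing of the pseudo-covariance $E[ZZ]$ is exactly what makes the covariance matrix $V_{\lambda\mu} = E[Z_\lambda \bar{Z}_\mu]$ alone determine the law of the system.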
Let $(E, \mathcal{B}, m)$ be a $\sigma$-finite, atomless measure space, and $\mathcal{B}_*$ the set of all elements $A \in \mathcal{B}$ such that $m(A) < \infty$. Then there exists a jointly isotropic complex normal system indexed by $\mathcal{B}_*$, from which the complex multiple Itô-Wiener integral $J_{p,q}$ is constructed. This functional has the property $E[|J_{p,q}(f)|^2] \le p!\,q!\,\|f\|^2_{L^2_{p,q}}$. We defined the Itô-Wiener integral for non-symmetric functions; hence we cannot expect equality in this inequality. Since $\mathcal{S}_{p,q}$ is dense in $L^2_{p,q}$, the integral $J_{p,q}$ extends uniquely to a continuous linear map from $L^2_{p,q}$ to $L^2(P)$. We set $L^2_{0,0} = \mathbb{C}$ and $J_{0,0}(c) = c$. From [Itô52, Theorem 7], we have $E[|J_{p,q}(f)|^2] \le p!\,q!\,\|f\|^2_{L^2_{p,q}}$ and $E[J_{p,q}(f)\overline{J_{r,s}(g)}] = 0$ for $(p,q) \ne (r,s)$.