Cylindrical continuous martingales and stochastic integration in infinite dimensions

In this paper we define a new type of quadratic variation for cylindrical continuous local martingales on infinite dimensional spaces. It is shown that a large class of cylindrical continuous local martingales admits such a quadratic variation. For this new class of cylindrical continuous local martingales we develop a stochastic integration theory for operator valued processes under the condition that the range space is a UMD Banach space. We obtain two-sided estimates for the stochastic integral in terms of the $\gamma$-norm. In the scalar or Hilbert space case this reduces to the Burkholder-Davis-Gundy inequalities. An application to a class of stochastic evolution equations is given at the end of the paper.


Introduction
Cylindrical local martingales play an important role in the theory of stochastic PDEs. In (1.1) the a.s. limit is taken over partitions 0 = t_0 < . . . < t_N = t. The definition (1.1) can be given for any Banach space X, but for technical reasons we will assume that X* is separable. The definition (1.1) is new even in the Hilbert space setting. Our motivation for introducing this class comes from stochastic integration theory, and in that case M is a cylindrical continuous local martingale on a Hilbert space. A more detailed discussion of stochastic integration theory will be given in the second half of the introduction. Previous definitions of a quadratic variation are usually given in the case M is actually a martingale (instead of a cylindrical martingale) and X is a Hilbert space (see [14,60,49,50]). We will give a detailed comparison with these earlier definitions in Section 3.
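In the notation of this paper, the quadratic variation in (1.1) takes, schematically, the following form (a reconstruction on our part, consistent with Remark 3.8 and with formula (2.3) below):

```latex
[\![M]\!]_t \;=\; \lim \sum_{n=1}^{N} \sup_{\|x^*\|\leq 1}
  \bigl( [M x^*]_{t_n} - [M x^*]_{t_{n-1}} \bigr),
```

where [M x*] denotes the quadratic variation of the scalar continuous local martingale M x*, and the a.s. limit is taken over the partitions mentioned above.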
To study SPDEs with a space-time noise one often models the noise as a cylindrical local martingale on an infinite dimensional space. We refer the reader to [13] for the case of cylindrical Brownian motion. In order to study SPDEs one uses a theory of stochastic integration for operator-valued processes Φ : R + × Ω → L(H, X). Our aim is to develop a stochastic integration theory where the integrator M is from M loc var (H) and the integrand takes values in L(H, X), where X is a Banach space which has the UMD property.
Stochastic integration in Banach spaces has an interesting history going back 40 years. Important contributions have been given in several papers and monographs (see [4,7,9,31,57,58,59,70] and references therein). We refer to [56] for a detailed discussion of the history of the subject. Influenced by results of Garling [24] and McConnell [48], a stochastic integration theory for Φ : [0, T] × Ω → L(H, X) with integrator W_H was developed in [53] by van Neerven and Weis and the first author. The theory is naturally restricted to the class of Banach spaces X with the UMD property (e.g. L^q with q ∈ (1, ∞)). The main result is that Φ is stochastically integrable with respect to an H-cylindrical Brownian motion if and only if Φ ∈ γ(0, T; H, X) a.s. Here γ(0, T; H, X) is a generalized space of square functions as introduced in the influential paper [36] (see Subsection 4.1 for the definition). Furthermore, it was shown that $\| \int_0^T \Phi \, dW_H \|_{L^p(\Omega;X)} \leq C \|\Phi\|_{L^p(\Omega;\gamma(0,T;H,X))}$, which can be seen as an analogue of the classical Burkholder-Davis-Gundy inequalities. This estimate is strong enough to obtain sharp regularity results for stochastic PDEs (see [55]), which can be used for instance to extend some of the sharp estimates of Krylov [40] to an L^p(L^q)-setting. [EJP 21 (2016), paper 59]
The aim of our paper is to build a calculus for the newly introduced class of cylindrical continuous local martingales which admit a quadratic variation. Moreover, if M is a cylindrical continuous local martingale on a Hilbert space H, we show that there is a natural analogue of the stochastic integration theory of [53] in which M is the integrator and two-sided Burkholder-Davis-Gundy inequalities hold again. We will derive several consequences of the integration theory, among which a version of Itô's formula.
We finish this introduction with some related contributions to the theory of stochastic integration in an infinite dimensional setting. In [49] Métivier and Pellaumail developed an integration theory for cylindrical martingales which are not necessarily continuous, and two-sided estimates are derived in a Hilbert space setting. A theory for SDEs and SPDEs with semimartingales in Hilbert spaces was developed by Gyöngy and Krylov in [26,27,25]. The integration theory with respect to cylindrical Lévy processes in Hilbert spaces and its application to SPDEs is developed in the monograph by Peszat and Zabczyk [64]. Some extensions to the Banach space setting have been considered, and we refer to [1,2,45,69,68] and references therein. In [16] Dirksen found an analogue of the two-sided Burkholder-Davis-Gundy inequalities in the case the integrator is a Poisson process and X = L^q (see also [17,46,47]). In view of the results of our paper and the previously mentioned results, it is a natural question what structure of a cylindrical noncontinuous local martingale is required to build a theory which allows two-sided estimates for stochastic integrals.
Outline:
• In Section 2 some preliminaries are discussed.
• In Section 3 the quadratic variation of a cylindrical continuous local martingale is introduced.
• In Section 4 the stochastically integrable processes Φ are characterized.
• In Section 5 the results are applied to study a class of stochastic evolution equations.
√(jfj*) is scalarly measurable. So, f^{1/2}x = √(jfj*) (j*)^{-1}x is measurable for all x ∈ ran j*, and because of the boundedness of the operator √(jfj*) (j*)^{-1} and the density of ran j* in X one has that f^{1/2}x is measurable for all x ∈ X.

Supremum of measures
In the main text we will often need explicit descriptions of the supremum of measures. The results are elementary, but we could not find suitable references in the literature.
All positive measures are assumed to take values in [0, ∞] (see [5, Definition 1.6.1]). In other words, a positive measure of a set can be infinite.
Lemma 2.6. Let (µ_α)_{α∈Λ} be positive measures on a measurable space (S, Σ). Then there exists a smallest measure µ̄ such that µ̄ ≥ µ_α for all α ∈ Λ. Moreover,
µ̄(A) = sup Σ_{n=1}^N sup_{α∈Λ} µ_α(A_n),   (2.3)
where the first supremum is taken over all partitions A = ∪_{n=1}^N A_n of A into disjoint sets A_1, . . . , A_N ∈ Σ. From now on we will write sup_{α∈Λ} µ_α = µ̄, where µ̄ is as in the above lemma.
Proof. Denote by ν(A) the right-hand side of (2.3). We first check σ-subadditivity: let B = ∪_{k≥1} B_k, let A_1, . . . , A_N ∈ Σ be disjoint and such that B = ∪_{n=1}^N A_n, and let B_{nk} = A_n ∩ B_k. Then by the σ-subadditivity of the µ_α, we find Taking the supremum over all (A_n), we find ν(B) ≤ Σ_{k≥1} ν(B_k).
To prove the additivity let B, C ∈ Σ be disjoint. By the previous step it remains to show that ν(B) + ν(C) ≤ ν(B ∪ C). Fix ε > 0 and choose disjoint A_1, . . . , A_N ∈ Σ, indices α_1, . . . , α_N ∈ Λ and 1 ≤ M < N such that Finally, we check that ν = µ̄. In order to check this let ν̃ be a measure such that µ_α ≤ ν̃ for all α. Then for each A ∈ Σ we find ν(A) ≤ ν̃(A), and hence ν ≤ ν̃. Thus we may conclude that ν = µ̄.
Remark 2.7. If the conditions of Lemma 2.6 are satisfied and there exists a measure µ such that µ_α ≤ µ for all α, then µ̄ ≤ µ. In particular, if µ is finite, then µ̄ is finite as well.
Proof. Since f is the supremum of countably many measurable functions, it is measurable. Since A ↦ ∫_A f dν defines a measure which dominates all the measures in question, the estimate "≤" in (2.4) follows.
Proof. Let A ∈ Σ. Without loss of generality suppose that µ̄(A) < ∞. Fix ε > 0. According to (2.3) there exist K > 0, a partition A = ∪_{k=1}^K A_k of A into pairwise disjoint sets and an increasing sequence (n_k)_{1≤k≤K} ⊆ N such that
Remark 2.10. Assume that in the situation above S = R and Σ is the Borel σ-algebra. Define µ̃ on segments as follows:
µ̃((a, b]) = sup Σ_{n=1}^N sup_{α∈Λ} µ_α(A_n),   (2.5)
where the first supremum is taken over all partitions (a, b] = ∪_{n=1}^N A_n of the segment (a, b] into pairwise disjoint segments. Then by Carathéodory's extension theorem µ̃ extends to a measure on the Borel σ-algebra. Obviously µ̃ ≥ µ_α for each α ∈ Λ (because (µ̃ − µ_α)((a, b]) ≥ 0 for every segment (a, b], and so, by [5, Corollaries 1.5.8 and 1.5.9], for every Borel set) and µ̃ ≤ µ̄. Consequently, µ̃ = µ̄. Notice that the segments in the partition (a, b] = ∪_{n=1}^N A_n can be chosen with rational endpoints (except of course a and b); the supremum obtained in (2.5) is then the same.
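As a concrete toy illustration of Lemma 2.6 (our own example, not from the paper): for finitely many purely atomic measures on a finite set, the supremum in (2.3) is attained at the partition into singletons, so µ̄ simply takes the atomwise maximum. The brute-force check below compares formula (2.3) against this closed form.

```python
# Toy illustration (ours) of Lemma 2.6: the supremum of two purely atomic
# measures on S = {a, b, c}, computed via the partition formula (2.3)
# and compared with the atomwise maximum.

def partitions(xs):
    """Enumerate all partitions of the list xs into nonempty blocks."""
    if len(xs) == 1:
        yield [xs]
        return
    first, rest = xs[0], xs[1:]
    for smaller in partitions(rest):
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller

mu1 = {"a": 1.0, "b": 0.0, "c": 2.0}
mu2 = {"a": 0.0, "b": 3.0, "c": 1.0}
measures = [mu1, mu2]

def measure(mu, block):
    return sum(mu[s] for s in block)

def sup_measure(A):
    """Right-hand side of (2.3): sup over partitions of sums of suprema."""
    return max(
        sum(max(measure(mu, block) for mu in measures) for block in p)
        for p in partitions(list(A))
    )

# For atomic measures the singleton partition attains the supremum:
singleton_value = sum(max(mu[s] for mu in measures) for s in "abc")
assert sup_measure("abc") == singleton_value == 6.0
# mu_bar dominates each mu_alpha but is at most their sum:
assert all(sup_measure("abc") >= measure(mu, "abc") for mu in measures)
assert sup_measure("abc") <= sum(measure(mu, "abc") for mu in measures)
```

The two measures and all names here are ours; the point is only that (2.3) produces the smallest common dominating measure.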

Cylindrical continuous martingales and quadratic variation
In this section we assume that X is a Banach space with a separable dual space X*. Let (Ω, F, P) be a complete probability space with a filtration F := (F_t)_{t∈R_+} that satisfies the usual conditions, and assume F = σ(∪_{t≥0} F_t). We denote the predictable σ-algebra by P.
In this section we introduce a class of cylindrical continuous local martingales on a Banach space X which have a certain quadratic variation. We will show that it extends several previous definitions from the literature even in the Hilbert space setting.

Definitions
A scalar-valued process M is called a continuous local martingale if there exists a sequence of stopping times (τ n ) n≥1 such that τ n ↑ ∞ almost surely as n → ∞ and 1 τn>0 M τn is a continuous martingale.
Let M (resp. M_loc) be the class of continuous (local) martingales. On M_loc define the translation invariant metric given by ρ(M, N) = Σ_{n≥1} 2^{−n} E[1 ∧ sup_{t∈[0,n]} |M_t − N_t|]. Here and in the sequel we identify indistinguishable processes. One may check that the induced topology coincides with the ucp topology (uniform convergence on compact sets in probability). The following characterization will be used frequently. The space M_loc is a complete metric space. This is folklore, but we include a proof for the convenience of the reader. Let (M_n)_{n≥1} be a Cauchy sequence in M_loc with respect to the ucp topology. By completeness of the ucp topology we obtain an adapted limit M with continuous paths. It remains to show that M is a continuous local martingale.
By taking an appropriate subsequence we may assume without loss of generality that M_n → M a.s. uniformly on compact sets. Define a sequence of stopping times (τ_k)_{k=1}^∞ by τ_k := inf{t ≥ 0 : |M(t)| ≥ k}, and τ_k := ∞ otherwise (i.e. when this set is empty).
Since each (M_n)^{τ_k} is a bounded local martingale with continuous paths, (M_n)^{τ_k} is a martingale as well by the dominated convergence theorem. Letting n → ∞, it follows again from the dominated convergence theorem that M^{τ_k} is a martingale. Therefore, M is a continuous local martingale with localizing sequence (τ_k)_{k=1}^∞.
Let X be a Banach space. A continuous linear mapping M : X* → M_loc is called a cylindrical continuous local martingale. (Details on cylindrical martingales can be found in [3,34].) For a cylindrical continuous local martingale M and a stopping time τ we define M^τ : X* → M_loc by M^τ x* := (M x*)^τ. In this way M^τ is a cylindrical continuous (local) martingale again. Two cylindrical continuous local martingales M and N are called indistinguishable if for all x* ∈ X* the local martingales M x* and N x* are indistinguishable.

Remark 3.2.
On M_loc it is also natural to consider the Émery topology, see [18] and also [38,3,34]. Because of the continuity of the local martingales we consider, it turns out to be equivalent to the ucp topology, and we therefore find it more convenient to use the ucp topology instead.

Remark 3.3.
Since X* is separable, we can find linearly independent vectors (e*_n)_{n≥1} ⊆ X* whose linear span F is dense in X*. For fixed t ≥ 0 and almost all ω ∈ Ω one can define B_t : Ω → B(F, F) such that B_t(x*, y*) = [M x*, M y*]_t for all x*, y* ∈ F. Unfortunately, one cannot guarantee that t ↦ B_t is continuous a.s. Moreover, as we will see in Example 3.26, for X a Hilbert space it may already happen that for a.a. ω ∈ Ω and some t > 0, B_t ∉ B(X*, X*).
In the following definition we introduce a new set of cylindrical martingales for which the above phenomenon does not occur.
Let (Ω, F, P) be a probability space and (S, Σ) a measurable space, and let M^+(S, Σ) be the set of all positive measures on (S, Σ). For nondecreasing right-continuous functions f, g : R_+ → [0, ∞) we denote by µ_f, µ_g the associated Lebesgue-Stieltjes measures. In Example 3.26 we will see that not every cylindrical continuous local martingale is in M_var^loc(X).
To prove that M is a cylindrical continuous local martingale, fix t ≥ 0 and a sequence (x*_n)_{n≥1} such that x*_n → 0. Then the claim follows by (5). The following assertions are equivalent:
1. M ∈ M_var^loc(X);
2. for any dense subset (x*_n)_{n≥1} of the unit ball in X* there exists a nondecreasing right-continuous process F : R_+ × Ω → R_+ such that for a.a. ω ∈ Ω we have µ_{F(ω)} = sup_n µ_{[M x*_n](ω)};
3. for any dense subset (x*_n)_{n≥1} of the unit ball in X* there exists a nondecreasing right-continuous process G : R_+ × Ω → R_+ such that for a.a. ω ∈ Ω the corresponding identity holds, where the outer supremum is taken over all 0 = t_0 < t_1 < . . . < t_J < t with t_j ∈ Q for j ∈ {0, . . . , J}.
The fact that F is increasing is clear from (3.2). The right-continuity of F follows from the fact that µ̄ is a measure.
Fix x* ∈ X* of norm 1. Since M is a cylindrical continuous local martingale, we can find (n_k)_{k≥1} such that x*_{n_k} → x* and a.s. [M x*_{n_k}] → [M x*] uniformly on compact sets, and therefore µ_F ≥ µ_{[M x*]} a.s. We claim that F is a.s. the least function with this property.
Hence µ_F is the smallest measure with the required property. By the definition of a quadratic variation we find that F = [[M]] a.s. Finally, note that by (3.2), F is adapted, and therefore F is predictable by the a.s. pathwise continuity of F.
Remark 3.8. Notice that by the above proposition the quadratic variation of M ∈ M_var^loc(X) has the following form a.s.:
where the limit is taken over all rational partitions 0 = t_0 < . . . < t_N = t and (x*_m)_{m≥1} is a dense subset of the unit ball in X*.
Next we give another characterization of membership in M_var^loc(X).
Theorem 3.9. Let M : X* → M_loc. Then M ∈ M_var^loc(X) if and only if there exists a mapping a_M : R_+ × Ω → B(X*, X*) such that for every x*, y* ∈ X*, a.s. for all t ≥ 0, a_M(t)(x*, y*) = [M x*, M y*]_t; a.s. for all t ≥ 0 the form (x*, y*) ↦ a_M(t)(x*, y*) is bilinear and continuous; and for all t ≥ 0 the following limit exists:
Here the limit is taken over partitions 0 = t_0 < . . . < t_N = t.
If this is the case then G(t) = [[M ]] t a.s. for all t ≥ 0.
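The limit appearing in the theorem can be written schematically as (our hedged paraphrase, matching the form of the quadratic variation in Remark 3.8):

```latex
G(t) \;=\; \lim \sum_{n=1}^{N} \sup_{\|x^*\|\leq 1}
  \bigl( a_M(t_n)(x^*, x^*) - a_M(t_{n-1})(x^*, x^*) \bigr),
```

with the limit taken over partitions 0 = t_0 < . . . < t_N = t as above.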
Proof. Let M ∈ M_var^loc(X). Fix a set (x*_m)_{m≥1} ⊂ X* of linearly independent vectors such that span(x*_m)_{m≥1} is dense in X*. Let F = (y*_n)_{n≥1} ⊂ X* be the Q-span of (x*_m)_{m≥1}. Then there exists a_M : Since by the last inequality a_M is bounded on F × F, it can be extended to X* × X*, and by the continuity of M, a_M(x*, y*) is a version of [M x*, M y*]. To prove (3.4), notice that by the boundedness of a_M and a density argument one can replace the supremum over the unit sphere by the supremum over x* ∈ {y*_n : ‖y*_n‖ ≤ 1}. Then this formula coincides with (3.3), and therefore a.s.
To prove the converse, first note that for all x* ∈ X*, µ_{[M x*]} ≤ ‖x*‖² µ_G a.s., and hence M is a cylindrical continuous local martingale by Remark 3.1. Since a_M is continuous, one can again replace the supremum by the supremum over a countable dense subset of the unit ball. Now one can apply Proposition 3.7 and use (2.5). Then the following assertions hold:
Proof. (1): It is obvious from the scalar theory that for every x* ∈ X* with ‖x*‖ ≤ 1, M^τ x* is a continuous local martingale. Moreover, (2): To check that M^{τ_n} ∈ M_var(X) it remains to show that 1_{τ_n>0} M^{τ_n} x* is a martingale.
By the Burkholder-Davis-Gundy inequality [35, Theorem 26.12] and the continuity of M x*, the martingale property follows from the dominated convergence theorem and the fact that 1_{τ_n>0} M^{τ_n} x* is a local martingale.
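For the reader's convenience, the scalar Burkholder-Davis-Gundy inequalities invoked above read, in their standard form: for every p ∈ (0, ∞) there exist constants c_p, C_p > 0 such that for every continuous local martingale N with N_0 = 0 and every T ≥ 0,

```latex
c_p\, \mathbb{E}\,[N]_T^{p/2}
\;\leq\; \mathbb{E}\sup_{0 \leq t \leq T} |N_t|^p
\;\leq\; C_p\, \mathbb{E}\,[N]_T^{p/2}.
```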
We end this subsection with a simple but important example.
• W_Q(·)x* is a Brownian motion for all x* ∈ X*;
• E W_Q(t)x* W_Q(s)y* = ⟨Qx*, y*⟩ min{t, s} for all x*, y* ∈ X*, t, s ≥ 0.
The operator Q is called the covariance operator of W_Q. Then W_Q ∈ M_var(X); indeed, [W_Q x*]_t = ⟨Qx*, x*⟩ t, so that [[W_Q]]_t = ‖Q‖ t. In the case Q = I is the identity operator on a Hilbert space H, we will call W_H = W_I an H-cylindrical Brownian motion. In this case [[W_H]]_t = t.
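This example can be checked numerically (our own sketch; the matrix Q, the vector x* and all names are ours): realizing W_Q(t)x* = ⟨A B_t, x*⟩ with Q = AA^T and B a standard Brownian motion, the scalar martingale W_Q x* has quadratic variation ⟨Qx*, x*⟩t, while [[W_Q]]_t = ‖Q‖t is the largest eigenvalue of Q times t.

```python
# Numerical sketch (ours, not from the paper): for W_Q x* = <A B_t, x*>
# with Q = A A^T and B a standard Brownian motion, the scalar martingale
# W_Q x* has quadratic variation [W_Q x*]_t = <Q x*, x*> t.
import numpy as np

rng = np.random.default_rng(0)
d, N, t = 3, 20_000, 1.0
A = np.array([[1.0, 0.0, 0.0],
              [0.5, 1.0, 0.0],
              [0.2, 0.3, 1.0]])
Q = A @ A.T

dB = rng.normal(scale=np.sqrt(t / N), size=(N, d))   # Brownian increments
x = np.array([1.0, -1.0, 2.0])
x = x / np.linalg.norm(x)                            # unit functional x*

increments = dB @ A.T @ x          # increments of the scalar martingale W_Q x*
qv_emp = np.sum(increments ** 2)   # empirical quadratic variation at time t
qv_exact = x @ Q @ x * t           # theoretical value <Q x*, x*> t

assert abs(qv_emp - qv_exact) / qv_exact < 0.1
# [[W_Q]]_t = ||Q|| t is the largest eigenvalue of Q times t:
print(np.linalg.eigvalsh(Q).max() * t)
```

With 20,000 steps the relative Monte Carlo error of the empirical quadratic variation is on the order of one percent, so the 10% tolerance above is comfortable.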
Proof of Proposition 3.13. Let Ω_0 ⊂ Ω be a set of full measure such that G(t) from (3.4) is finite for all t ≥ 0 on Ω_0. Then pointwise on Ω_0, for all x*, y* ∈ X*, ⟨A_M x*, y*⟩ is absolutely continuous with respect to [[M]]. Let (e*_n)_{n≥1} ⊆ X* be a set of linearly independent vectors whose linear span F is dense in X*. Then there exists a process Q_M : Ω × R_+ → L(F, X**) such that ⟨Q_M e*_n, e*_m⟩ is predictable for each n, m ≥ 1 as k → ∞. Since the left-hand side is predictable, the right-hand side has a predictable version.
for all m, n ≥ 1. Therefore on Ω_0 we find that for all m, n ≥ 1, Taking the supremum over all n, m ≥ 1, it follows that ‖Q_M‖ ≤ 1 on S. On the complement of S we set Q_M = 0. Since F is dense in X*, Q_M has a unique continuous extension to a mapping in L(X*, X**).

Remark 3.15.
Assume that X * * is also separable (e.g. X is reflexive). In this case it follows from the Pettis measurability theorem that the functions A M x * and Q M x * are strongly progressively measurable for each x * ∈ X * (see e.g. [32]). Moreover, if φ : R + × Ω → X * is strongly progressively measurable, then A M φ and Q M φ are strongly progressively measurable as well.
Remark 3.16. Let H be a separable Hilbert space and X be a separable Banach space.
In [50,60,61] cylindrical continuous martingales are considered for which the quadratic variation operator has the form and hence the identity follows from Lemma 2.8, Remark 2.10, Theorem 3.9 and the separability of X * .

Quadratic variation for local martingales
In this section we will study the case where the cylindrical local martingale actually comes from a local martingale on X. We discuss several examples and compare our definition of quadratic variation from Definition 3.4 with other definitions. In order to distinguish between martingales and cylindrical martingales we use the notation M for an X-valued martingale.
For a continuous local martingale M : R_+ × Ω → X we define the associated cylindrical continuous local martingale M by M x* := ⟨M, x*⟩ for x* ∈ X*. It is a cylindrical continuous local martingale since if (x*_n)_{n≥1} ⊆ X* vanishes as n → ∞, then for all t ≥ 0 and almost all ω, Below we explain several situations in which one can check that the associated cylindrical continuous local martingale M is an element of M_var^loc(X). In general this is not true (see Example 3.25).
First we recall some standard notation in the case H is a separable Hilbert space. Let M : R_+ × Ω → H be a continuous local martingale. Then the quadratic variation is defined by
[M]_t = lim Σ_{n=1}^N ‖M_{t_n} − M_{t_{n−1}}‖²,   (3.7)
where the limit in probability is taken over partitions 0 = t_0 < . . . < t_N = t. To see that the associated operator ⟨⟨M⟩⟩_t is well-defined and bounded of norm at most [M]_t, choose partitions with decreasing mesh sizes such that the convergence in (3.7) holds on a set of full measure Ω_0. Then a polarization argument shows that pointwise on Ω_0, The operator ⟨⟨M⟩⟩_t is positive, and it follows from (3.8) that for any orthonormal basis (h_n)_{n≥1} of H, Σ_n ⟨⟨⟨M⟩⟩_t h_n, h_n⟩ = [M]_t. Hence a.s. for all t ≥ 0, ⟨⟨M⟩⟩_t is a trace class operator and Tr ⟨⟨M⟩⟩_t = [M]_t. As in Proposition 3.13 one sees that there is a q_M : R_+ × Ω → L(H) such that for all g, h ∈ H, ⟨q_M g, h⟩ is predictable and a.s.
We claim that M ∈ M loc var (H). As before a.s.
Now we find that almost surely, for all h, g ∈ H and t ≥ 0, Moreover, an approximation argument yields the corresponding identity for all elementary progressive processes φ, ψ. For a general Banach space such quadratic variations are not defined at all.
Let X be a Banach space and let M : R_+ × Ω → X be a continuous local martingale. We say that M has a scalar quadratic variation (see [14, Definition 4]) if the corresponding process has a ucp limit as ε → 0. In this case the limit will be denoted by [M]_t. Since in the Hilbert space case the above limit coincides with the previously defined quadratic variation, there is no risk of confusion here (see [14, Remark 4]). Outside the Hilbert space setting it is not so simple to determine whether the scalar quadratic variation exists. Also note that the definition cannot be extended to cylindrical continuous local martingales. Indeed, choose a sequence ε_n → 0 such that the limit in (3.10) converges uniformly on compact intervals on a set of full measure Ω_0. Then for With a similar argument one sees that the existence of the tensor quadratic variation of [14] implies the existence of the scalar quadratic variation. It is not clear whether the scalar quadratic variation (or its tensor quadratic variation) exists in general.

Cylindrical martingales and stochastic integrals
Let X, Y be two Banach spaces, x* ∈ X*, y ∈ Y. We denote by x* ⊗ y ∈ L(X, Y) the linear operator x* ⊗ y : x ↦ ⟨x*, x⟩y. Let X be a Banach space. A process Φ : R_+ × Ω → L(H, X) is called elementary progressive if it is of the form
Φ = Σ_{n=1}^N Σ_{m=1}^M 1_{(t_{n−1},t_n]×B_{mn}} Σ_{k=1}^K h_k ⊗ x_{kmn},
where 0 ≤ t_0 < . . . < t_N < ∞, for each n = 1, . . . , N the sets B_{1n}, . . . , B_{Mn} ∈ F_{t_{n−1}}, and the vectors h_1, . . . , h_K are orthogonal. For each elementary progressive Φ we define the stochastic integral with respect to M ∈ M_var^loc(H) as the element of L^0(Ω; C(R_+; X)) given by
∫_0^t Φ dM = Σ_{n=1}^N Σ_{m=1}^M Σ_{k=1}^K 1_{B_{mn}} ((M h_k)(t_n ∧ t) − (M h_k)(t_{n−1} ∧ t)) x_{kmn}.   (3.11)
Often we will write Φ · M for the process ∫_0^· Φ(s) dM(s).
One can also prove that in the situation above, for each stopping time τ: If the domain of φ is contained in a fixed finite dimensional subspace H_0 ⊆ H, then (3.13) is an obvious multidimensional corollary of [35, Proposition 17.15]. For general φ it follows from an approximation argument. Indeed, choosing appropriate φ_n and using (3.13) for φ_n together with (3.12) and Remark 3.1, one obtains (3.13) for general φ.
Remark 3.23. It follows from Remark 3.1 that for each finite dimensional subspace X_0 ⊆ X the definition of the stochastic integral can be extended to all strongly progressively measurable processes Φ : In order to deduce this result from the one-dimensional case one can approximate Φ by a process which is supported on a finite dimensional subspace of H and use Remark 3.1 together with (3.12) and the fact that X_0 is isomorphic to R^d for some d ≥ 1, since it is finite dimensional. The space of stochastically integrable Φ will be characterized in Theorem 4.1.
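The one-dimensional building block behind these approximation arguments can be sketched numerically (our own illustration, with M = W_H an H-cylindrical Brownian motion, H = R², X = R and a deterministic step integrand; all names and numerical values are ours). For this choice the sum in (3.11) is an ordinary Itô integral and satisfies the isometry E|∫φ dW_H|² = ∫‖φ(s)‖²_H ds:

```python
# Hedged numerical sketch (ours): the elementary stochastic integral
# sum_n phi(t_{n-1}) . (W(t_n) - W(t_{n-1})) for a deterministic step
# integrand with values in H = R^2, integrator an H-cylindrical Brownian
# motion, range space X = R.  The Ito isometry is checked by Monte Carlo.
import numpy as np

rng = np.random.default_rng(1)
t_grid = np.array([0.0, 0.3, 0.7, 1.0])
phi = np.array([[1.0, 0.0], [2.0, -1.0], [0.0, 1.0]])  # value on each interval
n_paths = 100_000

dt = np.diff(t_grid)                      # interval lengths
dW = rng.normal(size=(n_paths, len(dt), 2)) * np.sqrt(dt)[None, :, None]
integral = np.einsum('pnk,nk->p', dW, phi)  # sum_n phi_n . (W(t_n)-W(t_{n-1}))

second_moment = np.mean(integral ** 2)
isometry = np.sum(np.sum(phi ** 2, axis=1) * dt)   # int_0^1 |phi(s)|_H^2 ds

assert abs(second_moment - isometry) / isometry < 0.05
assert abs(np.mean(integral)) < 0.05               # martingale: mean zero
```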

Proposition 3.24.
Let H be a Hilbert space and let N ∈ M_var^loc(H). Let Φ : R_+ × Ω → L(H, X) be such that for each x* ∈ X*, Φ*x* is progressively measurable, and assume that for all x* ∈ X*, (3.14) holds. In this section there are two definitions of a stochastic integral (see (3.11) and (3.14)). One can check that both integrals coincide, in the sense that (3.14) is the cylindrical continuous martingale associated to the one given in (3.11).
Proof. We first show that M is a cylindrical continuous local martingale. Clearly, each M x* is a continuous local martingale. It remains to prove the continuity of x* ↦ M x* in the ucp topology. Fix T > 0. Let Ω_0 be a set of full measure such that for ω ∈ Ω_0, and hence by the remarks in Subsection 3.1 also M x*_n → M x* uniformly on [0, T] in probability. Since T > 0 was arbitrary, we find that M is a cylindrical continuous local martingale.
To prove the equivalence it suffices to observe that
Partition [0, 1] into pairwise disjoint sets. Let ψ : hence M is a cylindrical continuous local martingale. On the other hand, due to (3.15) one concludes that
Consequently, M ∉ M_var^loc(H).

Quadratic Doléans measure
Recall from Definition 3.10 that µ_M is the cylindrical Doléans measure associated with M. Since it depends only on [[M]], some information is lost. In the next definition we introduce a bilinear-form-valued measure associated to M (see [49, Section 15.3]).
for every predictable rectangle F × (s, t] and every x*, y* ∈ X*. Note that µ̃_M defines a vector measure with variation (see [15,76]) given by
where the supremum is taken over all partitions A = ∪_{n=1}^N A_n. If |µ̃_M|([0, ∞) × Ω) < ∞, then it is a standard fact that the variation |µ̃_M| defines a measure again and |µ̃_M| ≪ µ_M (see [15]).
Proposition 3.28. Assume M is a cylindrical continuous martingale such that M x*(t) ∈ L²(Ω) for all x* ∈ X* and t ≥ 0. Then the following assertions are equivalent: In that case dµ̃_M = Q_M dµ_M in a weak sense, namely
Lemma 3.29. Let (f_n)_{n≥1} be a sequence of continuous predictable increasing processes on R_+. Suppose that for all n ≥ 1 the corresponding Doléans measure µ_n of f_n exists. Assume also that µ = sup_{n≥1} µ_n is of bounded variation. Then F, defined by
(3.20)
where the limit is taken over all partitions 0 = t_0 < . . . < t_K = t, is a predictable continuous increasing process, and its Doléans measure exists and equals µ.
where the limit is taken over all partitions 0 = t_0 < . . . < t_K = t. Then F_N is a predictable process by Remark 2.10. Moreover, it is continuous, since the corresponding Lebesgue-Stieltjes measure is nonatomic by (2.3). Let us consider the corresponding Doléans measure ν_N. Since ν_N ≥ µ_n for each n ≤ N, we have ν_N ≥ sup_{1≤n≤N} µ_n. Also notice that ν_N ≤ Σ_{1≤n≤N} µ_n.
It remains to show "≤" in (3.21). First of all, by Remark 2.10, a.s. µ_{F_N(ω)} = sup_{1≤n≤N} µ_{f_n(ω)}. By Lemma 2.8, a.s. the maximum of the Radon-Nikodym derivatives is predictable and continuous. Therefore, the sets are in the predictable σ-algebra P. Redefine these sets to make them disjoint: Clearly this extends to all B ∈ P. Now it follows that for all By Lemma 2.9, pointwise on R_+ × Ω, a.s. Moreover, F is predictable, as it is the pointwise limit of the predictable processes F_N. By the monotone convergence theorem and (3.23) we find that for all 0 ≤ s < t and A ∈ F_s, which completes the proof.
Proof of Proposition 3.28.
(1)⇒(2): Assume (1). Let x*, y* ∈ X*. Then for A = (a, b] × F with b > a ≥ 0 and F ∈ F_a, it follows from Proposition 3.13 that
Conversely, let (x*_n)_{n≥1} ⊆ X* be such that (x*_1, . . . , x*_n) are linearly independent for any n ≥ 1, and let E denote their Q-span. By a standard argument one can construct a Q-bilinear mapping a_M : Let (y*_n)_{n≥1} ⊆ X* be the intersection of E with the unit ball in X*. Then by Definition 3.27 and (3.18), |µ̃_M| = sup_n µ_{M y*_n}, where µ_{M x*} is the Doléans measure of M x* for a given x* ∈ X*. Now by Lemma 3.29 one derives that there exists a predictable continuous increasing process F : R_+ × Ω → R such that a.s.
where the limit is taken over all partitions 0 = t_0 < . . . < t_K = t. In particular, a_M(t)(y*_n, y*_n) ≤ F(t) a.s., and hence, as in the first part of the proof of Theorem 3.9, one sees that a_M(t) extends to a bounded bilinear form on X* × X* a.s. Thanks to Remark 3.1 and the fact that M is a cylindrical continuous local martingale, one obtains that for each x*, y* ∈ X*, a_M(x*, y*) and [M x*, M y*] are indistinguishable. Then
and thanks to Theorem 3.9 we conclude that the quadratic variation of M exists.
The final identity |µ̃_M| = µ_M follows from Lemma 2.8, (3.19) and the fact that
which was proved in Proposition 3.13.

Covariation operators
In this subsection we assume that both X and Y have separable dual spaces. We introduce a covariation operator for M_1 ∈ M_var^loc(X) and M_2 ∈ M_var^loc(Y) and develop some calculus results for these operators.
be defined on the same probability space. Then there exists a covariation operator A M1,M2 : R + × Ω → L(X * , Y * * ) such that for each x * ∈ X * , y * ∈ Y * a.s.
To construct such a version we can argue as in the first part of the proof of Theorem 3.9.
Moreover, for M_1, M_2 ∈ M_var^loc(X), a.s. for all t ≥ 0 the triangle inequality holds:
(3.25)
The above metric does not necessarily turn M_var^loc(X) into a topological vector space when X is infinite dimensional. However, if the martingales are assumed to start at zero, then it becomes a topological vector space.
Proof. For M_1, M_2 ∈ M_var^loc(X) one can easily prove that M_1 + M_2 ∈ M_var^loc(X).
Indeed, by the definition of the quadratic (co)variation operator and linearity for all x * , y * ∈ X * , t ≥ 0, a.s.
which proves (3.25). Since it is clear that M_var^loc(X) is closed under multiplication by scalars, it follows that M_var^loc(X) is a vector space. To prove the completeness, let (M_n)_{n≥1} ⊆ M_var^loc(X) be a Cauchy sequence; then (M_n x*)_{n≥1} is a Cauchy sequence in M_loc for all x* ∈ X*, and so by Remark 3.1 and completeness it converges to a continuous local martingale M x* in the ucp topology. Let (x*_m)_{m=1}^∞ ⊂ X* be a dense subset of X*. Then by a diagonalization argument there exists a subsequence (n_k)_{k≥1} such that [M_{n_k} x*_m]_t converges a.s. for any t ≥ 0 and m ≥ 1, and [[M_{n_k}]]_t has an a.s. limit for all t ≥ 0 (recall that due to
where the limit is taken over all partitions 0 = t_0 < . . . < t_L = t. Then by Lemma 2.9 a.s.
where ℓ^∞ is the space of bounded sequences. Then obviously by (3.25) a.s.
so that this sequence is a Cauchy sequence in ℓ^∞. Now one can easily show that As a positive definite bilinear form, the covariation operator has the following properties a.s. for all t > s ≥ 0 and x* ∈ X*: The limit exists a.s. thanks to the Cauchy-Schwarz inequality and the fact that a.s., where the last claim is an easy consequence of (3.26).
The process [[M_1, M_2]] is continuous a.s. and has some of the properties of a covariation process of real-valued martingales. For instance, one can prove using formula (3.26) that for all t > s ≥ 0 The proof is analogous to that of [66, Theorem II.25], for which one has to apply inequalities of the form (3.26).
Recall from (3.14) that for suitable Φ and M ∈ M_var^loc(H), (Φ · M) ∈ M_var^loc(X) is given by
Proof. Fix t ≥ 0, x* ∈ X* and y* ∈ Y*. Put φ = Φ*x*. First suppose that there exists n > 0 such that φ takes its values in a finite-dimensional subspace span(h_1, . . . , h_n) ⊆ H. Then by the bilinearity of the covariation process, the definition of Q_{M_1,M_2}, and [35, Theorem 17.11], In the general case one can approximate φ by P_n φ, where P_n ∈ L(H) is the orthogonal projection onto span(h_1, . . . , h_n), and derive the desired result by using (3.12) and inequalities of the type (3.26)-(3.27).
One can prove full analogues of [35, Lemma 17.10] and [35, Theorem 17.11] using the same methods as in the proof above: assume that Φ*_1 x*, Φ*_2 y* are strongly progressively measurable processes for each x* ∈ X*, y* ∈ Y* and that
a.s. Then for all t ≥ 0 and for all x* ∈ X*, y* ∈ Y* one has To see the analogy, note that by the equation above, in a weak sense,
which extends the scalar case.

Stochastic integration with respect to cylindrical continuous local martingales
Let H be a separable Hilbert space and let X be a separable Banach space with a separable dual space. In the previous section we introduced stochastic integrals as cylindrical continuous local martingales. Often one wants the stochastic integral to be an actual local martingale instead of a cylindrical one. In this section we characterize when this is the case and prove two-sided estimates for the stochastic integral, where Φ is an L(H, X)-valued H-strongly progressively measurable process and M ∈ M_var^loc(H) (see Definition 3.4). For this characterization we need the language of γ-radonifying operators and the geometric condition UMD on the Banach space X. Both will be introduced in the next two subsections.
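Schematically, the two-sided estimate proved in this section has the following shape (a hedged paraphrase on our part; the precise formulation, including the predictable operator process q_M and the measure generated by [[M]], is the content of Theorem 4.1):

```latex
\mathbb{E} \sup_{t \geq 0} \Bigl\| \int_0^t \Phi \, dM \Bigr\|^p
\;\eqsim_{p,X}\;
\mathbb{E}\, \bigl\| \Phi\, q_M^{1/2} \bigr\|^p_{\gamma(L^2(\mathbb{R}_+, [\![M]\!]; H), X)}.
```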

γ-radonifying operators
We refer to [33], [51] and [36] and references therein for further details. Let (γ_n)_{n≥1} be a sequence of independent standard Gaussian random variables on a probability space (Ω', F', P') (we reserve the notation (Ω, F, P) for the probability space on which our processes live) and let H be a separable real Hilbert space. A bounded operator R ∈ L(H, X) is said to be γ-radonifying if for some (or equivalently for each) orthonormal basis (h_n)_{n≥1} of H the Gaussian series Σ_{n≥1} γ_n R h_n converges in L²(Ω'; X). We then define ‖R‖_{γ(H,X)} := (E‖Σ_{n≥1} γ_n R h_n‖²)^{1/2}. This number does not depend on the sequence (γ_n)_{n≥1} and the basis (h_n)_{n≥1}, and defines a norm on the space γ(H, X) of all γ-radonifying operators from H into X. Endowed with this norm, γ(H, X) is a Banach space, which is separable if X is separable. For later reference we note that the convergence of Σ_{n≥1} γ_n R h_n in L^p(Ω'; X) with p ∈ (0, ∞), in probability, and a.s. can all be shown to be equivalent.
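When X is also a Hilbert space the γ-norm coincides with the Hilbert-Schmidt norm (see below), which makes it easy to check the definition by direct simulation (our own sketch; the matrix R and all names are ours):

```python
# Sketch (ours): for Hilbert spaces the gamma-norm is the Hilbert-Schmidt
# (Frobenius) norm:  E || sum_n gamma_n R h_n ||^2 = ||R||_HS^2.
import numpy as np

rng = np.random.default_rng(2)
R = np.array([[1.0, 2.0], [0.0, 1.0], [3.0, -1.0]])   # R : R^2 -> R^3
n_samples = 200_000

g = rng.normal(size=(n_samples, 2))        # Gaussian coefficients gamma_n
vectors = g @ R.T                          # sum_n gamma_n R e_n, per sample
gamma_norm_sq = np.mean(np.sum(vectors ** 2, axis=1))
hs_norm_sq = np.sum(R ** 2)                # squared Hilbert-Schmidt norm

assert abs(gamma_norm_sq - hs_norm_sq) / hs_norm_sq < 0.05
```

With 200,000 samples the Monte Carlo error is well below one percent, so the tolerance above is comfortable.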
If R ∈ γ(H, X), then ‖R‖ ≤ ‖R‖_{γ(H,X)}. If X is a Hilbert space, then γ(H, X) = L₂(H, X) isometrically. Let G be another Hilbert space and let Y be another Banach space. Then by the so-called ideal property (see [33]) the following holds: for all S ∈ L(G, H) and all T ∈ L(X, Y) we have T RS ∈ γ(G, Y) and ‖T RS‖_{γ(G,Y)} ≤ ‖T‖ ‖R‖_{γ(H,X)} ‖S‖. Let µ be a measure on a Borel set J ⊆ R_+ with a σ-field A such that L²(J, µ) is separable, and let p ∈ [1, ∞). We say that a function Φ : J → L(H, X) belongs to L^p(J, µ; H) scalarly if for all x* ∈ X*, Φ*x* ∈ L^p(J, µ; H). A function Φ : J → L(H, X) is said to represent an operator R ∈ γ(L²(J, µ; H), X) if Φ belongs to L²(J, µ; H) scalarly and for all x* ∈ X* and f ∈ L²(J, µ; H) we have ⟨Rf, x*⟩ = ∫_J ⟨f(t), Φ*(t)x*⟩ dµ(t). The above notion will be abbreviated by Φ ∈ γ(J, µ; H, X). In the case X is a Hilbert space, one has γ(J, µ; H, X) = L²(J, µ; L₂(H, X)) isometrically, where L₂(H, X) denotes the space of Hilbert-Schmidt operators from H to X.
In the case that µ is the Lebesgue measure, the above notion of representability reduces to the one given in [53].

The UMD property
The results will be stated for the important class of UMD Banach spaces; we refer to [11], [32], [71] for details. A Banach space X is called a UMD space if for some (equivalently, for all) p ∈ (1, ∞) there exists a constant β > 0 such that for every n ≥ 1, every martingale difference sequence (d_j)_{j=1}^n in L^p(Ω; X), and every {−1, 1}-valued sequence (ε_j)_{j=1}^n we have
$$\Big\|\sum_{j=1}^{n}\varepsilon_j d_j\Big\|_{L^p(\Omega;X)} \le \beta\,\Big\|\sum_{j=1}^{n} d_j\Big\|_{L^p(\Omega;X)}.$$
The infimum over all admissible constants β is denoted by β_{p,X}. UMD spaces are always reflexive. Examples of UMD spaces include L^q-spaces, Besov spaces and Sobolev spaces in the reflexive range. Examples of spaces without the UMD property include all nonreflexive spaces, e.g. L^1(0, 1) and C([0, 1]).
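For orientation, every Hilbert space is a UMD space with constant 1; a one-line computation using the orthogonality of martingale differences:

```latex
% For a Hilbert space X and a martingale difference sequence (d_j):
\mathbb{E}\Big\|\sum_{j=1}^{n}\varepsilon_j d_j\Big\|^2
  = \sum_{j=1}^{n}\mathbb{E}\|d_j\|^2
  = \mathbb{E}\Big\|\sum_{j=1}^{n} d_j\Big\|^2 ,
% since E\langle d_i, d_j\rangle = 0 for i \neq j (tower property),
% so the defining inequality holds with β = 1 regardless of the signs ε_j.
```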

Remark 4.2.
The case of scalar-valued continuous local martingales in Theorem 4.1 was considered in [77], where the Dambis–Dubins–Schwarz theorem is applied to write the continuous local martingale as a time-changed Brownian motion. Unfortunately, in the vector-valued setting this technique breaks down, as one cannot perform a different time change in infinitely many directions. The proof of Theorem 4.1 will be given in Subsection 4.6, after we have introduced the techniques we will use.

Time transformations
A nondecreasing, right-continuous family of stopping times τ_s : Ω → [0, ∞], s ≥ 0, will be called a random time-change. If additionally τ_s : Ω → [0, ∞), then (τ_s)_{s≥0} will be called a finite random time-change. If F is right-continuous, then according to [35, Lemma 7.3] the same holds true for the induced filtration G = (G_s)_{s≥0} = (F_{τ_s})_{s≥0} (see [35, Chapter 7]). An M ∈ M loc var (X) is said to be τ-continuous if for each x* ∈ X*, Mx* is a.s. constant on every interval [τ_{s−}, τ_s], s ≥ 0.

Lemma (Kazamaki). Let τ be a finite random time-change and let M ∈ M loc var (H) with respect to F. Let X_0 be a finite-dimensional Banach space. Assume also that M is τ-continuous. Let Φ : R_+ × Ω → L(H, X_0) be F-progressively measurable and stochastically integrable with respect to M.

The next lemma is a γ-version of this substitution result and can be proved as in [77, Lemma 3.5], where the case H = R was considered.
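A hedged sketch of the substitution rule underlying Kazamaki's lemma, in its standard form; the notation N := M ∘ τ for the time-changed martingale is ours:

```latex
% If M is τ-continuous, then N := M \circ \tau is a continuous G-local
% martingale and, a.s. for all s \ge 0,
\int_0^{\tau_s} \Phi(u)\, dM(u) \;=\; \int_0^{s} \Phi(\tau_r)\, dN(r).
```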

Representation and cylindrical Brownian motion
The next theorem is an infinite time interval version of [53, Theorem 3.6]; the second part is modified thanks to [62, Theorem 5.1], and the last part is modified by [53, Theorem 4.4] and [12, Theorem 5.4]. It will play an important role in the proof of Theorem 4.1. It might be instructive for the reader to check that it is exactly Theorem 4.1 in the special case that M is a cylindrical Brownian motion.

Theorem 4.6. Let X be a UMD space. For a strongly measurable and adapted process Φ : R_+ × Ω → L(H, X) which is scalarly in L^2(R_+; H) a.s., the following assertions are equivalent:
(1) there exists a sequence (Φ_n)_{n≥1} of elementary progressive processes such that: (i) for all x* ∈ X*, Φ_n* x* → Φ* x* in L^0(Ω; L^2(R_+; H)); (ii) there exists a process ζ ∈ L^0(Ω; C_b(R_+; X)) such that ζ = lim_{n→∞} ∫_0^· Φ_n(s) dW_H(s) in L^0(Ω; C_b(R_+; X));
(2) there exists an a.s. bounded process ζ : R_+ × Ω → X such that for all x* ∈ X*, a.s. for all t ≥ 0, ⟨ζ(t), x*⟩ = ∫_0^t Φ*(s)x* dW_H(s).
In this case the processes ζ in (1) and (2) coincide. Moreover, if X is a Hilbert space, then these conditions hold for each progressively strongly measurable Φ which is in L^2(R_+; L_2(H, X)) a.s.
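The two-sided estimate accompanying this equivalence in the UMD setting has, for p ∈ (0, ∞), the following shape (a sketch; here ‖Φ‖_γ denotes the γ-norm of the operator represented by Φ, and the constants depend only on p and X):

```latex
c_{p,X}\,\mathbb{E}\,\|\Phi\|_{\gamma(L^2(\mathbb{R}_+;H),X)}^{p}
\;\le\;
\mathbb{E}\sup_{t\ge 0}\Big\|\int_0^t \Phi(s)\,dW_H(s)\Big\|^{p}
\;\le\;
C_{p,X}\,\mathbb{E}\,\|\Phi\|_{\gamma(L^2(\mathbb{R}_+;H),X)}^{p}.
```

In the scalar or Hilbert case this reduces to the Burkholder–Davis–Gundy inequalities mentioned in the abstract.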

Proof of the main characterization Theorem 4.1
To prove the result we will reduce to Theorem 4.6 by using the time transformation from Corollary 4.4 and the representation of Proposition 4.7.
Proof of Theorem 4.1. Define τ : Ω × R_+ → [0, ∞] as in (4.9). It follows that (ζ_{Ψ_n Q^{1/2}_N})_{n≥1} is a Cauchy sequence in L^0(Ω; C_b(R_+; X)), and hence it converges to some ζ_χ ∈ L^0(Ω; C_b(R_+; X)). By (4.5), Theorem 4.1 (1)(i), the special choice of Ω and Fubini's theorem, it follows that the required identity holds for every x* ∈ X*. Since Ψ_n Q^{1/2}_N h takes values in a finite-dimensional subspace of X for each h ∈ H, one can approximate (Ψ_n Q^{1/2}_N)_{n≥1} to obtain a sequence of elementary progressive processes (χ_n)_{n≥1} that satisfies Theorem 4.6 (1). Let n be fixed. Without loss of generality one can suppose that χ_n is of a special elementary form. Fix ω ∈ Ω. Let P_0 : R_+ × Ω → L(H) be the projection onto ran Q^{1/2}_N(t, ω). It is easy to check that P_0 is scalarly progressively measurable and ‖P_0‖ ≤ 1. By the ideal property (4.1) one has P-a.e.
Since Ψ and Q^{1/2}_N are progressively strongly measurable processes, one concludes the desired identity; here the second identity follows from Corollary 4.4.
It follows that ( t 0 Φ n (s) dM (s)) n≥1 is a Cauchy sequence in L 0 (Ω; C b (R + ; X)). Now as in the proof of the previous step one may conclude (1) (ii) via an approximation argument.
(2, Φ) ⇒ (2, Ψ): Let ζ : R_+ × Ω → X be the given stochastic integral process and let ζ_Ψ : R_+ × Ω → X be defined accordingly. On the other hand, since ζ is a.s. bounded, the same holds for ζ_Ψ. Therefore, Theorem 4.6 (2) holds for Ψ Q^{1/2}. Then ζ ∈ L^0(Ω; C_b(R_+; X)), and it follows from Proposition 4.7 that for all x* ∈ X* and all t ∈ R_+, a.s. the corresponding identity holds; here the last identity follows from Corollary 4.4.

By the above proof and a limiting argument in L^0(Ω; C_b(R_+; X)) one obtains the following theorem, which can be seen as a vector-valued generalization of the famous Dambis–Dubins–Schwarz theorem (see [35, Theorem 18.4] for the isotropic case in finite dimensions).

Theorem 4.9. Let H be a Hilbert space, let X be a UMD Banach space, let M ∈ M loc var (H), and let (τ_s)_{s≥0} be the time change defined in (4.9). Then there exists an H-cylindrical Brownian motion W_H, not depending on X, such that for any Φ : R_+ × Ω → L(H, X) which is stochastically integrable with respect to M, one has a.s.
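For orientation, Theorem 4.9 extends the scalar Dambis–Dubins–Schwarz theorem, which in its classical form reads (on an enlarged probability space if [M]_∞ < ∞):

```latex
% For a continuous local martingale M with M(0) = 0 there exists a
% Brownian motion W such that
M(t) \;=\; W\big([M]_t\big), \qquad t \ge 0,
% equivalently, with \tau_s := \inf\{t \ge 0 : [M]_t > s\},
% one has W(s) = M(\tau_s).
```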

Further consequences
During the proof of Theorem 4.1 we obtained the following corollary, which is completely analogous to [77, Corollary 3.9]. Using this corollary one can prove the following analogue of [77, Corollary 3.10]:

Corollary 4.11. Let X be a UMD space. For each n ≥ 1 let Φ_n : R_+ × Ω → L(H, X) be stochastically integrable and let ζ_n ∈ L^0(Ω; C_b(R_+; X)) denote its stochastic integral.
Proof. By Hahn–Banach and strong measurability, it is enough to show that the identity holds for each x* ∈ X*, a.s. in A for all t ≥ 0. But we know that by Remark 3.22 it holds a.s. on A.

Proposition 4.14. Let X be a UMD Banach function space over a σ-finite measure space (S, Σ, µ) and let p ∈ (0, ∞). Let Φ : R_+ × Ω → L(H, X) be scalarly progressive and assume that there exists a P × Σ-measurable process φ : R_+ × Ω × S → H such that for all h ∈ H and t ≥ 0 we have (Φ(t)h)(·) = ⟨φ(t, ·, ·), h⟩_H, where the equality holds in X. Then Φ is stochastically integrable with respect to M if and only if, almost surely, the corresponding square-function expression is finite.

Proof. To prove this statement note that one argues as in [52, Proposition 6.1], and hence the result follows from Theorem 4.1.

Itô's formula
We will say that Φ ∈ γ_loc(L^2(R_+; H), X) if Φ is locally in the class γ(L^2(R_+; H), X), i.e. there is a sequence of stopping times σ_n ↑ ∞ a.s. such that Φ1_{[0,σ_n]} ∈ γ(L^2(R_+; H), X) a.s. for each n ≥ 1. A function f : R_+ × X → Y is said to be of class C^{1,2} if it is differentiable in the first variable and twice Fréchet differentiable in the second variable, and the functions f, D_1 f, D_2 f and D_2^2 f are continuous on R_+ × X. For R ∈ γ(H, X) and T ∈ B(X, X), the space of bounded bilinear forms on X × X, define
$$\operatorname{Tr}_R T := \sum_{n\ge 1} T(Rh_n, Rh_n),$$
where (h_n)_{n≥1} is any orthonormal basis for H (see [10, Lemma 2.3] for details). The following version of Itô's formula holds.

Theorem 4.16. Let H be a Hilbert space, let X and Y be UMD Banach spaces, let M ∈ M loc var (H), and let A : R_+ × Ω → R be adapted, a.s. continuous and locally of finite variation. Assume that f : R_+ × X → Y is of class C^{1,2}. Let Φ : R_+ × Ω → L(H, X) be H-strongly progressively measurable and stochastically integrable with respect to M, and assume that ΦQ^{1/2} is locally in γ(·; γ(H, X)). Let ψ : R_+ × Ω → X be strongly progressively measurable with paths in L^1_loc(R_+, A; X) a.s. Let ξ : Ω → X be strongly F_0-measurable. Define ζ : R_+ × Ω → X as
$$\zeta(t) = \xi + \int_0^t \psi(s)\, dA(s) + \int_0^t \Phi(s)\, dM(s).$$
Then s ↦ D_2 f(s, ζ(s))Φ(s) is locally stochastically integrable with respect to M, and almost surely for all t ≥ 0 the identity (4.14) holds. A typical application of this formula is the case where f : X → R is given by f(x) = ‖x‖^p, whenever this is twice Fréchet differentiable and satisfies appropriate estimates (e.g. X = L^p with p ≥ 2). Another application is f : X × X* → R given by f(x, x*) = ⟨x, x*⟩.
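A hedged sketch of the shape of (4.14), modeled on the Itô formula of [10]; the exact form of the trace term is our assumption and should be checked against the paper:

```latex
f(t,\zeta(t)) \;=\; f(0,\xi)
  + \int_0^t D_1 f(s,\zeta(s))\, ds
  + \int_0^t D_2 f(s,\zeta(s))\,\psi(s)\, dA(s)
  + \int_0^t D_2 f(s,\zeta(s))\,\Phi(s)\, dM(s)
  + \tfrac{1}{2}\int_0^t \operatorname{Tr}_{\Phi(s)Q^{1/2}(s)}
      \big(D_2^2 f(s,\zeta(s))\big)\, d[\![M]\!](s).
```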
To prove this result one can reduce to the case Y = R in a similar way as in [10, Theorem 2.4], Step 1. Indeed, if the formula holds true for Y = R, then we can apply the result to ⟨f, y*⟩ for each y* ∈ Y*. After that we can apply Theorem 4.1 (2) to derive the stochastic integrability of s ↦ D_2 f(s, ζ(s))Φ(s). The identity (4.14) then follows from the Hahn–Banach theorem. The next step is to reduce the proof to the case where ξ is simple and both ψ and Φ have finite-dimensional range (see [10, Theorem 2.4], Step 2). As soon as we have this reduction, there exists a fixed finite-dimensional subspace H_0 ⊂ H such that H = H_0 ⊕ ker Φ. Then one can restrict M to this subspace and apply the finite-dimensional case.

Proof. Let (Φ_n)_{n≥1} be constructed as in (4.11). Then (Φ_n Q^{1/2})_{n≥1} can be approximated using (h_1, . . . , h_k). So, choosing a subsequence Φ̃_n := Φ_n P_{k_n}, one derives the desired result.

Stochastic evolution equations and cylindrical noise
In this section we study existence and uniqueness of solutions to the stochastic evolution equation on a UMD space X:
$$du(t) = (Au(t) + F(t, u(t)))\, dt + G(t, u(t))\, dM(t),$$
where u(0) = u_0. Here A is the generator of an analytic semigroup on X, F and G are nonlinearities, and M is a cylindrical continuous local martingale on a Hilbert space H which admits a quadratic variation as introduced in Definition 3.4. We will treat the above problem by semigroup methods. The case M = W_H has been extensively studied in the literature (see [8, 13, 54]). Before we start we need some preliminaries from analysis.

Analytic preliminaries
Let X and Y be two Banach spaces and let (r_n)_{n≥1} be a Rademacher sequence, i.e. a sequence of independent random variables satisfying P(r_n = 1) = P(r_n = −1) = 1/2. A family T ⊆ L(X, Y) is called R-bounded if there exists a constant C ≥ 0 such that for all N ≥ 1, all T_1, . . . , T_N ∈ T and all x_1, . . . , x_N ∈ X,
$$\mathbb{E}\Big\|\sum_{n=1}^{N} r_n T_n x_n\Big\|^2 \le C^2\,\mathbb{E}\Big\|\sum_{n=1}^{N} r_n x_n\Big\|^2.$$
The least such C is called the R-bound of T, notation R(T).
If one replaces the Rademacher sequence by a sequence of independent Gaussian variables in the definition above, one obtains the notion of a γ-bounded family of operators, whose γ-bound is denoted by γ(T). A simple randomization argument shows that R-boundedness implies γ-boundedness, in which case γ(T) ≤ R(T); the converse fails in general (see [43]).
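In a Hilbert space, R-boundedness reduces to uniform boundedness; a short check (our own illustration, using orthogonality of the Rademacher variables):

```latex
% For X = Y a Hilbert space and T_1, ..., T_N in a uniformly bounded family:
\mathbb{E}\Big\|\sum_{n=1}^{N} r_n T_n x_n\Big\|^2
  = \sum_{n=1}^{N}\|T_n x_n\|^2
  \le \sup_n \|T_n\|^2 \sum_{n=1}^{N}\|x_n\|^2
  = \sup_n \|T_n\|^2 \;\mathbb{E}\Big\|\sum_{n=1}^{N} r_n x_n\Big\|^2 ,
% so in this case R(T) = \sup_{T \in T} \|T\|. In general Banach spaces
% uniform boundedness does not imply R-boundedness.
```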
A set Λ with an order ≤ is called totally ordered if for any x, y ∈ Λ either x ≤ y or y ≤ x. The next result is due to [6] (for a proof see [32]).

Lemma 5.1 (Vector-valued Stein inequality). Let (S, A, µ) be a probability space and let X be a UMD space. Let Λ be a totally ordered set. Then for all 1 < p < ∞ and every increasing family {A_α}_{α∈Λ} of sub-σ-algebras of A, the family of conditional expectations E_p = {E(·|A_α) : α ∈ Λ} ⊆ L(L^p(S; X)) is R-bounded, with an R-bound depending only on p and X.
Proof. Let Ψ(t) = ∫_0^t ψ(s) ds for t ∈ [0, T]. Let (γ_n)_{n≥1} be a sequence of independent standard Gaussian random variables on a probability space (Ω', F', P') and let (φ_n)_{n≥1} be an orthonormal basis for L^2(0, T; µ). Then for fixed φ ∈ L^2(0, T; µ) and n ≥ 1 we can consider the corresponding integral, where the latter is defined as a Pettis integral. By Parseval's identity we obtain the required bound, where C_ψ = sup_{‖x*‖≤1} ‖⟨ψ, x*⟩‖_{L^2(0,T)}. Taking L^2(Ω') norms, it follows from the definition of the γ-norm (note that a.s. convergence and convergence in L^p(Ω'; X) are equivalent in this setting) that ‖Ψ‖_{γ(0,T,µ;X)} ≤ C_ψ ‖ξ‖_{L^2(Ω';L^2(0,T))} ≤ C_ψ C_T.

Let X be a Banach space, let (S, A, µ) be a σ-finite measure space, let 1 ≤ p < ∞ and let H be a Hilbert space. Then one can prove that L^p(S; γ(H, X)) ≃ γ(H, L^p(S; X)). This relation is called the γ-Fubini isomorphism (for more information see [33]). A Banach space X is said to have property (α) if a randomized equivalence involving independent Rademacher sequences (r_{mn})_{m,n≥1}, (r_m)_{m≥1} and (r_n)_{n≥1} holds. This property was introduced in a slightly different manner in [65] (see [33] for the proof of the equivalence).
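The randomized equivalence defining property (α), in the standard form going back to [65] (stated as a sketch):

```latex
% X has property (α) if, with implicit constants independent of N and x_{mn}:
\mathbb{E}\Big\|\sum_{m,n=1}^{N} r_{mn}\,x_{mn}\Big\|^{2}
\;\eqsim\;
\mathbb{E}\,\mathbb{E}'\Big\|\sum_{m,n=1}^{N} r_m\,r'_n\,x_{mn}\Big\|^{2},
% where (r_{mn}), (r_m), (r'_n) are independent Rademacher sequences.
```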
For details on H ∞ -calculus for sectorial operators we refer the reader to [29,41].

Hypotheses and problem formulation
Consider the following hypothesis.
(A0) H is a separable Hilbert space; X is a separable Banach space which is a UMD space and satisfies property (α); M ∈ M loc var (H); the operator A has a bounded H^∞-calculus of angle < π/2.
Consider the following stochastic evolution equation:
$$du(t) = (Au(t) + F(t, u(t)))\, dt + G(t, u(t))\, dM(t),$$
where A is the generator of an analytic C_0-semigroup (S(t))_{t≥0} on X (see [19, 63] for details).
We make the following assumptions on F and G:
(A1) The function F : R_+ × Ω × X → X is Lipschitz and of linear growth uniformly in R_+ × Ω, i.e., there are constants L_F and C_F such that the corresponding estimates hold for all t ∈ R_+, ω ∈ Ω and x, y ∈ X. Moreover, for all x ∈ X, (t, ω) ↦ F(t, ω, x) is strongly measurable and adapted in X.
(A2) The function G : R_+ × Ω × X → L(H, X) is Lipschitz and of linear growth in a γ-sense uniformly in Ω and T, i.e., there are constants L^γ_G and C^γ_G such that the corresponding γ-norm estimates hold for all b ≥ a ≥ 0 and all φ_1, φ_2.
(A3) The initial value u_0 : Ω → X is strongly F_0-measurable.
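Written out, the estimates in (A1) take the following form (a hedged reconstruction; (A2) imposes the analogous Lipschitz and linear-growth estimates on G, measured in the γ-norm over the interval (a, b)):

```latex
\|F(t,\omega,x)-F(t,\omega,y)\| \;\le\; L_F\,\|x-y\|,
\qquad
\|F(t,\omega,x)\| \;\le\; C_F\,(1+\|x\|),
% for all t \in \mathbb{R}_+, \omega \in \Omega and x, y \in X.
```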
In the case M = W_H, the above Lipschitz assumptions reduce to those in [54]. A key difference with [54] is that there the nonlinearities can be defined on interpolation spaces between X and D(A); this cannot be done for general martingales except under additional assumptions on [[M]].

The fixed point argument
Consider the fixed point operator L_T; then L_T(φ) has a continuous version and satisfies the estimates below.

Proof. Actually the assumptions even yield that {S(t) : t ≥ 0} is R-bounded and hence γ-bounded by some constant N (see [41, Theorems 2.20 and 12.8]). In particular ‖S(t)‖ ≤ N for all t ≥ 0.
Let Y = γ(L^2(R); X). For the proof we use the following dilation result for the semigroup S from [23]. Since the H^∞-calculus of A is bounded of angle < π/2, there exist J ∈ L(X, Y), P ∈ L(Y) and a group (S̃(t))_{t∈R} ⊆ L(Y) such that:
(i) there are c_J, C_J > 0 such that c_J‖x‖ ≤ ‖Jx‖ ≤ C_J‖x‖ for all x ∈ X;
(ii) P is a projection onto ran J;
(iii) (S̃(t))_{t∈R} is a strongly continuous group on Y with ‖S̃(t)y‖ = ‖y‖ for all y ∈ Y;
(iv) for all t ≥ 0 it holds that JS(t) = PS̃(t)J.
This dilation will be used to derive continuity of the stochastic convolution in a similar way as in [30]. Moreover, we use it to obtain estimates in the γ-norm.
Notice that by [78, Lemma 2.3] Y is a UMD space. Also notice that since X has property (α), according to [28, Theorem 3.18] the family (S̃(t))_{t∈R} is γ-bounded by some constant α_X. We now proceed in four steps. Fix T ≥ 0 and let C_P = ‖P‖.
Step 1: Estimating the initial value part. By the strong continuity and uniform boundedness of S we derive the required bound.
Step 2: Estimating the deterministic part. We proceed in two substeps.
where in the last step we used Lemma 5.2. Then by applying the inequalities above to the paths Ψ(·, ω) one easily obtains that S * Ψ ∈ V^p(0, T, M; X).
(b): Let φ_1, φ_2 ∈ V^p(0, T, M; X). Since F is of linear growth, F(·, φ_1) and F(·, φ_2) have continuous versions and belong to V^p(0, T, M; X). Since F is Lipschitz in its X-variable, we deduce that S * F(·, φ_1) and S * F(·, φ_2) are in V^p(0, T, M; X).
Step 3: Estimating the stochastic part. The stochastic convolution ζ_M is well-defined. Now we estimate ζ_M pathwise in the space of continuous functions. As before one sees that ∫_0^· S̃(−s)JΨ(s) dM(s) is well-defined and a.s. continuous (here we use the fact that Y is a UMD space and S̃ is γ-bounded). Therefore, by the representation of S̃ as a group it follows that we can write

$$\int_0^{t} \tilde S(-s)\, J\Psi(s)\, dM(s). \qquad (5.11)$$
Since PS̃(t) is strongly continuous, the continuity follows since J is an isomorphism onto its range, where in the last step we argue as below (5.11).
Combining these estimates we conclude the bound (5.13), with C^2_T the constant arising there.
Step 4: Collecting the estimates. It follows from the previous steps that L_T is well-defined on V^p(0, T, M; X), and one can check that (5.6) holds.
To prove (5.7) one has to apply (5.13) and the fact that a corresponding elementary estimate holds with some positive constant C. The final continuity statement and (5.8) follow from the previous steps.

Let U_1 ∈ V^p(0, T, M; X) and U_2 ∈ V^p(0, T, N; X) be the solutions of (5.2) with initial values u_1, u_2 ∈ L^p(Ω, F_0; X) and cylindrical martingales M, N respectively. Finally, suppose that M ≡ N a.s. on the set {u_1 = u_2}. Then a.s. U_1 ≡ U_2 on {u_1 = u_2}.

Existence and uniqueness when the variation is small
Proof. Let Γ = {u_1 = u_2}. Since U_2 ∈ V^p(0, T, N; X), we have U_2 1_Γ ∈ V^p(0, T, M; X), because M and N coincide on Γ. Consider small t as in the beginning of the proof of Theorem 5.8. Since Γ is F_0-measurable, one obtains the identity on [0, t]. To extend this result to the whole interval [0, T] one has to apply the same induction argument as at the end of the proof of Theorem 5.8.
Let b ≥ a ≥ 0. We say that φ is locally in V^p(a, b, M; X) (or simply φ ∈ V^p_loc(a, b, M; X)) if there exists an increasing sequence of stopping times (τ_n)_{n≥1} such that τ_n ↑ ∞ a.s. and φ ∈ V^p(a, b, M^{τ_n}; X) for each n ≥ 1. By the remark above, letting n tend to infinity yields U_M ≡ U_{M^τ} on [0, t ∧ τ] for a.a. ω ∈ Γ. Now by induction and the same technique as in Lemma 5.9 one obtains the required result.

Proof of the main existence and uniqueness result
We first prove Theorem 5.5 under additional integrability assumptions on the initial value.
Proof. By Proposition 5.7 one can find n ∈ N large enough so that T/2^n ≤ 1/(4C^2) and T ≤ 2^n. Let ρ = τ_{1/(4C^2)}, where τ_s is the stopping time introduced in (4.9). Consider equation (5.2) with the cylindrical martingale M^ρ instead of M. It follows from (5.13) that C_{T/2^n, M^ρ} < 1/2.
Using the Banach fixed point argument one derives that L_{T/2^n} has a unique fixed point U_n ∈ V^p(0, T/2^n, M^ρ; X). This gives a continuous progressively measurable process U_n : [0, T/2^n] × Ω → X satisfying the fixed point identity for almost all ω ∈ Ω and all s ∈ [0, T/2^n]. Define ρ_n := ρ_{n1} ∧ . . . ∧ ρ_{n2^n} for each n ∈ N. Then by the fixed point argument, the induction argument and Lemma 5.10, U_n = U_m on [0, ρ_n ∧ ρ_m ∧ T] for all m, n ∈ N. Consequently, since ρ_n ↑ ∞ a.s., there exists U : [0, T] × Ω → X such that U = U_n on [0, ρ_n ∧ T] for each n ≥ 1. Now one has to show that U is a solution of (5.2). First notice that for each fixed t ≥ 0 we know that (U − U_n)1_{t≤ρ_n} = 0. Consequently (S(t − s)G(s, U_n) − S(t − s)G(s, U))1_{t≤ρ_n} = 0. Then for each fixed t ≥ 0, according to Corollary 4.12, one has a.s. on {t ≤ ρ_n} that U(t) = U_n(t) satisfies the variation-of-constants formula. Now assume that V ∈ V^p_loc(0, T, M; X) is another solution of (5.2). Then by Lemma 5.10, V = U_n on [0, ρ_{n1} ∧ T/2^n] for all n ≥ 1. According to (5.15), U_n(T/2^n) ∈ L^p(Ω, F_{T/2^n}; X), so again by Lemma 5.10, on the set {ρ_{n1} ≥ T/2^n} we have V = U_n on [T/2^n, ρ_{n2} ∧ 2T/2^n] for all n ≥ 1 (here we start our solutions from the time T/2^n). Continuing this procedure for k = 3, . . . , 2^n, we conclude that V = U_n on [0, ρ_n ∧ T] for all positive n. But since U = U_n on [0, ρ_n ∧ T] for all n ≥ 1, V = U on [0, ρ_n ∧ T], and therefore on the whole of [0, T].
Finally we can prove Theorem 5.5 for general initial values.
Proof of Theorem 5.5. The structure of the proof is the same as in [54, Theorem 7.1]. To prove existence, define a sequence (u_n)_{n≥1} in L^p(Ω, F_0; X) by u_n = 1_{‖u_0‖≤n} u_0. Then by Theorem 5.11, for each n ≥ 1 there exists a unique solution U_n ∈ V^p_loc(0, T, M; X) of (5.2) with initial value u_n. By Lemma 5.10 one can define U : [0, T] × Ω → X as U(t) = lim_{n→∞} U_n(t) if this limit exists and 0 otherwise. Then U is strongly progressively measurable, and almost surely on {‖u_0‖ ≤ n} we have U(t) = U_n(t) for all t ∈ [0, T]. Consequently, U ∈ V(0, T, M; X) and one can check that it is a solution of (5.2).
For uniqueness of the solution we will need the stopping times constructed in the proof of Theorem 5.11. Let U, V ∈ V(0, T, M; X) be two solutions of (5.2). First fix n ≥ 1; we prove that U1_{‖u_0‖≤n} = V1_{‖u_0‖≤n}. Let U_n = U1_{‖u_0‖≤n} and V_n = V1_{‖u_0‖≤n}. Obviously U_n and V_n are solutions of (5.2) with initial value u_0 1_{‖u_0‖≤n}.
Let k be large enough such that T/2^k < 1/(2C). For each l ∈ N define a stopping time σ_nl as follows: Then U_n 1_{[0,σ_nl]}, V_n 1_{[0,σ_nl]} ∈ V^p(0, T/2^k, M; X). Define (ρ_km)_{1≤m≤2^k} in the same way as in (5.19). For fixed k, by the standard induction argument one derives that a.s. U_n 1_{[0,σ_nl ∧ ρ_k]} ≡ V_n 1_{[0,σ_nl ∧ ρ_k]} on [0, T]. Now letting k and l tend to infinity gives the desired identity. Since U = lim_{n→∞} U_n and V = lim_{n→∞} V_n, we conclude that U = V a.s., and uniqueness is proved.
Proof of Proposition 5.6. This result follows with the same method as for Theorem 5.5.
Note that property (α) can be avoided since A = 0, and hence we can take S(t) = S̃(t) = I; the γ-boundedness is clear in this case.
Remark 5.12. Using the time change result of Theorem 4.9 one can turn the noise part of the problem (5.2) into a cylindrical Brownian motion. Unfortunately, by using this technique the term Au(t) dt becomes more involved. In particular, one has to use evolution families instead of semigroups, which complicates matters.

A A technical lemma on measurable selections
In the next lemma we show that a certain projection valued function can be chosen in a measurable way. Moreover, we give a representation formula for its inverse which is used in the proof of Theorem 4.1. In [64,Lemma 8.9] a similar measurability result was proved by applying a selector theorem by Kuratowski and Ryll-Nardzewski.
Recall from before that a function F : S → L(H) is called H-strongly measurable if for all h ∈ H, s ↦ F(s)h is strongly measurable. (EJP 21 (2016), paper 59.)

Lemma A.1. Let (S, Σ) be a measurable space and let H be a separable Hilbert space. Let H_0 ⊆ H be a finite-dimensional subspace. Let F : S → L(H) be a function such that:
1. F is H-strongly measurable;
2. for all s ∈ S and h ∈ H, F(s)* = F(s) and ⟨F(s)h, h⟩ ≥ 0.
For each s ∈ S, let P(s) ∈ L(H) be the orthogonal projection onto F(s)H_0. Then there exist H-strongly measurable functions P̃, L : S → L(H) such that
P̃F = FP and LF = P, (A.1)
pointwise in S. Moreover, P̃ is a projection.
The operator P̃ will not be an orthogonal projection in general.
Proof. Let P_0 be the orthogonal projection onto H_0. For each s ∈ S define P̃(s) ∈ L(H) as follows: P̃(s)P_0 F(s)^2 P_0 h = F(s)^2 P_0 h for h ∈ H, and set P̃(s) = 0 on ker P_0 F(s)^2 P_0. Notice that there is no contradiction: if P_0 F(s)^2 P_0 h = 0 for some h ∈ H and s ∈ S, then 0 = ⟨P_0 F(s)^2 P_0 h, h⟩ = ‖F(s)P_0 h‖^2 and hence h ∈ ker F(s)P_0 ⊆ ker F(s)^2 P_0. Since P_0 F(s)^2 P_0 is a finite-rank self-adjoint operator for each s ∈ S, we have H = ker P_0 F(s)^2 P_0 ⊕ ran P_0 F(s)^2 P_0, and thus P̃(s) is a bounded linear operator (see [75]).
In the sequel we suppress the s ∈ S from the formulas. We claim that (i) P̃h = 0 for each h ∈ H_0^⊥; (ii) P̃F^2 h = F^2 h for h ∈ H_0; (iii) P̃F = FP. Property (i) is clear from H_0^⊥ ⊆ ker P_0 F^2 P_0. For (ii) note that for every h ∈ H_0 we can write F^2 h = P_0 F^2 h + (1 − P_0)F^2 h. Since for all g ∈ H_0, ⟨(1 − P_0)F^2 h, g⟩ = ⟨(1 − P_0)F^2 h, P_0 g⟩ = ⟨P_0(1 − P_0)F^2 h, g⟩ = 0, we find that (1 − P_0)F^2 h ∈ H_0^⊥. Thus by (i) and the definition of P̃, P̃F^2 h = P̃P_0 F^2 h + P̃(1 − P_0)F^2 h = F^2 h, and (ii) follows. To prove (iii), let g ∈ ran P. Choosing h ∈ H_0 such that g = Fh, we find P̃Fg = P̃F^2 h = F^2 h = Fg = FPg. On the other hand, FP vanishes on the space ker P_0 F = (ran FP_0)^⊥ = (ran P)^⊥ = ker P.
The same holds true for P̃F. Indeed, since (1 − P_0)Fh ∈ ker P_0 F^2 P_0, it follows that for h ∈ ker P_0 F, P̃Fh = P̃(1 − P_0)Fh = 0, and this gives (iii).
Next we claim that P̃^2 = P̃. Indeed, for each h ∈ H, P̃h ∈ ran F^2 P_0 by the definition of P̃. Thus by (ii), P̃F^2 P_0 = F^2 P_0 and therefore P̃^2 h = P̃h.
To prove H-strong measurability fix an orthonormal basis (h_i)_{i=1}^k of H_0. For each subset α ⊆ {1, . . . , k} there exists a measurable S_α ⊆ S such that (Fh_i)_{i∈α} is a basis of span(Fh_i)_{1≤i≤k} (because of the strong measurability of Fh for each h ∈ H, using the Gram matrix technique). Notice that if (Fh_i)_{i∈α} is a basis of span(Fh_i)_{1≤i≤k}, then (F^2 h_i)_{i∈α} is a basis of span(F^2 h_i)_{1≤i≤k}. Indeed, let g = Σ_{i∈α} c_i Fh_i be a combination of (Fh_i)_{i∈α} with scalars (c_i)_{i∈α}. If Fg = 0, then g ∈ ker F = (ran F)^⊥, so g = 0. Let α, β ⊆ {1, . . . , k}. We say that α < β if Σ_{i∈α} 2^i < Σ_{i∈β} 2^i. If α < β, one redefines S_α := S_α \ S_β. After iterating this procedure over all pairs α, β ⊆ {1, . . . , k}, the sets (S_α)_{α⊆{1,...,k}} are pairwise disjoint. Now fix α ⊆ {1, . . . , k}. Let (g_i)_{i∈α} be obtained from (P_0 F^2 h_i)_{i∈α} by the Gram–Schmidt process. These vectors are orthonormal and measurable because the inner products (⟨P_0 F^2 h_i, P_0 F^2 h_j⟩)_{i,j∈α} are measurable. Moreover, the transformation matrix C = (c_{ij})_{i,j∈α} has measurable entries. So P̃g_i = P̃ Σ_{j∈α} c_{ij} P_0 F^2 h_j = Σ_{j∈α} c_{ij} F^2 h_j. This means that for each h ∈ H the resulting expression for P̃h is measurable. Now define L as an operator with values in F(H_0) = P(H_0) as follows: Then L is well-defined since ker F = ker F^2. Also for each 1 ≤ i ≤ k and h ∈ H, |⟨L(F^2 h), Fh_i⟩| = |⟨PFh, Fh_i⟩| = |⟨Fh, PFh_i⟩| = |⟨F^2 h, h_i⟩| ≤ ‖F^2 h‖.
Since the range of L is finite-dimensional and equal to FH_0, the operator L is bounded. Since H = ran F ⊕ ker F and ker F = ker P, we find LF = P.
As before one can show that L is H-strongly measurable; this time one fixes α ⊆ {1, . . . , k} and considers the orthonormal basis (g_i)_{i∈α} for span(Fh_i)_{i∈α}.