On the spatial dynamics of the solution to the stochastic heat equation

We consider the solution of $\partial_t u=\partial_x^2 u+\partial_x\partial_t B,\,(x,t)\in R\times(0,\infty)$, subject to $u(x,0)=0,\,x\in R$, where $B$ is a Brownian sheet. We show that $u$ also satisfies $\partial_x^2 u +[\,(-\partial_t^2)^{1/2}+\sqrt{2}\partial_x(-\partial_t^2)^{1/4}\,]\,u^a= \partial_x\partial_t{\tilde B}$ in $R\times(0,\infty)$ where $u^a$ stands for the extension of $u(x,t)$ to $(x,t)\in R^2$ which is antisymmetric in $t$ and $\tilde{B}$ is another Brownian sheet. The new SPDE allows us to prove the strong Markov property of the pair $(u,\partial_x u)$ when seen as a process indexed by $x\ge x_0$, $x_0$ fixed, taking values in a state space of functions in $t$. The method of proof is based on enlargement of filtration and we discuss how our method could be applied to other quasi-linear SPDEs.


Introduction
When studying stochastic partial differential equations (SPDEs) one has to understand the behaviour of multi-parameter random fields u(x), x ∈ Q, where Q ⊆ R d is a given domain. A first and important question concerns the possibility of a Markovian 'behaviour' of such a field. Since Lévy's sharp Markov property (see [L1945]) already fails for multi-parameter Brownian sheets, the only comprehensive Markovian 'behaviour' one can hope for is the so-called germ Markov property-the reader is referred to the early papers [K1961, McK1963] and in particular to [K1970] for a good introduction to this concept.
There is an early paper by Y.A. Rozanov [R1977] on the Markovian 'behaviour' of SPDEs and then there are three influential papers [D-M1992, D-MN1994, NP1994] on the germ Markov property of solutions to SPDEs of the type Lu = η in Q, where L is a linear partial differential operator of elliptic or parabolic type and η stands for a multi-parameter noise. The method applied in all three papers consisted in, first, establishing the germ Markov property for the solution of the linear equation Lu = η in Q and, second, obtaining it for the drift-perturbed equation by a Kusuoka or Girsanov transform. It should be mentioned that [AFN1995] provides another useful method for the second step.
The main method for the first step is usually based on the paper [P1971] which was later improved by H. Künsch [K1979]. For example, the more recent paper [BK2008] on the germ Markov property of the solution of a linear stochastic heat equation is still about checking the conditions stated in [P1971, K1979] which can be demanding.
However, the purpose of our paper is to refine this first step in the following sense: study a more specific Markovian 'behaviour' of solutions of linear SPDEs. Our working example is the stochastic heat equation with additive space-time white noise. But all calculations are based on only two ingredients:
• a Green's function for Lu = η in Q;
• the covariance of a Gaussian noise η.
So, following our scheme of calculations but using a different Green's function or covariance would produce similar results for other linear SPDEs. The explicit calculations are involved and will differ from case to case. This is why we restrict ourselves to one important example in order to show in full detail how the method works. However, at the end of this introduction, we list the tasks to be dealt with when treating another SPDE.
We now explain what we mean by a Markovian 'behaviour' more specific than the germ Markov property. A random field u(x), x ∈ Q, satisfies the germ Markov property if σ{u(y) : y ∈ A} and σ{u(y) : y ∈ A c } are conditionally independent given the germ-σ-algebra for any Borel set A ⊆ Q. This type of Markov property is 'directionless' with respect to the d-dimensional domain Q. But often it is desirable to emphasise a direction in R d and to study the behaviour of an SPDE along this direction. In the case of parabolic SPDEs, a natural direction to emphasise is the direction of 'time', which we denote by t in what follows. Many solutions u(x, t) of parabolic SPDEs are even constructed as Markov processes t → u(·, t) taking values in a function space. Hence, along the direction of time, these solutions satisfy a sharp Markov property with an associated martingale problem.
But we want to be able to pick other directions with respect to the space-variable x = (x 1 , . . . , x d−1 ) of u(x, t), for example, the direction of x 1 . Assume we already knew that u(x, t), (x, t) ∈ Q, satisfies the germ Markov property. Then the process x 1 → u(x 1 , ·) is at least Markovian relative to the germ-σ-algebras germ ({x 1 } × R d−1 ) ∩ Q. But this does not give an associated martingale problem with martingales indexed by x 1 ; hence many useful probabilistic methods cannot be applied. So one wants to know if there is a σ-algebra included in germ ({x 1 } × R d−1 ) ∩ Q so that x 1 → u(x 1 , ·) is still Markovian relative to this smaller σ-algebra but also satisfies an associated martingale problem. And, because we are dealing with SPDEs, it is very likely that a σ-algebra generated by certain partial derivatives of u(x, t) with respect to x 1 would serve the purpose.
Our main result, Theorem 3.17, states that the above can be achieved in the case of the stochastic heat equation (∂ 2 x − ∂ t )u = −∂ x ∂ t B, (x, t) ∈ R × (0, ∞), driven by a Brownian sheet B. We find a martingale problem for the pair of processes (u(x, ·), ∂ x u(x, ·)), x ≥ x 0 , which is given by an unbounded operator we explicitly calculate using the technique of enlargement of filtration.
The need for an enlargement of filtration in this context can be considered the key idea of this paper. The problem is explained in Section 3-the reader is referred to the second equation of (3.9). The observation is that, for a given test function h, there is no filtration such that x → U(x, h ′ ) is adapted with respect to this filtration and x → W x−x 0 (h) is a Wiener process with respect to this filtration. Enlargement of filtration solves this problem subject to a drift correction. But the new drift requires test functions which are less regular than the original test functions h. As a consequence one has to discuss the regularity of all involved processes very carefully.
We are then able to derive, by showing the uniqueness of the martingale problem, the strong Markov property of u(x, ·), x ≥ x 0 , with respect to the natural filtration generated by (u(x, ·), ∂ x u(x, ·)), x ≥ x 0 .
As explained earlier, the same method could be used to find interesting martingale problems associated with other linear SPDEs or even drift-perturbed linear SPDEs, by applying Girsanov's transform for example, the latter being beyond the scope of this paper.
The organisation of the paper is as follows. In Section 2 we list notation which is crucial for the understanding of the paper. Section 3 is a combination of results and further explanations which fully describes our method and can be summarised by:
• choose a direction along which one wishes to study the solution u of a stochastic partial differential equation Lu = η in Q subject to given boundary data;
• describe the dynamics of Lu = η in Q along the chosen direction-see (3.3);
• find the regularity of all involved partial derivatives-see Proposition 3.3;
• find a correction ̺ of η such that Lu and η̃ = η − ̺ are both adapted with respect to a filtration along the chosen direction and that the probability law of η̃ is the same as the law of η-see Proposition 3.6, Lemma 3.8, Remark 4.1(ii);
• describe ̺ as a functional of the solution u-see Proposition 3.9, Theorem 3.11;
• show uniqueness of the martingale problem along the chosen direction which is associated with the new equation Lu − ̺(u) = η̃-see Theorem 3.17.
The results are finally proven in Section 4.
Acknowledgement. The authors would like to thank Roger Tribe for many fruitful discussions.

Notation
We use the notation ∂ i for the operation of taking the ith partial derivative, i = 1, . . . , d, of a function f (x 1 , . . . , x d ) and we write ∂ m i for iterating this operation m times, that is, ∂ m i f = ∂ i · · · ∂ i f (m factors). But if the function f only depends on a space variable x ∈ R and a time variable t ≥ 0 and there is no ambiguity about the nature of the involved variables then we will also write ∂ x and ∂ t for the corresponding partial derivatives.
The heat kernel g(y, s ; x, t), 0 ≤ s < t, x, y ∈ R, is the Green's function of the heat equation. We write g as an inhomogeneous transition kernel in order to emphasise that the method works for more general PDE problems than the heat equation. However, for some explicit calculations we are going to apply the time-homogeneous structure of g, and then we also use the notation g x 0 y for given x 0 , y ∈ R. We use f 1 * f 2 to denote the convolution of functions f 1 , f 2 : R → R and write f̂ i for their Laplace transforms. Furthermore, if l is a function on (0, ∞) or [0, ∞) then we denote by l a its antisymmetric extension; an associated convolution estimate holds for each p > 2 by Young's inequality.
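For the reader's orientation, the standard objects named in this section can be written out explicitly; the normalisations below are the usual textbook ones and are an assumption on our part, since the paper's own displays are not reproduced in this extract:

```latex
g(y,s;x,t) \;=\; \frac{1}{\sqrt{4\pi(t-s)}}\,
  \exp\!\Big(-\frac{(x-y)^2}{4(t-s)}\Big),
\qquad 0 \le s < t,\; x,y \in \mathbb{R},
\qquad
\hat{f}(\nu) \;=\; \int_0^\infty e^{-\nu t} f(t)\,dt,
\qquad
l^a(t) \;=\; \begin{cases} l(t), & t \ge 0,\\[2pt] -\,l(-t), & t < 0. \end{cases}
```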
The following domains will be frequently used. The symbol D is reserved for C ∞ c (0, ∞), the space of smooth functions on (0, ∞) with compact support, and D a := {h a : h ∈ D}. ⟨· ; ·⟩ denotes the scalar product in L 2 ([0, ∞)) and, whenever the dual pairing between a topological vector space and its dual is an extension of the scalar product in L 2 ([0, ∞)), this dual pairing is also denoted by ⟨· ; ·⟩.

Results
The stochastic partial differential equation of our interest formally reads (∂ 2 x − ∂ t )u = −∂ x ∂ t B in Q + = R × (0, ∞), subject to u(·, 0) = 0, where B = {B ys ; y ∈ R, s ≥ 0} is a Brownian sheet given on a complete probability space (Ω, F , IP). Note that the transition kernel g introduced in Section 2 gives the Green's function associated with this Cauchy problem. It is well-known that the random field U(x, t) = ∫_0^t ∫_R g(y, s ; x, t) B(dy, ds) is the unique (weak) solution to (3.1), where the integral against B(dy, ds) is understood as an Itô-type integral against a process indexed by two parameters. We always mean by U the version which can be continuously extended to the closure of Q + -see [W1986] for a good account on the underlying theory of SPDEs. Due to the parabolic nature of (3.1), the process {U(·, t); t ≥ 0} taking values in the space of continuous functions is a strong Markov process with zero initial condition in the usual sense. But we are after the Markovian behaviour of U(x, ·) as a process indexed by x ≥ x 0 . The method is to construct an (infinite-dimensional) stochastic differential equation which is solved by the pair (U(x, ·), ∂ 1 U(x, ·)) and then to prove that the solution of this stochastic differential equation is Markovian in the usual sense.
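The equation can be illustrated numerically. The following sketch (ours, purely illustrative and not part of the paper) simulates the formal equation ∂ t u = ∂ 2 x u + space-time white noise by an explicit finite-difference Euler scheme on a periodic grid; all grid parameters are our own assumptions. For the continuum equation the pointwise variance is Var u(x, t) = √(t/(2π)), so the spatial average of u² should be of that order.

```python
import numpy as np

# Purely illustrative sketch (not the paper's method): explicit finite-difference
# Euler scheme for du = u_xx dt + dB with space-time white noise B,
# zero initial condition and periodic boundary on [0, L).
# All grid parameters (L_dom, dx, dt) are our own assumptions.
rng = np.random.default_rng(0)

L_dom, dx = 40.0, 0.1
dt = 0.0025                       # explicit scheme needs dt <= dx**2 / 2 = 0.005
n = int(L_dom / dx)               # 400 spatial grid points
u = np.zeros(n)                   # zero initial condition u(x, 0) = 0

var = {}
for k in range(1, 401):           # evolve up to t = 1
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    # a space-time white-noise increment has std sqrt(dt/dx) per grid cell
    u = u + dt * lap + rng.standard_normal(n) * np.sqrt(dt / dx)
    if k in (100, 400):           # t = 0.25 and t = 1.0
        var[k] = float(np.mean(u ** 2))

# continuum prediction: Var u(x,t) = sqrt(t/(2*pi)), roughly 0.20 and 0.40
print(var)
```

The spatial average over the 400 grid points serves as a Monte Carlo estimate of the stationary-in-x variance; it grows like √t, in line with the continuum formula.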
Remark 3.1 It turns out that this Markov process is homogeneous and stationary in the case of (3.1), that is, in the case of zero initial condition. If the initial condition is not zero but a function b 0 then the underlying solution can be written as U(x, t) + ∫_R b 0 (y) g(y, 0 ; x, t) dy if b 0 has enough regularity. Adding this deterministic integral gives an inhomogeneous Markov process instead, without any extra proof.
The initial idea is to rewrite (3.1) as the first-order system (3.3) and to understand this system as an equivalent formulation of the Dirichlet problem subject to a given continuous function on the boundary ∂Q x 0 + . So we are only interested in solutions (u, v) of (3.3) with respect to the domain Q x 0 + where u(x, t) can be extended to a continuous function on the closure of Q x 0 + satisfying the boundary conditions for given (maybe random) continuous functions b x 0 and b 0 ; such a pair solves (3.3) in the corresponding weak sense and vice versa.
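The display of system (3.3) is not reproduced in this extract; a plausible reconstruction (ours, not the paper's) of the idea, writing the heat operator, which is second order in x, as a first-order system in the x-direction with v = ∂ x u, is:

```latex
\partial_x u \;=\; v,
\qquad
\partial_x v \;=\; \partial_t u \;-\; \partial_x \partial_t B
\qquad \text{in } Q^{x_0}_+ ,
```

so that eliminating v formally returns $(\partial_x^2 - \partial_t)u = -\partial_x\partial_t B$.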
This pair is the unique (weak) solution of (3.4), (3.5). Using the Green's function associated with this Dirichlet problem, the above existence/uniqueness result is standard; see [W1986] for example. The polynomial growth condition on b 0 is not optimal but sufficient for our purpose. Both b x 0 and b 0 can of course be taken to be F -measurable.
Let us return to the solution U of (3.1) given by (3.2). Of course, U is the unique weak solution of (3.4),(3.5) subject to b x 0 = U(x 0 , ·) and b 0 = 0 hence, by Remark 3.2, the pair (U, ∂ 1 U) solves (3.3) at least in the corresponding weak sense.
But this is not enough to justify why (U(x, ·), ∂ 1 U(x, ·)) should be a Markov process indexed by x ≥ x 0 . First one needs a meaning of (U(x, ·), ∂ 1 U(x, ·)) as a process indexed by x ≥ x 0 which boils down to finding an appropriate state space, E, for the random variables (U(x, ·), ∂ 1 U(x, ·)), x ≥ x 0 . Second, a well-posed martingale problem needs to be associated with the system (3.3).
To start with finding the right state space, observe that U(x, h) and ∂ 1 U(x, h) are well-defined for all x ≥ x 0 and h ∈ D = C ∞ c (0, ∞). Since the notation ∂ 0 1 U(x, h) and ∂ 1 1 U(x, h) can be used for U(x, h) and ∂ 1 U(x, h), respectively, we have defined a centred Gaussian process. (ii) For fixed x ≥ x 0 , the processes {U(x, h); h ∈ D} and {∂ 1 U(x, h); h ∈ D} are independent centred Gaussian processes whose covariances do not depend on x.
(v) The family of random variables {(U(x, ·), ∂ 1 U(x, ·)); x ≥ x 0 } is a stationary process taking values in E 1 × E 2 . Furthermore, the process {U(x, ·); x ≥ x 0 } taking values in E 1 has a version such that (x, t) → U(x, t) is continuous on the closure of Q x 0 + .
Remark 3.4 (i) The bound 3/2 for the parameter α defining the state space E 1 is sharp in the following sense: for α ≤ 3/2 one cannot apply Lemma 3.5 below in the proof.
(ii) Denote by C 2 the covariance operator of {∂ 1 U(x, h); h ∈ D}; it satisfies C 2 h = const (−∂ 2 t ) −1/4 h a (see [S1970] for example), and the parameter β defining the space E 2 was chosen just big enough to ensure that C 1/2 2 : L 2 ([0, ∞)) → E 2 is a Hilbert-Schmidt operator which, by Sazonov's theorem, is needed for a meaningful state space of a Gaussian measure. The choice of a weighted (Sobolev) space is due to the 'infinite volume' in the t-direction. Finding the right space E 2 in the case of other SPDEs might be more complicated.
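As a reminder (standard functional analysis, not the paper's display): for an orthonormal basis $(e_i)_{i\ge 1}$ of $L^2([0,\infty))$, the Hilbert-Schmidt property required of $C_2^{1/2}$ reads

```latex
\big\| C_2^{1/2} \big\|_{HS}^2
\;=\;
\sum_{i=1}^{\infty} \big\| C_2^{1/2} e_i \big\|_{E_2}^2
\;<\; \infty ,
```

and Sazonov's theorem then guarantees that the centred Gaussian measure with covariance $C_2$ is a Radon measure on $E_2$.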
The proof of the above proposition uses the following technical lemma. Recall that B = {B ys ; y ∈ R, s ≥ 0} is a Brownian sheet on a given complete probability space (Ω, F , IP). We assign to B the family of σ-algebras F (−∞,x] = σ{B ys ; y ≤ x, s ≥ 0} ∨ N IP , x ∈ R, where N IP is the collection of all null sets in F . Note that this makes F (−∞,x] , x ∈ R, a right-continuous filtration.
Lemma 3.5 (special case of [W1986, Th. 2.6]). Let φ ∈ L 1 (I) where I ⊆ R is a measurable index set and let f : Ω × Q + × I → R be an F ⊗ B(Q + ) ⊗ B(I)-measurable function such that, for each (y, ζ) ∈ R × I, the mapping (ω, s) → f (ω, y, s, ζ) is F (−∞,y] ⊗ B((0, ∞))-measurable.

Now we introduce the random variables W z (l), z ≥ 0, l ∈ L 2 ([0, ∞)), such that the equations of Proposition 3.3(i) can be rewritten as the system (3.9). If x → (U(x, ·), ∂ 1 U(x, ·)) were F (−∞,x]-adapted then one could try to establish the Markov property of this process via the martingale problem corresponding to the stochastic differential equation (3.9). But it follows from (3.6) that this adaptedness fails. The intuition behind this is of course that a unique solution to (3.9) should be a functional of the initial data U(x 0 , ·), ∂ 1 U(x 0 , ·) and the driving Wiener process. In our case this can easily be made precise by approximating the derivative h ′ in (3.9) by a bounded operator and showing that the F̃ x -adapted solutions of the approximating systems converge to the unique solution of (3.9). To do so, we would use the connection between (3.9) and (3.4) as explained in Remark 3.2. The wanted convergence can then be verified in a straightforward way using the Green's function given by Remark 3.2(ii).
As a consequence, we enlarge the filtration and work with F̃ x = F (−∞,x] ∨ σ(U(x 0 , ·)), x ≥ x 0 . Note that, in the case of other SPDEs, it can easily happen that one has to enlarge F (−∞,x] by initial conditions with respect to several partial derivatives. However, since {W z (l); z ≥ 0} is not a martingale with respect to the bigger filtration F̃ x 0 +z , z ≥ 0, the equation (3.9) cannot be associated with a martingale problem in a straightforward way, yet. One has to find a semimartingale decomposition of the process {W z (l); z ≥ 0} with respect to F̃ x 0 +z , z ≥ 0, and this problem is dealt with in the next proposition.
First we state the well-known martingale representation theorem for the Brownian sheet: if L is an F R -measurable random variable in L 2 (Ω) then there exists an F (−∞,y]-adapted measurable process (λ y · ) y∈R in L 2 (Ω × R ; L 2 ([0, ∞))) such that L = E L + ∫_R ∫_0^∞ λ ys B(dy, ds) a.s.
A good reference for this result in an even more general setting is [N2006, Th.1.4].
As a consequence, any F R -measurable random variable L taking values in a measurable space E is associated with an additive stochastic kernel λ̇ ys (F ) indexed by bounded measurable functions F : E → R such that the analogous representation holds for F (L). In what follows let E be a Souslin locally convex topological vector space and denote by E ′ its topological dual. Assume that there exists a measurable function ̺ l satisfying (3.10); then, for l ≠ 0, the process {W z (l)/‖l‖ L 2 ([0,∞)) ; z ≥ 0} is a standard Wiener process with respect to the filtration F (−∞,x 0 +z] ∨ σ(L), z ≥ 0. Moreover, if ̺ l with the above properties exists for l = l 1 , l 2 then ̺ a 1 l 1 +a 2 l 2 exists for each a 1 , a 2 ∈ R and the linearity (3.11) holds for W z (a 1 l 1 + a 2 l 2 ) a.s.
Remark 3.7 (i) This proposition is a generalisation of Theorem 12.1 in [Y1997] which deals with the semimartingale decomposition of a Wiener process {W t ; t ≥ 0} when its natural filtration F W t , t ≥ 0, is enlarged by the information given by an F W ∞ -measurable random variable. In our case, for fixed l ∈ L 2 ([0, ∞)) \ {0}, the process {W z (l)/‖l‖ L 2 ([0,∞)) ; z ≥ 0} is already a Wiener process with respect to a filtration larger than its natural filtration, namely F (−∞,x 0 +z] , z ≥ 0, and this larger filtration is enlarged further. But we have both: there is a martingale representation theorem with respect to F (−∞,x 0 +z] , z ≥ 0, and {W z (l); z ≥ 0} can be represented as a stochastic integral against the F (−∞,x 0 +z]-integrator which is the Brownian sheet. The idea of proof is therefore the same as in the proof of [Y1997, Th.12.1], so that, in Section 4, we will only deal with the following two elements of the proof: the part where the different type of martingale representation is used and the linearity (3.11).
(ii) The proposition immediately implies that if ̺ l and ̺ ′ l are two functions satisfying all properties stated in the above proposition then the difference of the corresponding drift integrals is a continuous martingale of finite variation and must therefore vanish.
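A finite-dimensional analogue may help fix ideas. In the setting of [Y1997, Th.12.1], enlarging the natural filtration of a scalar Brownian motion W by L = W_1 yields the classical decomposition W_t = W̃_t + ∫_0^t (W_1 − W_s)/(1 − s) ds, where W̃ is a Brownian motion for the enlarged filtration and independent of W_1. The following Monte Carlo sketch (ours, purely illustrative) checks this numerically on [0, 1/2]:

```python
import numpy as np

# Classical example of enlargement of filtration: enlarge by L = W_1, then
#   W_t = W~_t + \int_0^t (W_1 - W_s)/(1 - s) ds,
# with W~ a Brownian motion for the enlarged filtration, independent of W_1.
rng = np.random.default_rng(1)

n_paths, n_steps = 20000, 500
dt = 1.0 / n_steps
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
W = np.cumsum(dW, axis=1)            # W[:, j] approximates W at time (j+1)*dt
W1 = W[:, -1]                        # the enlarging variable L = W_1

# left-point Euler approximation of the drift integral on [0, 1/2]
m = n_steps // 2
drift = np.zeros(n_paths)
for k in range(m):
    s = k * dt
    Ws = W[:, k - 1] if k > 0 else 0.0
    drift += (W1 - Ws) / (1.0 - s) * dt
W_tilde_half = W[:, m - 1] - drift   # candidate for W~ at time 1/2

corr = float(np.corrcoef(W_tilde_half, W1)[0, 1])
var = float(W_tilde_half.var())
print(corr, var)                     # corr near 0, variance near 1/2
```

The vanishing correlation reflects the independence of W̃ from the enlarging variable, and Var W̃_{1/2} = 1/2 confirms that the drift-corrected process is again a Brownian motion.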
In our case, the role of L in the above proposition is played by U(x 0 , ·) hence, by Proposition 3.3(iii), the corresponding Souslin locally convex space is E 1 . We choose D to be the subset of E ′ 1 separating the points of E 1 . The next lemma identifies a class of l ∈ L 2 ([0, ∞)) such that ̺ l with the properties stated in Proposition 3.6 exists for L = U(x 0 , ·).

Lemma 3.8 exhibits a family of test functions l ν , ν > 0, for which ̺ lν exists and is given by the explicit formula (3.12), which does not depend on x 0 anymore.
The above lemma suggests that ̺ lν (U(x 0 , ·), y) can be written as a sum of operators acting on U(y, ·) and ∂ 1 U(y, ·) respectively. In what follows, we will reveal the explicit nature of such operators.
For an absolutely continuous function h we introduce auxiliary operators A 1 and A 2 ; they enter the statement of Proposition 3.9.
Remark 3.10 The operators A 1 and A 2 were introduced to simplify the proof of item (ii) of the above proposition.
So, in what follows, we will always assume that the parameter ε > 0 used to determine the weight function w in Proposition 3.3(iv) is less than 1/2. Then, recalling (3.9), Proposition 3.9 implies that {(U(x, ·), ∂ 1 U(x, ·)); x ≥ x 0 } satisfies the equation (3.14). Since the corrected noise is a martingale with respect to F̃ x 0 +z , this stochastic differential equation can eventually be associated with a martingale problem. But before we do so let us point out that (3.14), when seen as a family of stochastic differential equations indexed by x 0 , gives rise to a new SPDE in Q + .
Theorem 3.11 The unique weak solution U to (3.1) given by the continuous version of the right-hand side of (3.2) on page 5 satisfies ∂ 2 x U + [ (−∂ 2 t ) 1/2 + √2 ∂ x (−∂ 2 t ) 1/4 ] U a = ∂ x ∂ t B̃ in Q + , where U a stands for the extension of U which is antisymmetric in t and B̃ is another Brownian sheet.
Remark 3.12 In this theorem, U is considered a regular generalised function on C ∞ c (Q + ). However, the proof of Proposition 3.9(i) makes clear that, for fixed x ∈ R and f ∈ C ∞ c (Q + ), the long-time behaviour of (−∂ 2 t ) 1/2 U a is based on an extension of the regular generalised function U which will be explained in the proof of the theorem.
Coming back to the martingale problem associated with (3.14), choose D = D × D, which is a subset of the topological dual of E = E 1 × E 2 , and denote by A the subset of C b (E) × C(E) whose elements (F, G) are built from smooth cylinder functions over D. This definition of the subset A of course requires (−∂ 2 t ) 1/2 h a ∈ E ′ 1 and (−∂ 2 t ) 1/4 h a ∈ E ′ 2 for h ∈ D, which follows from Proposition 3.9(i) and Remark 3.10.
Then, according to [EK1986, Chapter 3], by a solution of the martingale problem for A with respect to F z one would mean an F z -progressively measurable process R = (R z ) z≥0 on a filtered probability space (Ω, F , IP) taking values in E such that F (R z ) − ∫_0^z G(R ζ ) dζ is a martingale with respect to the filtration F z , z ≥ 0, for all (F, G) ∈ A. When an initial condition µ is specified, it is also said that the process R is a solution of the martingale problem for (A, µ).
Next we check whether {(U(x 0 +z, ·), ∂ 1 U(x 0 +z, ·)); z ≥ 0} is a solution of the martingale problem for our set A with respect to F̃ x 0 +z .
First, the process {(U(x 0 + z, ·), ∂ 1 U(x 0 + z, ·)); z ≥ 0} has an F̃ x 0 +z -progressively measurable version because it is F̃ x 0 +z -adapted and, by Proposition 3.3(v), has an F ⊗ B([x 0 , ∞))-measurable version taking values in the space E = E 1 × E 2 . This can be verified the same way the analogous statement for real-valued adapted measurable processes was verified in [CD1965]. Notice that, by construction, the filtration F̃ x inherits right-continuity from the filtration F (−∞,x] defined on page 8. Second, knowing that {(U(x, ·), ∂ 1 U(x, ·)); x ≥ x 0 } satisfies (3.8) and (3.14), an easy application of Itô's formula to R z = (U(x 0 + z, ·), ∂ 1 U(x 0 + z, ·)) yields that F (R z ) − ∫_0^z G(R ζ ) dζ is indeed a martingale with respect to F̃ x 0 +z for all (F, G) ∈ A. Now we hope that the well-posedness condition
(wp) for each probability measure µ on (E, B(E)), any two solutions R, R ′ of the martingale problem for (A, µ) with respect to F z , F ′ z have the same one-dimensional distributions, that is, IP ◦ R z −1 = IP ◦ (R ′ z ) −1 for each z > 0,
as stated in [EK1986, Thm.4.2], is enough to ensure that a solution (R z ) z≥0 of the martingale problem is strong Markov in the sense of:
Definition 3.13 Let E be a separable metric space and let µ be a probability measure on (E, B(E)). An F z -progressively measurable process (R z ) z≥0 on a filtered probability space (Ω, F , IP) taking values in E is said to be a strong Markov process with initial condition µ if (i) IP({R 0 ∈ Γ}) = µ(Γ) for all Γ ∈ B(E); (ii) for any F z -stopping time ξ ≥ 0, y ≥ 0 and Γ ∈ B(E), IP({R ξ+y ∈ Γ} | F ξ ) = IP({R ξ+y ∈ Γ} | R ξ ) a.s.
To show the strong Markov property of R z = (U(x 0 + z, ·), ∂ 1 U(x 0 + z, ·)) with initial condition IP ◦ (U(x 0 , ·), ∂ 1 U(x 0 , ·)) −1 we want to apply Thm.4.2(b) in [EK1986]. But the conclusion of this theorem is stated under the extra conditions that A ⊆ C b (E) × C b (E) and that (R z ) z≥0 has a right-continuous version taking values in E.
Remark 3.14 (i) Our set A defining the martingale problem is not a subset of C b (E) × C b (E). However, in the general situation of [EK1986, Thm.4.2(b)], the boundedness of F and G is the natural condition to ensure that the corresponding process is a well-defined martingale. It turns out that Thm.4.2(b) in [EK1986] remains valid when adding a condition of type (3.8) to the definition of the martingale problem-see Definition 3.15(iii) and Remark 3.16(ii) below.
(ii) Taking another look at the proof of Thm.4.2(b) in [EK1986] reveals that the right-continuous version of the solution is only needed for the process F (R z ) − ∫_0^z G(R ζ ) dζ, (F, G) ∈ A, to be a right-continuous martingale, in order to be able to apply Doob's optional sampling theorem. So, it is already enough to require right-continuity of z → F (U(x 0 + z, ·), ∂ 1 U(x 0 + z, ·)) for all F ∈ FC ∞ b (D) to make the theorem work in our case.
It is easy to realise that there is a version of the process {(U(x, ·), ∂ 1 U(x, ·)); x ≥ x 0 } such that z → F (U(x 0 + z, ·), ∂ 1 U(x 0 + z, ·)) is continuous for all F ∈ FC ∞ b (D). First, it is well-known (see [W1986] for example) that the process {W z (l); (z, l) ∈ [0, ∞) × L 2 ([0, ∞))} defined on page 9 has a version such that {W z (·); z ≥ 0} is a continuous D ′ -valued process. Second, one uses this D ′ -valued version of {W z (·); z ≥ 0} together with the equation satisfied by the pair. So, by Thm.4.2(b) in [EK1986] and Remark 3.14, the strong Markov property of our process {(U(x 0 + z, ·), ∂ 1 U(x 0 + z, ·)); z ≥ 0} becomes a direct implication of the well-posedness condition (wp). But, for showing the uniqueness wanted in (wp), we need to work with a more restrictive martingale problem than Ethier/Kurtz in [EK1986]. Recall the set A ⊆ C b (E) × C(E) introduced on page 13.
Definition 3.15 An F z -progressively measurable process {(u z , v z ); z ≥ 0} on a filtered probability space (Ω, F , IP) taking values in E = E 1 × E 2 is called a solution of the martingale problem for A with respect to F z iff (i) the mappings z → u z (h) and z → v z (h) are continuous for all h ∈ D; (ii) the map (z, t) → u z (t) is continuous on the closure of Q 0 + ; (iii) an integrability condition of type (3.8) holds for (u z , v z ).
Remark 3.16 (i) We also use the phrase 'solution of the martingale problem for (A, µ)' when a specific initial condition µ is emphasised as in (wp).
(ii) We claim that Thm.4.2(b) in [EK1986] remains valid with respect to our more restrictive definition of the martingale problem when being applied to show the strong Markov property of {(U(x 0 + z, ·), ∂ 1 U(x 0 + z, ·)); z ≥ 0}. A quick glance at the proof of this theorem shows that one only has to pay attention to the property (iii) of our Definition 3.15 and this will be done in the next item of this remark.
(iii) Adapting the proof of Thm.4.2(b) in [EK1986] to our setup, fix a finite F̃ x 0 +z -stopping time ξ, choose Ξ ∈ F̃ x 0 +ξ such that IP(Ξ) > 0 and introduce, for all Γ ∈ F , two probability measures IP 1 and IP 2 by conditioning on Ξ. The task is to show the property in Definition 3.15(iii) if E is replaced by the expectation operators E 1 and E 2 given by the measures IP 1 and IP 2 , respectively. But, for fixed z > 0, we obtain an estimate for i = 1, 2 where, by Proposition 3.3(v), the last term is finite if the stopping time ξ is bounded. And it is sufficient to check the strong Markov property for bounded stopping times only-we refer to Problem 2.6.9 in [KS1991] for example.
After this preparation, the key part of the proof of the below theorem consists in verifying the well-posedness-condition (wp) on page 14 for our martingale problem. Recall the spaces E 1 , E 2 defined in Proposition 3.3 and assume that the parameter ε > 0 used to define E 2 is less than 1/2.
Theorem 3.17 The F̃ x 0 +z -progressively measurable version of the process {(U(x 0 + z, ·), ∂ 1 U(x 0 + z, ·)); z ≥ 0} taking values in E 1 × E 2 is a stationary homogeneous strong Markov process which is associated with the martingale problem of Definition 3.15 via a pathwise unique stochastic differential equation in E 1 × E 2 , formally driven by a D ′ -valued Wiener process {W z ; z ≥ 0}.
Corollary 3.18 (i) The unique weak solution U(x, t) to (3.1), when seen as a process U(x, ·) indexed by x ≥ x 0 taking values in E 1 , satisfies IP({U(ξ + y, ·) ∈ Γ} | F̃ ξ ) = IP({U(ξ + y, ·) ∈ Γ} | U(ξ, ·), ∂ 1 U(ξ, ·)) a.s. for any finite F̃ x -stopping time ξ ≥ x 0 and any y ≥ 0, Γ ∈ B(E 1 ). This remains valid when the filtration F̃ x , x ≥ x 0 , is replaced by the filtration generated by the process {(U(x, ·), ∂ 1 U(x, ·)); x ≥ x 0 } augmented by the IP-null sets in F .

Proofs
Proof of Proposition 3.3. For (i) fix (x, h) ∈ [x 0 , ∞) × D and start from the identity that follows from the definition of ∂ 1 U(y, h). The integral on its right-hand side a.s. can be rewritten by applying Lemma 3.5 with respect to the bounded function φ = 1 (x 0 ,x] . Here the condition of Lemma 3.5(ii) is easily satisfied because the covariance of ∂ 1 U(y, h) given in Proposition 3.3(ii) does not depend on y. Then the equation for U(x, h) follows from (4.1) by Fubini's theorem with respect to dt dy, which can be applied for every (y ′ , s ′ ) ∈ Q + because ∫_{x 0}^{x} ∫_0^∞ |∂ 3 g(y ′ , s ′ ; y, t) h(t)| dt dy < ∞.
In order to show the equation for ∂ 1 U(x, h) we first perform a direct calculation. Again applying Fubini's theorem and our stochastic Fubini Lemma 3.5, one sees that the claimed identity holds true by the strong continuity of the heat semigroup in L 2 (R). For proving item (ii) of the proposition fix x ≥ x 0 and h 1 , h 2 ∈ D; the covariance of {U(x, h); h ∈ D} is then expressed through ∫_R ∫_0^∞ g(y, s ; x, t) g(y, s ; x, t ′ ) ds dy.
We only mention that, by the well-known properties of the Green's function g, the integrability conditions needed for the above calculation are satisfied in the case of test functions h 1 , h 2 with compact support.
The covariance of the process {∂ 1 U(x, h); h ∈ D} can be verified by a similar calculation based on ∫_R ∫_0^∞ ∂ 3 g(y, s ; x, t) ∂ 3 g(y, s ; x, t ′ ) ds dy, and ∫_R ∫_0^∞ g(y, s ; x, t) ∂ 3 g(y, s ; x, t ′ ) ds dy = 0 for all t, t ′ ≥ 0 gives the independence of the two processes. We now show part (iii) of the proposition. Fix x ≥ x 0 and α > 3/2. If h ∈ C 0,α then the relevant bound holds because α > 3/2. Hence, by Lemma 3.5, the process {U(x, h); h ∈ D} can be extended to h ∈ C 0,α . But the above calculation also yields a uniform bound and, as a consequence, there is a version of the process {U(x, h); h ∈ D} taking values in (C 0,α ) ′ . Since U given by (3.2) is continuous in (x, t) ∈ Q + such that lim t↓0 U(x, t) = 0 for all x ≥ x 0 , this version even takes values in E 1 . Next we prove item (iv) of Proposition 3.3. Note that the Sobolev space H β can be identified with (Id − ∂ 2 t ) −β/2 L 2 (R) in the sense of generalised functions. Fix β > 1/4 and define the operator Kh = (Id − ∂ 2 t ) −β/2 (w −1 h a ), h ∈ D. Of course, K −1 exists. The above implies that if the linear form ∂ 1 U(x, ·) can be extended to the linear hull of D ∪ {Ke i } ∞ i=1 then one obtains a version of ∂ 1 U(x, ·) taking values in (H a w,β ) ′ . In order to show (4.5) recall from Remark 3.4(ii) that C 2 h = const (−∂ 2 t ) −1/4 h a . First, applying Proposition 3.3(ii), we see that the corresponding second moments are finite for each single i since |w −1 e i | ≤ |e i | and e i ∈ D. As a consequence, ∂ 1 U(x, ·) can be extended to the linear hull of D ∪ {Ke i } ∞ i=1 and the left-hand side of (4.5) makes sense. Taking into account the calculations of the last paragraph, condition (4.5) becomes equivalent to the convergence of a series, which holds since β > 1/4. We finally justify item (v) of Proposition 3.3. First, the stationarity follows from item (ii) because the covariances do not depend on x ≥ x 0 .
Second, (ω, x, t) → U(ω, x, t) is clearly F ⊗ B([x 0 , ∞)) ⊗ B([0, ∞))-measurable, leading to an F ⊗ B([x 0 , ∞))-measurable version of x → U(x, ·) ∈ E 1 , and the version of (ω, x) → ∂ 1 U(ω, x, ·) given by (4.6) is also F ⊗ B([x 0 , ∞))-measurable. Third, using stationarity, (3.8) already follows from the corresponding bound for an arbitrary but fixed x ≥ x 0 , where the first expectation is finite because of (4.3), (4.4) and the second expectation is equal to the left-hand side of (4.5) which was shown to be finite above. Finally, the existence of a continuous version on the closure of Q 0 + of the solution (x, t) → U(x, t) as given by (3.2) is standard; see [W1986].
Proof of Proposition 3.6. Recalling Remark 3.7(i), we only deal with the following two issues and refer to Theorem 12.1 in [Y1997] otherwise.
First, after several steps, one has to identify a certain covariation. By the assumptions on ̺ l made in the proposition, and by the martingale representation of F (L), the resulting right-hand side is the wanted covariation. Second, if ̺ l exists for l 1 , l 2 ∈ L 2 ([0, ∞)) and if a 1 , a 2 ∈ R then ∫_0^∞ λ̇ ys (F ) (a 1 l 1 + a 2 l 2 )(s) ds = a 1 ∫_0^∞ λ̇ ys (F ) l 1 (s) ds + a 2 ∫_0^∞ λ̇ ys (F ) l 2 (s) ds a.s.
But (ω, φ, y) → a 1 ̺ l 1 (ω, φ, y) + a 2 ̺ l 2 (ω, φ, y) also satisfies the other properties of ̺ l stated in the proposition hence it can be taken to be ̺ a 1 l 1 +a 2 l 2 . Then the linearity (3.11) follows from the uniqueness of ̺ l explained in Remark 3.7(ii). Note that (3.11) is not required for the argument given in Remark 3.7(ii).
Proof of Lemma 3.8. Fix ν > 0 and observe that, by the change of variable t ′ = tr, the test function l ν admits an integral representation. Thus, since the function r → (|1 − r| −1/2 − (1 + r) −1/2 ) p on [0, ∞) is integrable for all 1 ≤ p < 2, Lebesgue's dominated convergence theorem implies both the continuity of l ν and l ν (t) → 0 as t tends to zero. The next step is to identify ̺ lν (φ, y) as given in the lemma so, in particular, we have to show (3.10). Fix y > x 0 and F ∈ FC ∞ b (D) given by a smooth cylinder function over h 1 , . . . , h n ∈ D for some n ≥ 1. Since U(x 0 , ·) is a stochastic integral against the Brownian sheet B with deterministic integrand, the Malliavin derivative D ys F (U(x 0 , ·)) exists and can be given explicitly. Furthermore, by the Clark-Ocone formula, we obtain an identity in which l 0 ν is treated as a function on R and g x 0 y was defined in Section 2.
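For orientation, the classical one-parameter Clark-Ocone formula, of which the Brownian-sheet version invoked above is the analogue, reads as follows (for $F$ in the Malliavin-Sobolev space $\mathbb{D}^{1,2}$ of a Brownian motion $W$ with natural filtration $(\mathcal{F}_t)$):

```latex
F \;=\; \mathbb{E}\,F \;+\; \int_0^\infty \mathbb{E}\big[\, D_t F \;\big|\; \mathcal{F}_t \,\big]\, dW_t
\qquad \text{a.s.}
```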
Since U(x_0, ·) − U(x_0, ·)_y and F_{(−∞,y]} are independent, the last sum of conditional expectations simplifies accordingly, where µ_y denotes the image measure of U(x_0, ·) − U(x_0, ·)_y on E_1 equipped with the Borel σ-algebra. Remark that, similar to the proof of Proposition 3.3(iii), there are versions of U(x_0, ·)_y and U(x_0, ·) − U(x_0, ·)_y taking values in E_1. Now introduce the function whose right-hand side is the Gâteaux derivative in the direction g^{x_0}_y * l^0_ν, defined via d/dr evaluated at r = 0. Note that this requires g^{x_0}_y * l^0_ν ∈ E_1, which can easily be verified using the explicit structure of g^{x_0}_y because l^0_ν is bounded. Having found that the conditional expectation equals ∫_0^∞ λ̇_{ys}(F) l_ν(s) ds a.s.,
we now want to justify this identity provided the direction g^{x_0}_y * l^0_ν is in the Cameron-Martin space H_y of the Gaussian measure µ_y with covariance C_y : E′_1 → E′′_1. Remark 4.1 (i) We refer to [B1998] as a good reference for the theory of Gaussian measures on infinite-dimensional spaces. The covariance C_y : E′_1 → E′′_1 can be extended to the reproducing kernel Hilbert space H′_y of µ_y, and C_y acts on H′_y as an isomorphism between H′_y and the Cameron-Martin space H_y, H_y ⊆ E_1 ⊆ E′′_1. So, checking whether g^{x_0}_y * l^0_ν ∈ H_y can be done by finding a solution m^y_ν ∈ H′_y of the equation g^{x_0}_y * l^0_ν = C_y m^y_ν (4.10) and this will be the next step of the proof.
(ii) It is clear that we have chosen l_ν in a way that (4.10) can be solved in H′_y. When applying our method to other linear SPDEs with additive Gaussian noise, one has to study an equation of the same type, g^{x_0}_y * l = C_y m, where g^{x_0}_y comes from the Green's function associated with the SPDE and C_y is the covariance of some Gaussian measure. The task is then to identify the 'good' test functions l for which such an equation can be solved.
We will show that g^{x_0}_y * l^0_ν = C_y e^{−ν·} / g^{x_0}_y(ν), which also implies that the direction g^{x_0}_y * l^0_ν must be in H_y because e^{−ν·} ∈ E′_1 ⊆ H′_y and C_y : H′_y → H_y is an isomorphism. Let us show the claimed equality. As C_y is the covariance of the image measure of the E_1-valued random variable U(x_0, ·) − U(x_0, ·)_y, a direct computation yields the claimed identity for all ν > 0, proving C_y e^{−ν·} / g^{x_0}_y(ν) = g^{x_0}_y * l^0_ν in the end.
The above allows us to substitute this solution on the right-hand side of (4.9). Because of (4.8), this justifies that ̺_{l_ν}(φ, y) as given in Lemma 3.8 satisfies (3.10). It also satisfies the measurability conditions stated in Proposition 3.6 and it only remains to show that ̺_{l_ν}(U(x_0, ·), y) ∈ L^1(Ω) for almost every y ≥ x_0 and that y → ̺_{l_ν}(U(x_0, ·), y) lies in the required space, where the equality in the last line is obtained by manipulations similar to the lines of proof of Proposition 3.3(ii). We finally prove (3.12). On the one hand, for fixed y ≥ x_0, we have one representation; on the other hand, it also equals U(y, √ν e^{−ν·}) + ∂_1 U(y, e^{−ν·}).

Remark 4.2 The last part of the above proof also shows the corresponding identity.

Proof of Proposition 3.9. Fix h ∈ D. Then h^a is infinitely often differentiable with compact support in R. So, the function (A_2 h)^a defined by convolution is a C^∞-function. Next, choose an upper bound c_h for the support of h and fix t > c_h. Then the claim that ∂^k_t A_2 h ∈ C_{0,α} for 0 ≤ α < k + 3/2, k = 1, 2, . . . , can be shown exactly the same way, only using the corresponding weighted norms; recall that w is a smooth weight function such that, for some ε > 0, w ≥ 1 + |·|^{1/2+ε} but w = 1 + |·|^{1/2+ε} outside a neighbourhood of zero. Hence, A_2 h ∈ H^a_{w,0} if and only if w(A_2 h)^a ∈ L^2(R); hence, assuming ε < 1/2, it remains to discuss the case β > 0. Denote by ⌈β⌉ the smallest integer larger than β. Then the resulting estimate proves A_2 h ∈ H^a_{w,β}. Note that ε < 1/2 is needed for the finiteness of the first summand in the last line, again.
It remains to prove the convergence of the three sequences in (4.15). Fix z ≥ 0. First, the convergence follows from (4.14), using the definition of W_z(l) for l ∈ L^2([0, ∞)) as a stochastic integral given on page 9. Second, applying Proposition 3.3(ii), we obtain a bound, so we need ‖A_1(e_n − A_2 h)‖_{0,α} → 0, n → ∞, for some α > 5/4. Fix α > 5/4. By Proposition 3.9(i), this α should be less than 2 to ensure that A_1(e_n − A_2 h) ∈ C_{0,α}. Then the last supremum is finite for α < α_0 − 1/2 by manipulations similar to how (4.11) was derived. Recall that we can choose α ∈ (5/4, 2) and α_0 ∈ (2, 5/2) so that the convergence of the second sequence in (4.15) follows from (4.13). Third, applying Proposition 3.3(ii) once more, we obtain a bound whose right-hand side converges to zero by (4.12),(4.13) and (4.14), which completes the discussion of the convergence of the sequences in (4.15).
Proof of Theorem 3.11. In this proof one has to distinguish between the regular generalised function U on C^∞_c(Q_+) given by the continuous version of the right-hand side of (3.2) and the version of the process {(U(x, ·), ∂_1 U(x, ·)); x ≥ x_0} provided by Proposition 3.3(v).
= −∫_R f′(x) U(x, h) dx; hence, by integration by parts, the first equation of (3.14) holds for all (f, h) ∈ C^∞_c(x_0, ∞) × D almost surely. Next, integrating both sides of the second equation of (3.14) against −f′, we obtain an identity in which we have used (4.2), the first equation of (3.14) and integration by parts. Here, by Proposition 3.3(iii) and Proposition 3.9(i), the first summand on the right-hand side is well-defined; for the second summand, by Remark 3.12, the meaning of U needs to be extended.
Recall that a measurable version of the process {∂_1 U(x, ·); x ≥ x_0} taking values in E_2 was chosen at the beginning of the proof. Furthermore, let Ω_0 ⊆ Ω be such that, on Ω_0, both of the required properties hold true; in particular, for each ω ∈ Ω_0, the corresponding map is continuous. Then, by continuity, the last equation can be extended to all f ∈ C^∞_c(Q^{x_0}_+) almost surely. Observe that the above left-hand side does not depend on the choice of x_0; hence, if the same Brownian sheet B̃ can be used for all x_0, then (4.19) can easily be extended to hold for all f ∈ C^∞_c(Q_+) almost surely, proving the theorem. It remains to construct the Brownian sheet B̃. By Proposition 3.9, in particular using (3.13), the process ∫_R f′(x) W_{x−x_0}(h) dx indexed by (f, h) ∈ C^∞_c(x_0, ∞) × D is a centred Gaussian process with covariance ∫_R ∫_0^∞ f_1(x) h_1(t) f_2(x) h_2(t) dt dx, which can be represented, independently of x_0, through a Gaussian process η. Thus, the process η extends to a centred Gaussian process indexed by L^2(R) × L^2([0, ∞)) and the continuous version of the process {η(sgn(x) 1_{[x∧0, x∨0]}, 1_{[0,t]}); (x, t) ∈ Q_+} gives the wanted Brownian sheet B̃.
Proof of Theorem 3.17. According to the sequence of arguments laid down in the results section between Theorem 3.11 and Theorem 3.17, there is a version of the process {(U(x_0 + z, ·), ∂_1 U(x_0 + z, ·)); z ≥ 0} which satisfies all conditions of Definition 3.15. Hence, by Theorem 4.2(b) in [EK1986] and Remark 3.14, if (wp) on page 14 holds true for the martingale problem of Definition 3.15, then {(U(x_0 + z, ·), ∂_1 U(x_0 + z, ·)); z ≥ 0} is a strong Markov process in the sense of Definition 3.13. Moreover, it is stationary by Proposition 3.3(v).
The condition (wp) will be shown in two steps. First, for an arbitrary solution {(u_z, v_z); z ≥ 0} to the martingale problem of Definition 3.15, we prove the system (4.20) of stochastic integral equations for ⟨u_z; h⟩ and ⟨v_z; h⟩ a.s.
for all (z, h) ∈ [0, ∞) × D where {W z ; z ≥ 0} is a D ′ -valued Wiener process with respect to the filtration F z and, second, we verify that the above stochastic integral equation has a pathwise unique solution satisfying the conditions (i),(ii),(iii) of Definition 3.15. This indeed shows (wp) because pathwise uniqueness of stochastic differential equations implies weak uniqueness and weak uniqueness is sufficient for the uniqueness of the one-dimensional marginal distributions. The first step is fairly standard and we only sketch the key idea. Also, the filtration to be considered for all martingales and Wiener processes mentioned below is F z , z ≥ 0.
Define F_N ∈ FC^∞_b(D) by means of h ∈ D and f_N ∈ C^∞_b(R) satisfying f_N(x) = x for x ∈ [−N, N] and sup_{x∈R} |f_N(x)| ≤ N + 1, N = 1, 2, . . . Then, using both Definition 3.15(iv) with respect to F_N and the stopping times inf{z ≥ 0 : |⟨u_z; h⟩| + |⟨v_z; h⟩| ≥ N}, the process {⟨u_z; h⟩ − ⟨u_0; h⟩ − ∫_0^z ⟨v_y; h⟩ dy; z ≥ 0} can be shown to be a continuous local martingale with quadratic variation zero, which proves the first equation of (4.20).
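The step from vanishing quadratic variation to the first equation of (4.20) rests on a standard fact; a minimal sketch in the notation of this proof:

```latex
% Standard fact: a continuous local martingale with vanishing
% quadratic variation and zero initial value is identically zero.
\text{Let } M_z := \langle u_z; h\rangle - \langle u_0; h\rangle - \int_0^z \langle v_y; h\rangle\, dy .
\text{If } (M_z)_{z\ge 0} \text{ is a continuous local martingale with } [M]_z = 0 \text{ for all } z \ge 0,
\text{then, localising by stopping times } \tau_N \text{ and using }
\mathbb{E}\big[M_{z\wedge\tau_N}^2\big] = \mathbb{E}\big[[M]_{z\wedge\tau_N}\big] = 0,
\text{we obtain } M_{z\wedge\tau_N} = 0 \text{ a.s.; letting } N \to \infty \text{ yields } M \equiv 0 .
```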
In a similar way one shows that, for each h ∈ D, the process {⟨v_z; h⟩ − ⟨v_0; h⟩ − ∫_0^z … (1/4) h^a … dy; z ≥ 0} is a continuous local martingale with quadratic variation z ‖h‖²_{L²([0,∞))}. Here one also needs that the stochastic integral of an adapted continuous process against a continuous local martingale always exists.
As a consequence, by the martingale characterisation of the standard Wiener process, for each h ∈ D, there is a continuous process {W_z(h); z ≥ 0} on (Ω, F, IP) such that the second equation of (4.20) is satisfied for all z ≥ 0 almost surely and {W_z(h)/‖h‖_{L²([0,∞))}; z ≥ 0} is a standard Wiener process if h ≠ 0. Of course, W_z(h) inherits the linearity W_z(a_1 h_1 + a_2 h_2) = a_1 W_z(h_1) + a_2 W_z(h_2) a.s. for each z ≥ 0, a_1, a_2 ∈ R, h_1, h_2 ∈ D, from the process {(u_z, v_z); z ≥ 0} taking values in E_1 × E_2. Hence, by standard theory (see [W1986] for example), there is a version of the centred Gaussian process W_z(h) indexed by (z, h) ∈ [0, ∞) × D such that both the map D → R : h → W_z(h) is an element of D′ for each z ≥ 0 and the map [0, ∞) → D′ : z → W_z is continuous. This means that {W_z; z ≥ 0} can indeed be considered a D′-valued Wiener process and the first step of proving (wp) is done.
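The characterisation invoked above is Lévy's; stated for completeness in the notation of this proof:

```latex
% Lévy's characterisation of the Wiener process: a continuous local
% martingale M with M_0 = 0 and quadratic variation [M]_z = z is a
% standard Wiener process.
\text{Set } M_z := W_z(h)\,/\,\|h\|_{L^2([0,\infty))}, \qquad h \neq 0 .
\text{Then } M_0 = 0, \; (M_z) \text{ is a continuous local martingale with } [M]_z = z,
\text{so, by Lévy's theorem, } (M_z)_{z \ge 0} \text{ is a standard Wiener process.}
```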
It remains to show the pathwise uniqueness of the system of equations (4.20). So assume that two F_z-progressively measurable processes {(u^1_z, v^1_z); z ≥ 0} and {(u^2_z, v^2_z); z ≥ 0} on (Ω, F, IP) taking values in E_1 × E_2 satisfy:
• u^1_0 = u^2_0 and v^1_0 = v^2_0;
• the equations (4.20), with the same Wiener process, for all (z, h) ∈ [0, ∞) × D.
Setting u_z = u^1_z − u^2_z, we want to show that u ≡ 0 almost surely. First, by the same principles applied in the proof of Theorem 3.11, one can justify (4.21) for all f ∈ C^∞_c(Q_+) almost surely where, in this context, u stands for the regular generalised function given by u_z(t), (z, t) ∈ Q^0_+, and f|_{Q^0_+} denotes the restriction of f to Q^0_+. Notice that, because f|_{Q^0_+} does not have compact support in Q^0_+ for general f ∈ C^∞_c(Q_+), one also has to approximate f|_{Q^0_+} by functions from C^∞_c(Q^0_+) when showing (4.21).
Second, since the map (z, t) → u_z(t) is continuous on the closure of Q^0_+ and zero on the boundary of Q^0_+,

u(z, t) := 0 for z < 0, t ∈ R;  u_z(t) for z ≥ 0, t ≥ 0;  −u_z(−t) for z ≥ 0, t < 0

defines a continuous function on R^2. Furthermore, for f ∈ C^∞_c(Q_+), we have that