Information loss on Gaussian Volterra process

Gaussian Volterra processes are processes of the form $(X_t := \int_{\mathbb{T}} k(t,s)\,\mathrm{d}W_s)_{t\in\mathbb{T}}$, where $(W_t)_{t\in\mathbb{T}}$ is a Brownian motion and $k$ is a deterministic Volterra kernel. On integrating the kernel $k$ an information loss may occur, in the sense that the filtration of the Volterra process needs to be enlarged in order to recover the filtration of the driving Brownian motion. In this note we describe such an enlargement of filtrations in terms of the Volterra kernel. For kernels of the form $k(t,s) = k(t-s)$ we provide a simple criterion ensuring that the aforementioned filtrations coincide.


Introduction
Let $\mathbb{T}$ be a time interval, say $[0,T]$ or $\mathbb{R}$, and let $(W_t)_{t\in\mathbb{T}}$ be a Brownian motion.
Consider a process of the form
$$X_t := \int_{\mathbb{T}} k(t,s)\,\mathrm{d}W_s, \qquad t \in \mathbb{T}, \tag{1.1}$$
where $k : \mathbb{T}\times\mathbb{T} \to \mathbb{R}$ is a Volterra kernel, i.e., a square-integrable deterministic function ensuring that the right-hand side of equation (1.1) is well-defined, and such that
$$k(t,s) = 0, \qquad t < s. \tag{1.2}$$
In what follows we shall refer to $(X_t)_{t\in\mathbb{T}}$ as a Volterra process driven by $(W_t)_{t\in\mathbb{T}}$, although we acknowledge here that such a process may receive other names in the literature. The function $k$ will be referred to as the kernel of $(X_t)_{t\in\mathbb{T}}$.
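The stochastic integral in (1.1) can be approximated on a grid by an Euler-type discretization. The following Python sketch (with an illustrative exponential kernel, chosen here only for concreteness and not taken from the examples below) simulates sample paths of a Volterra process:

```python
import numpy as np

def simulate_volterra(kernel, T=1.0, n=500, n_paths=1, rng=None):
    """Simulate X_t = int_0^t k(t, s) dW_s on a uniform grid of [0, T].

    The stochastic integral is approximated by a left-point Riemann sum:
    X_{t_i} ~ sum_{j < i} k(t_i, t_j) * (W_{t_{j+1}} - W_{t_j}).
    """
    rng = np.random.default_rng(rng)
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    # Brownian increments dW_j ~ N(0, dt).
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
    # Lower-triangular matrix K[i, j] = k(t_{i+1}, t_j) for t_j <= t_{i+1};
    # the triangular structure encodes the Volterra property k(t, s) = 0, t < s.
    K = np.tril(kernel(t[1:, None], t[None, :-1]))
    X = np.concatenate([np.zeros((n_paths, 1)), dW @ K.T], axis=1)
    return t, X

# Illustrative kernel k(t, s) = exp(-(t - s)) for s <= t (an
# Ornstein-Uhlenbeck-type Volterra process).
t, X = simulate_volterra(lambda t, s: np.exp(-(t - s)), n_paths=3, rng=42)
```

The same routine accepts any kernel of the form above; only the `kernel` callable needs to change.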
Volterra processes have been successfully applied in different fields such as mathematical finance, electronics, hydrology, network traffic, and telecommunications (see for instance [1,2,3,4,6,9] and references therein). One important reason for this relies on the fact that such processes have a versatile covariance structure that can be adjusted by means of the kernel function $k$. For instance, we can obtain a Volterra process with long-range dependence by matching the covariance of fractional Brownian motion with Hurst parameter $\tfrac{1}{2} < H < 1$. This can be done by taking $\mathbb{T} = [0,T]$ and the so-called Molchan-Golosov kernel (see [10]) defined by
$$k_H(t,s) := c_H\, s^{\frac{1}{2}-H} \int_s^t (u-s)^{H-\frac{3}{2}}\, u^{H-\frac{1}{2}}\,\mathrm{d}u, \qquad s < t,$$
with $c_H := \left(\frac{H(2H-1)\,\Gamma(\frac{3}{2}-H)}{\Gamma(H-\frac{1}{2})\,\Gamma(2-2H)}\right)^{1/2}$, where $\Gamma$ is the Gamma function, and $k_H(t,s) := 0$ for $t < s$. Alternatively, we can take $\mathbb{T} = \mathbb{R}$ and the so-called Mandelbrot-Van Ness kernel (see [8]) defined by
$$\bar{k}_H(t,s) := \frac{1}{\Gamma(H+\frac{1}{2})}\left((t-s)_+^{H-\frac{1}{2}} - (-s)_+^{H-\frac{1}{2}}\right).$$
Although both alternatives lead to Volterra processes with the same covariance structure, the filtrations generated in each case are different (see [7,11]). Indeed, in the former case the natural filtration $\mathbb{F}^X = (\mathcal{F}^X_t)_{t\in\mathbb{T}}$ generated by the Volterra process coincides with the natural filtration $\mathbb{F}^W = (\mathcal{F}^W_t)_{t\in\mathbb{T}}$ generated by the driving Brownian motion.

* Universitat de Barcelona, Gran Via de les Corts Catalanes, 585, E-08007 Barcelona, Spain. E-mail: arturo@valdivia.xyz
However, in the latter case an information loss occurs on integrating the kernel: the inclusion $\mathcal{F}^X_t \subseteq \mathcal{F}^W_t$ turns out to be strict. This information loss is undesirable from the applications point of view. The purpose of this note is precisely to show how the Volterra process filtration needs to be enlarged in order to recover the information given by the driver. In Section 2 an explicit construction of this enlargement is given in Theorem 2.2. Then in Corollary 2.3 we prove that if a kernel $k(t,s)$ does not depend on $t$ and $s$ separately but only on their difference $t-s$, then a sufficient condition for $\mathbb{F}^X = \mathbb{F}^W$ is that $k$ has a non-vanishing Laplace transform. We conclude the exposition with some examples in Section 3.

The result
For simplicity in the exposition we shall focus on the interval $\mathbb{T} = [0,T]$. In what follows we will consider the Hilbert space $L^2([0,T],\mathrm{d}x)$, where $\mathrm{d}x$ denotes the Lebesgue measure; its inner product will be denoted by $\langle f, g\rangle_{L^2([0,T],\mathrm{d}x)}$. The linear subspace spanned by a set $S$ will be denoted by $\mathrm{sp}\,S$, and its closure in $L^2([0,T],\mathrm{d}x)$ by $\overline{\mathrm{sp}}^{\,L^2([0,T],\mathrm{d}x)}\,S$. Analogous notation will be used for $L^2(\Omega,\mathcal{F},\mathbb{P})$, where we shall denote the inner product by $\langle F, G\rangle_{L^2(\Omega,\mathcal{F},\mathbb{P})} := \mathrm{E}[GF]$. In addition, let us introduce the following definition.
Definition 2.1. For $t \in (0,T]$, set $V_k(t) := \overline{\mathrm{sp}}^{\,L^2([0,t],\mathrm{d}x)}\{k(u,\cdot) : u \in [0,t]\}$. If $V_k(t) \neq L^2([0,t],\mathrm{d}x)$ then the kernel $k$ is said to be degenerate at $t$. Moreover, the kernel $k$ is said to be degenerate (resp., non-degenerate) if it is degenerate (resp., non-degenerate) at $t$ for every $t \in (0,T]$.
In these terms, we have the following.
Theorem 2.2. Let $\mathbb{F}^X = (\mathcal{F}^X_t)_{t\in[0,T]}$ and $\mathbb{F}^W = (\mathcal{F}^W_t)_{t\in[0,T]}$ be the natural filtrations generated by a Volterra process and its driver, respectively. In the terminology of Definition 2.1, if $V_k^\perp(t)$ is the orthogonal complement of $V_k(t)$, then the following decomposition holds true:
$$\mathcal{F}^W_t = \mathcal{F}^X_t \vee \sigma\left(\int_0^t g(s)\,\mathrm{d}W_s : g \in V_k^\perp(t)\right). \tag{2.1}$$
In particular, if the kernel $k$ is non-degenerate at $t$ then $\mathcal{F}^X_t = \mathcal{F}^W_t$.

Proof. Let $H_t(X) := \overline{\mathrm{sp}}^{\,L^2(\Omega,\mathcal{F},\mathbb{P})}\{X_u : u \in [0,t]\}$, and define $H_t(W)$ analogously. We can see that the orthogonal complement of $H_t(X)$ in $H_t(W)$ is given by
$$H_t^\perp(X) = \left\{\int_0^t g(s)\,\mathrm{d}W_s : g \in V_k^\perp(t)\right\}.$$
Indeed, notice that for every $G = \int_0^t g(s)\,\mathrm{d}W_s$ with $g \in L^2([0,t],\mathrm{d}x)$, and every choice of $a_j \in \mathbb{R}$ and $t_j \in [0,t]$,
$$\Big\langle G, \sum_j a_j X_{t_j}\Big\rangle_{L^2(\Omega,\mathcal{F},\mathbb{P})} = \sum_j a_j\, \mathrm{E}\Big[\int_0^t g(s)\,\mathrm{d}W_s \int_0^t k(t_j,s)\,\mathrm{d}W_s\Big] = \sum_j a_j\, \langle g, k(t_j,\cdot)\rangle_{L^2([0,t],\mathrm{d}x)},$$
where the last identity follows from the Itô isometry. Hence $G$ belongs to $H_t^\perp(X)$ if and only if the last expression vanishes for every choice of $a_j \in \mathbb{R}$ and $t_j \in [0,t]$. That is equivalent to saying that $g$ belongs to the orthogonal complement of $V_k(t)$.
With the equation $H_t(W) = H_t(X) \oplus H_t^\perp(X)$ at hand, we can see that the Brownian motion $W$ can be written as a linear combination of elements from $H_t(X)$ and $H_t^\perp(X)$; and thus $\mathcal{F}^W_t = \sigma\left(F, G : F \in H_t(X),\ G \in H_t^\perp(X)\right)$. On the other hand, the $\sigma$-algebras $\mathcal{F}^X_t = \sigma\left(F : F \in H_t(X)\right)$ and $\sigma\left(G : G \in H_t^\perp(X)\right) = \sigma\left(\int_0^t g(s)\,\mathrm{d}W_s : g \in V_k^\perp(t)\right)$ are independent, due to the independence, as Gaussian random variables, of the elements of $H_t^\perp(X)$ and $H_t(X)$. Altogether we get the identity in (2.1).
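The key step of the proof, the Itô isometry $\mathrm{E}\big[\int_0^t g\,\mathrm{d}W \int_0^t h\,\mathrm{d}W\big] = \langle g, h\rangle_{L^2([0,t],\mathrm{d}x)}$, can be illustrated numerically. The following sketch uses a Monte Carlo estimate on discretized integrals, with the illustrative choices $g(s) = 1$ and $h(s) = s$ on $[0,1]$ (so that $\langle g, h\rangle = 1/2$):

```python
import numpy as np

# Monte Carlo illustration of the Ito isometry used in the proof:
# E[ (int_0^t g dW) (int_0^t h dW) ] = <g, h>_{L^2([0,t], dx)}.
rng = np.random.default_rng(0)
t_end, n, n_paths = 1.0, 100, 50_000
dt = t_end / n
s = np.linspace(0.0, t_end, n, endpoint=False)   # left endpoints of the grid
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))

g = np.ones_like(s)        # g(s) = 1
h = s                      # h(s) = s
I_g = dW @ g               # discretized int_0^t g(s) dW_s, one value per path
I_h = dW @ h               # discretized int_0^t h(s) dW_s

mc_inner = np.mean(I_g * I_h)        # Monte Carlo estimate of E[I_g I_h]
exact_inner = np.sum(g * h) * dt     # discretized <g, h>, close to 1/2
```

For a $g$ orthogonal to every section $k(t_j,\cdot)$ the same computation would return a value near zero, which is precisely the independence exploited above.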
We finish this section by showing that in some cases it is possible to establish a simple criterion to determine whether a given kernel is non-degenerate. In such cases Theorem 2.2 implies that the natural filtration of the Volterra process coincides with that of its driver.
Hereafter $\mathcal{L}$ will stand for the Laplace transform.

Corollary 2.3. Suppose that the kernel is of the form $k(t,s) = k(t-s)$, and that there exists an interval $(a,b) \subset (0,\infty)$ such that $\mathcal{L}[k](s) \neq 0$ for every $s \in (a,b)$. Then the kernel $k$ is non-degenerate. Moreover, $\mathbb{F}^X = \mathbb{F}^W$.
Proof. We shall prove that, for every $t \in (0,T]$, the space $V_k^\perp(t)$ is trivial, so that the kernel $k$ is non-degenerate at $t$. Let $f \in V_k^\perp(t)$. Then it follows from the definition of $V_k(t)$ that $\langle f, k(u-\cdot)\rangle_{L^2([0,t],\mathrm{d}x)} = 0$ for every $u \in [0,t]$, that is, the convolution of $k$ and $f$ vanishes; in particular we must have $\mathcal{L}[f](s) = 0$ for every $s \in (a,b)$. In light of [5, Lemma 2.3] this implies that $f = 0$, as desired.
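The hypothesis of Corollary 2.3 is easy to verify numerically. The following sketch checks it for the illustrative kernel $k(u) = e^{-\alpha u}$ (a hypothetical example chosen for its known closed-form transform $\mathcal{L}[k](s) = 1/(s+\alpha)$), comparing numerical quadrature against the closed form on a grid inside $(0,1)$:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative kernel k(u) = exp(-alpha * u), alpha > 0 (assumed example).
alpha = 2.0
k = lambda u: np.exp(-alpha * u)

def laplace(k, s):
    """L[k](s) = int_0^infty exp(-s u) k(u) du, by numerical quadrature."""
    val, _ = quad(lambda u: np.exp(-s * u) * k(u), 0.0, np.inf)
    return val

grid = np.linspace(0.1, 1.0, 10)
vals = np.array([laplace(k, s) for s in grid])
closed = 1.0 / (grid + alpha)   # closed form of L[exp(-alpha u)]
```

Since the transform stays bounded away from zero on the grid, the criterion of Corollary 2.3 applies to this kernel.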

Examples
We conclude this note with a series of examples. Our first example shows that the aforementioned information loss may occur even when the resulting Volterra process is again a Brownian motion. The rest of the examples are relevant due to their applications.
Let $k_\theta(t,u)$ denote the corresponding Volterra kernel. One can easily check that $(X^\theta_t)_{t\in[0,T]}$ defines a Brownian motion. However, its filtration does not coincide with that of $(W_t)_{t\in[0,T]}$. Indeed, for $t \in (0,T]$ and $g(x) := a + bx$ one can compute $\langle g, k_\theta(t,\cdot)\rangle_{L^2([0,t],\mathrm{d}x)}$ explicitly. Suppose we choose $a = 0$ and set $b$ in such a way that this inner product becomes zero. Then we have that $g$ belongs to the orthogonal complement of $V_{k_\theta}(t)$ and, in light of (2.1), we get $\mathcal{F}^{X^\theta}_t \subsetneq \mathcal{F}^W_t$ as claimed.

This kernel was used in [4] in order to work on an extension of the classical Black-Scholes model for option pricing. In such an extension the volatility parameter is no longer assumed to be constant, but rather a stochastic process having long-memory features and properties.
In order to show that the kernel $k_{H,+}$ is non-degenerate, it suffices to notice that a standard computation of the Laplace transform $\mathcal{L}[k_{H,+}]$ leads to an expression which does not vanish for $s$ in, for instance, the interval $(0,1)$.

Consider now the superposition process
$$X^{\mathrm{supOU}}_t := \sum_{j=1}^n w_j \int_0^t e^{-\alpha_j (t-s)}\,\mathrm{d}W_s, \tag{3.1}$$
where the parameters $\alpha_j > 0$ are all different, and the weights $w_j > 0$ satisfy $\sum_{j=1}^n w_j = 1$. Let us denote the corresponding kernel by $k_{OU}(t,s) := \sum_{j=1}^n w_j e^{-\alpha_j (t-s)}\mathbf{1}_{[0,\infty)}(t-s)$. As pointed out in [2], one of the main interests in the process $(X^{\mathrm{supOU}}_t)_{t\in[0,T]}$ defined in (3.1) is that many specialized features can be recovered by a suitable choice of the parameters $(\alpha_j, w_j)$, $j = 1, \ldots, n$. In particular, it is possible to construct a limit model exhibiting long-range or quasi long-range dependence.
In order to show that $k_{OU}$ is non-degenerate it suffices to notice that
$$\mathcal{L}[k_{OU}](s) = \sum_{j=1}^n \frac{w_j}{s + \alpha_j},$$
which does not vanish for $s \in (0,1)$.
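This closed form is straightforward to check numerically. The sketch below uses illustrative parameter values ($n = 3$; the $\alpha_j$ and $w_j$ are assumed, not taken from [2]) and compares the closed-form transform against direct quadrature, confirming that it stays positive on $(0,1)$:

```python
import numpy as np
from scipy.integrate import quad

# Superposition-of-OU kernel k_OU(u) = sum_j w_j exp(-alpha_j u), u >= 0.
alpha = np.array([0.5, 1.0, 3.0])   # distinct rates alpha_j (assumed values)
w = np.array([0.2, 0.3, 0.5])       # weights w_j summing to one (assumed)

def k_ou(u):
    return np.dot(w, np.exp(-alpha * u))

def laplace_numeric(s):
    """L[k_OU](s) by numerical quadrature over [0, infinity)."""
    val, _ = quad(lambda u: np.exp(-s * u) * k_ou(u), 0.0, np.inf)
    return val

def laplace_closed(s):
    """Closed form: sum_j w_j / (s + alpha_j)."""
    return np.sum(w / (s + alpha))

grid = np.linspace(0.05, 0.95, 19)
num = np.array([laplace_numeric(s) for s in grid])
cls = np.array([laplace_closed(s) for s in grid])
```

Since every summand $w_j/(s+\alpha_j)$ is positive for $s > 0$, the transform cannot vanish there, which is exactly the non-degeneracy criterion of Corollary 2.3.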