Second quantisation for skew convolution products of measures in Banach spaces

We study measures in a Banach space which arise as the skew convolution product of two other measures, where the convolution is deformed by a skew map. This is the structure that underlies both the theory of Mehler semigroups and operator self-decomposable measures. We show that, given such a set-up, the skew map can be lifted to an operator that acts at the level of function spaces, and we demonstrate that this is an example of the well-known functorial procedure of second quantisation. We give particular emphasis to the case where the product measure is infinitely divisible, and study the second quantisation process in some detail using chaos expansions when this measure is either Gaussian or generated by a Poisson random measure.


Introduction
In recent years there has been considerable interest in skew-convolution semigroups of probability measures in Banach spaces and the so-called Mehler semigroups that they induce on function spaces. These objects arise naturally in the study of infinite dimensional Ornstein-Uhlenbeck processes driven by Banach-space valued Lévy processes. Such processes have attracted much attention as they are the solutions of the simplest non-trivial class of stochastic partial differential equations driven by additive Lévy noise (see [1,7,29]). The first systematic studies of Mehler semigroups in their own right were [6] and [14], with the former concentrating on Gaussian noise while the latter generalised to the Lévy case. Harnack inequalities were obtained in [31] and the infinitesimal generators were found in [4]. From a different point of view, skew-convolution semigroups also appear naturally in the investigation of continuous state branching processes with immigration [11] and more general affine processes [10].
In this paper we focus on the representation of Mehler semigroups as second quantised operators. Such a result has been known for a long time in the Gaussian case. It was first established for Hilbert space valued semigroups in [8] and then extended to Banach spaces in [23]. Once such a representation is known it can be put to good use in proving key properties of the semigroup such as compactness and smoothness [8], symmetry [9], analyticity [15,22], and in the computation of their L^p spectra [24]. When the semigroups act on Hilbert spaces, the desired second quantisation representation was recently obtained in [28] in the pure jump case using chaotic decomposition techniques from [20], under the assumption that the Ornstein-Uhlenbeck process has an invariant measure. This paper extends that result to the Banach space case and obtains the second quantisation representation without needing to assume the existence of an invariant measure.
In fact, within the main part of our paper we dispense with Mehler semigroups altogether and work with a more general structure which we introduce herein. For this we require that there are measures µ1 on a Banach space E1 and µ2 and ρ on a Banach space E2 which are related by the identity

T(µ1) * ρ = µ2,

where T : E1 → E2 is a Borel mapping and * is the usual convolution of measures. An operator T that has such an induced action is precisely a skew map as featured in the abstract of this paper. Note that if E1 = E2 = E and µ1 = µ2 = µ, say, then µ is an operator self-decomposable measure, and such objects have been intensely studied (see e.g. [18,19,33]). The invariant measures arising in [28] are precisely of this form. On the other hand, a skew convolution semigroup of measures (µt, t ≥ 0) with respect to a C0-semigroup (S(t), t ≥ 0) is characterised by the relations µ_{s+t} = S(t)µs * µt, and these are clearly also examples of our structure. At our more general level, the antecedent of a Mehler semigroup is a bounded linear operator P_T which acts from L²(E2, µ2) to L²(E1, µ1). Our main result is then to show that this operator can be seen as a second quantisation of the adjoint T* in a natural way in the case where µ1 and µ2 are both infinitely divisible and either Gaussian or of pure jump type.
A key part of our approach is the use of a family of vectors that we call exponential martingale vectors. We now explain how these arise and contrast them with the more familiar exponential vectors (see e.g. [3,27]). Second quantisation is seen most naturally as a covariant functor Γ within the category whose objects are Hilbert spaces and morphisms are contractions (see e.g. [27]). If H is a Hilbert space and Γ(H) is the associated symmetric Fock space, the set of exponential vectors is linearly independent and total in Fock space. If we are given a Gaussian field over H then the exponential vectors correspond to the generating functions of the Hermite polynomials, and from the point of view of stochastic calculus they correspond both to the Doléans-Dade exponentials and to the exponential martingales. When we consider Lévy processes, the latter symmetry is broken. Exponential vectors still correspond to Doléans-Dade exponentials (see [3]) but these are no longer exponential martingales. In this paper, we find that a natural context for defining second quantisation in a non-Gaussian context is to employ vectors that are natural generalisations of exponential martingales, rather than using exponential vectors themselves. Hence we call these exponential martingale vectors. In particular, as we show in Section 2 and the appendix, these are still both total and linearly independent.
Notation. Throughout this article, E is a real Banach space. The space of all bounded linear operators on E is denoted by L(E) and the dual of E is denoted by E*. The action of E* on E is represented by x*(x) = ⟨x, x*⟩. Whenever we consider measures on a Banach space E, they are defined on the Borel σ-algebra B(E). If µ is a Borel measure on E and T : E → F is a Borel mapping from E into another Banach space F, we frequently write T(µ) to denote the Borel measure µ ∘ T⁻¹. The Dirac measure based at x ∈ E is denoted by δ_x. The Banach space (with respect to the supremum norm) of all bounded Borel measurable functions on E will be denoted B_b(E; K), where K is either R or C. If both choices are permitted we simply write B_b(E).

Skew convolution of measures and associated skew maps
Let ν be a finite Radon measure on a Banach space E, that is, ν is a finite Borel measure on E with the property that for all ε > 0 there exists a compact set K in E such that ν(E \ K) < ε. Recall that if E is separable, then every finite Borel measure is Radon.
The characteristic function of ν is the mapping ν̂ : E* → C defined by

ν̂(x*) := ∫_E exp(i⟨x, x*⟩) dν(x) for all x* ∈ E*.

The mapping ν̂ is continuous with respect to the topology of uniform convergence on compact subsets of E. More generally, for a measurable function φ : E → R we may define

ν̂(φ) := ∫_E exp(iφ(x)) dν(x).

Definition 2.1. Let µ1 and µ2 be Radon probability measures on the Banach spaces E1 and E2, respectively, with µ̂2(x*) ≠ 0 for all x* ∈ E2* (this condition is fulfilled, e.g., when µ2 is infinitely divisible). A Borel mapping T : E1 → E2 is called a skew map with respect to the pair (µ1, µ2) if there exists a Radon probability measure ρ on E2 such that T(µ1) * ρ = µ2, and we say that µ2 is the skew-convolution product (with respect to T) of µ1 and ρ. If T is also a bounded linear operator from E1 to E2 we call it a skew operator with respect to (µ1, µ2).
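The defining relation of Definition 2.1 can be made concrete in finite dimensions. The following sketch (a hypothetical 2×2 example, not taken from the paper) checks, at the level of characteristic functions, that a matrix T is a skew operator for a pair of centred Gaussians whenever Q2 − TQ1Tᵀ is positive semidefinite:

```python
import numpy as np

# Hypothetical finite-dimensional illustration of Definition 2.1:
# on E1 = E2 = R^2, a matrix T is a skew operator for (mu1, mu2) iff
# mu2_hat(x*) = mu1_hat(T^T x*) * rho_hat(x*) for some measure rho.
# For centred Gaussians with covariances Q1, Q2 this holds with
# rho = N(0, R), where R := Q2 - T Q1 T^T must be positive semidefinite.

def gaussian_cf(Q, x):
    """Characteristic function of N(0, Q) at the point x."""
    return np.exp(-0.5 * x @ Q @ x)

T = np.array([[0.5, 0.2], [0.0, 0.4]])
Q1 = np.array([[1.0, 0.3], [0.3, 2.0]])
R = np.array([[0.7, 0.1], [0.1, 0.5]])        # chosen positive definite
Q2 = T @ Q1 @ T.T + R                          # forces the skew relation

x = np.array([0.8, -1.3])
lhs = gaussian_cf(Q2, x)                              # mu2_hat(x*)
rhs = gaussian_cf(Q1, T.T @ x) * gaussian_cf(R, x)    # mu1_hat(T*x*) rho_hat(x*)
assert abs(lhs - rhs) < 1e-12
```

The check is exact because convolution of Gaussians simply adds covariances, which is the finite-dimensional shadow of the decomposition used in Proposition 3.1 below.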
Example 2.3 (Skew Convolution Semigroups). Let (S(t), t ≥ 0) be a C0-semigroup on a Banach space E. A skew convolution semigroup is a family (µt, t ≥ 0) of Radon probability measures on E for which µ_{s+t} = S(t)µs * µt for all s, t ≥ 0. Then S(t) is a skew operator with respect to the pair (µs, µ_{s+t}). In this case we write P_t for the linear operator P_{S(t)}. Then (P_t, t ≥ 0) is a semigroup, in that P_0 = I and P_{s+t} = P_s P_t for all s, t ≥ 0, and is called a Mehler semigroup (see e.g. [4,6,10,11,14]). Such objects arise naturally in the study of linear stochastic partial differential equations with additive noise of the form

dY(t) = AY(t) dt + dL(t), (2.1)

where A is the infinitesimal generator of (S(t), t ≥ 0) and (L(t), t ≥ 0) is an E-valued Lévy process. If E is a real Hilbert space then it is well known (see e.g. [1,7] and the recent book [29]) that this equation has a unique mild (equivalently, weak) solution (Y(t), t ≥ 0), which is a Markov process given by the generalised Ornstein-Uhlenbeck process

Y(t) = S(t)Y(0) + ∫_0^t S(t−u) dL(u), (2.2)

where the initial condition Y(0) is assumed to be independent of (L(t), t ≥ 0). Then µt is the law of the E-valued random variable ∫_0^t S(t−u) dL(u), and (P_t, t ≥ 0) is the transition semigroup of (Y(t), t ≥ 0). On a Banach space we may define the stochastic convolution in (2.2) by using integration by parts as in [19]. Quite general necessary and sufficient conditions for solutions of (2.1) to exist (where the stochastic convolution is defined in the sense of Itô calculus) are given in [30]. If L is a Brownian motion, we refer the reader to [25].
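In the scalar Ornstein-Uhlenbeck case the skew convolution property reduces to an identity between variances, which can be verified directly. This is an illustrative sketch with assumed parameters a, sigma, not a construction from the paper:

```python
import math

# Scalar sketch: for dY = -a Y dt + sigma dW we have S(t) = e^{-a t},
# and mu_t = N(0, v(t)) with v(t) = sigma^2 (1 - e^{-2 a t}) / (2 a)
# is the law of int_0^t S(t-u) dL(u).  The skew convolution property
# mu_{s+t} = S(t) mu_s * mu_t then reduces to the variance identity
#   v(s+t) = e^{-2 a t} v(s) + v(t),
# since S(t) scales the variance of mu_s by e^{-2 a t} and convolution adds variances.

a, sigma = 0.7, 1.3
v = lambda t: sigma**2 * (1 - math.exp(-2*a*t)) / (2*a)

s, t = 0.4, 1.1
assert abs(v(s + t) - (math.exp(-2*a*t) * v(s) + v(t))) < 1e-12
```

Letting s → ∞ in the same identity gives v(∞) = e^{−2at} v(∞) + v(t), the scalar form of the operator self-decomposability in Example 2.4.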
Example 2.4 (Operator Self-Decomposable Measures). Let µ be a Radon probability measure on E that takes the form

µ = T(µ) * ρ, (2.3)

where T is a bounded linear operator on E and ρ is another Radon probability measure on E. Then µ is operator self-decomposable (see [33]) and T is a skew operator with respect to the pair (µ, µ). There has been extensive work on such measures in the case where (2.3) holds with T = S(t) for all t ≥ 0, where (S(t), t ≥ 0) is a C0-semigroup on E (see e.g. [1,18,19]). Indeed, such measures µ arise as the invariant measures of the Mehler semigroups of Example 2.3 (when these exist; see e.g. [7,14]) and, in the case of (2.2), ρ is the law of ∫_0^t S(t−u) dL(u).

Definition 2.5. Let µ be a Radon probability measure on E and let φ : E → R be a measurable function with µ̂(φ) ≠ 0. We define

K_{µ,φ} := exp(iφ) / µ̂(φ).

We call K_{µ,φ} an exponential martingale vector.
Proposition 2.6. Let µ1 and µ2 be Radon probability measures on E1 and E2, respectively, with µ̂2(x*) ≠ 0 for all x* ∈ E2*. Let T be a skew map with respect to the pair (µ1, µ2). Then for all x* ∈ E2* we have

P_T K_{µ2,x*} = K_{µ1, x*∘T}, (2.4)

where (P_T f)(x) := ∫_{E2} f(Tx + y) dρ(y).

Proof. Let ρ denote the associated skew convolution factor. From the identity T(µ1) * ρ = µ2 we obtain µ̂1(x* ∘ T) ρ̂(x*) = µ̂2(x*), and hence

(P_T K_{µ2,x*})(x) = ∫_{E2} exp(i⟨Tx + y, x*⟩) / µ̂2(x*) dρ(y) = exp(i(x* ∘ T)(x)) ρ̂(x*) / µ̂2(x*) = K_{µ1, x*∘T}(x).

Fix a Radon probability measure µ on E and let E_µ denote the linear span of the set of exponential martingale vectors {K_{µ,x*} ; x* ∈ E*}. The proposition implies that, under the stated hypotheses on µ1, µ2, and T, the mapping K_{µ2,x*} ↦ K_{µ1, x*∘T} has a well-defined linear extension to a contraction from E_{µ2} to E_{µ1} (this extension being also denoted P_T). Under suitable assumptions on the measures one may show that the functions K_{µ,x*} are in fact linearly independent. This fact is of some interest in itself but is not needed here; we have therefore included it in an appendix at the end of this paper. Using the injectivity of the Fourier transform, a standard argument shows that E_µ is dense in L^p(E, µ; C) for all 1 ≤ p < ∞ (see e.g. [2, Lemma 5.3.1]), and consequently P_T is the unique such extension.
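The identity (2.4) can be tested numerically in one dimension. The sketch below uses an assumed scalar Gaussian setting (T the scalar c, mu1 = N(0, q1), rho = N(0, r)), with the expectation over rho computed by Gauss-Hermite quadrature; it is an illustration, not the paper's general argument:

```python
import numpy as np

# Scalar sketch of P_T K_{mu2,xi} = K_{mu1, c xi} (identity (2.4)):
# T = c, mu1 = N(0, q1), rho = N(0, r), mu2 = N(0, c^2 q1 + r), and
# K_{mu,xi}(x) = exp(i x xi) / mu_hat(xi) with mu_hat(xi) = exp(-q xi^2 / 2).
# (P_T f)(x) = E f(c x + R) with R ~ rho.

c, q1, r = 0.6, 1.2, 0.8
q2 = c**2 * q1 + r
K = lambda q, xi, x: np.exp(1j * x * xi) * np.exp(0.5 * q * xi**2)

# Gauss-Hermite nodes/weights for the standard normal distribution.
nodes, weights = np.polynomial.hermite_e.hermegauss(60)
xi, x = 0.9, 0.3
R = np.sqrt(r) * nodes
lhs = np.sum(weights * K(q2, xi, c * x + R)) / np.sqrt(2 * np.pi)
rhs = K(q1, c * xi, x)        # K_{mu1, x* o T}(x)
assert abs(lhs - rhs) < 1e-10
```

The computation works because E exp(iRξ) = ρ̂(ξ) exactly cancels the extra Gaussian factor, mirroring the one-line proof of Proposition 2.6.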

Second quantisation: The Gaussian case
In this section we connect, in the Gaussian setting, the notion of skew operators with second quantisation. The presentation is slightly different from the usual one, in that we introduce a form of the chaos expansion, due to Stroock [32], that utilises iterated Malliavin derivatives. This approach will bring out very elegantly the analogies between the Gaussian case and the Poisson case (which we present in the next section).
We begin by recalling some standard results from the theory of Gaussian measures. Proofs and more details can be found in the monographs [5,26,34].
Let µ be a centred Gaussian measure on the real Banach space E, and let H denote its reproducing kernel Hilbert space, which is defined as follows. The covariance operator Q of µ is given by

Qx* := ∫_E ⟨x, x*⟩ x dµ(x), x* ∈ E*.

This integral is known to be absolutely convergent in E and defines a bounded operator Q ∈ L(E*, E) which is positive, in the sense that ⟨Qx*, x*⟩ ≥ 0 for all x* ∈ E*, and symmetric, in the sense that ⟨Qx*, y*⟩ = ⟨Qy*, x*⟩ for all x*, y* ∈ E*. The mapping (Qx*, Qy*) ↦ ⟨Qx*, y*⟩ defines an inner product on the range of Q. The real Hilbert space H is defined to be the completion of the range of Q with respect to this inner product. The identity mapping Qx* ↦ Qx* extends to a bounded injective operator j : H → E, and we have the factorisation Q = j ∘ j*. Here we have identified H and its dual via the Riesz representation theorem.
For h = j*x* with x* ∈ E*, set φ_h := ⟨·, x*⟩; this is well defined µ-almost surely, and ∫_E φ_h² dµ = ⟨Qx*, x*⟩ = ‖h‖²_H. Since j* has dense range in H, the mapping h ↦ φ_h uniquely extends to an isometry from H into L²(E, µ).
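In finite dimensions the factorisation Q = j ∘ j* is just a matrix factorisation, which makes the reproducing kernel inner product explicit. This is a hypothetical 3×3 example, not a construction from the paper:

```python
import numpy as np

# Finite-dimensional sketch of Q = j j*: if X = B Z with Z standard
# Gaussian, the covariance is Q = B B^T, and
#   <Q x*, y*> = (B^T x*) . (B^T y*),
# so j* x* = B^T x* realises the RKHS inner product, and the isometry
# h -> phi_h reflects E[<X,x*><X,y*>] = <Q x*, y*>.

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
Q = B @ B.T

xs, ys = rng.standard_normal(3), rng.standard_normal(3)
lhs = xs @ Q @ ys                       # <Q x*, y*>
rhs = (B.T @ xs) @ (B.T @ ys)           # <j* x*, j* y*>_H
assert abs(lhs - rhs) < 1e-10
```

Any factor B with Q = BBᵀ works; different factorisations parameterise the same Hilbert space H up to unitary equivalence.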
Suppose now that µ 1 and µ 2 are Gaussian Radon measures on Banach spaces E 1 and E 2 , with reproducing kernel Hilbert spaces H 1 and H 2 respectively. In the next two Propositions 3.1 and 3.2 we shall investigate the relationship between linear skew maps from E 1 to E 2 with respect to the pair (µ 1 , µ 2 ) and linear contractions from H 1 to H 2 .
We begin by proving that if T is a skew operator with respect to the pair (µ 1 , µ 2 ), then T restricts to a contraction between the reproducing kernel Hilbert spaces. This result and its proof extend a similar result for semigroup operators in [8,23].
Proposition 3.1. Let T be a skew operator with respect to the pair (µ1, µ2). Then T maps H1 into H2, and the restriction T|_{H1} : H1 → H2 is a contraction.

Proof. By assumption we have T(µ1) * ρ = µ2 for some Radon probability measure ρ. We claim that ρ is Gaussian. Indeed, denoting the covariances of µ1 and µ2 by Q1 and Q2 (respectively) and using the fact that T(µ1) has mean zero, we have

ρ̂(x*) = µ̂2(x*) / (T(µ1))^(x*) = exp(−½⟨(Q2 − TQ1T*)x*, x*⟩), x* ∈ E2*.

Hence we see that the operator R := Q2 − TQ1T* is positive and symmetric as an operator from E2* to E2. Since R ≤ Q2, a well-known tightness result for Gaussian measures implies that R is the covariance of a Gaussian Radon measure ρ̃ on E2. The identity TQ1T* + R = Q2 implies T(µ1) * ρ̃ = µ2. Since µ2 is a Gaussian measure, its characteristic function vanishes nowhere and hence, by the observation following Definition 2.1, ρ = ρ̃. This proves the claim.
Recall that Q1 = j1 ∘ j*1, where j1 : H1 ↪ E1 is the canonical inclusion mapping, and likewise Q2 = j2 ∘ j*2 with j2 : H2 ↪ E2. Since R = Q2 − TQ1T* is positive, for all x* ∈ E2* we have

‖j*1 T*x*‖²_{H1} = ⟨TQ1T*x*, x*⟩ ≤ ⟨Q2x*, x*⟩ = ‖j*2 x*‖²_{H2}, (3.1)

and therefore, for all x* ∈ E2* and y* ∈ E1*,

|⟨Q1T*x*, y*⟩| = |⟨j*1 T*x*, j*1 y*⟩_{H1}| ≤ ‖j*2 x*‖_{H2} ‖j*1 y*‖_{H1}. (3.2)

Fix y* ∈ E1* and define a linear functional ψ_{y*} on the range of j*2 by ψ_{y*}(j*2 x*) := ⟨Q1T*x*, y*⟩. If j*2 x* = 0, then j*1 T*x* = 0 by (3.1), so ψ_{y*} is well defined. By (3.2), ψ_{y*} extends to a bounded linear functional on H2 of norm ≤ ‖j*1 y*‖_{H1}. Identifying ψ_{y*} with an element of H2, for all x* ∈ E2* we have

⟨j2 ψ_{y*}, x*⟩ = ψ_{y*}(j*2 x*) = ⟨Q1T*x*, y*⟩ = ⟨TQ1y*, x*⟩.

Hence TQ1y* = j2 ψ_{y*} and ‖TQ1y*‖_{H2} ≤ ‖j*1 y*‖_{H1}. Writing Q1y* = j1(j*1 y*), we see that the restriction T|_{H1} maps j*1 y* to the element ψ_{y*} of H2, and that T|_{H1} is contractive on the dense range of j*1 in H1. This gives the result.

In the converse direction we have the following result.

Proposition 3.2. Let T : H1 → H2 be a linear contraction. Then T admits an extension to a linear Borel mapping T̄ : E1 → E2 such that:
(1) the image measure T̄(µ1) is a Gaussian Radon measure;
(2) there exists a Gaussian Radon measure ρ on E2 such that T̄(µ1) * ρ = µ2.
In particular, T̄ is a linear skew map for the pair (µ1, µ2).
Proof. The following facts follow from the general theory of Gaussian measures (see, e.g., [5,13]): (1) the mapping T : H1 → H2 admits an extension to a linear Borel mapping T̄ : E1 → E2; (2) the operator Q = j2TT*j*2 is the covariance of a Gaussian measure µ on E2; (3) µ coincides with the image measure T̄(µ1). In terms of the covariance operators Q and Q2 of µ and µ2 we have

⟨Qx*, x*⟩ = ‖T*j*2 x*‖²_{H1} ≤ ‖j*2 x*‖²_{H2} = ⟨Q2x*, x*⟩ for all x* ∈ E2*,

since T, and hence T*, is a contraction. Hence the positive symmetric operator R := Q2 − Q is the covariance of a Gaussian measure ρ, for which we have T̄(µ1) * ρ = µ * ρ = µ2.
Our next objective is to relate the abstract second quantisation procedure of the previous section to the Wiener-Itô decompositions of L²(E1, µ1) and L²(E2, µ2).
Following the presentation in [26], for each n ≥ 1 we define H_n to be the closed linear subspace of L²(E, µ) spanned by the functions H_n(φ_h), where h ∈ H has norm one and H_n is the n-th Hermite polynomial given by the generating function expansion

exp(tx − ½t²) = Σ_{n≥0} (tⁿ/n!) H_n(x).

The Wiener-Itô decomposition theorem asserts that we have an orthogonal direct sum decomposition

L²(E, µ) = ⊕_{n≥0} H_n,

where H_0 is the space of constant functions. Let S_n be the permutation group on n elements. The range of the symmetrising projection Σ_n : H^{⊗n} → H^{⊗n}, defined by

Σ_n(h_1 ⊗ ... ⊗ h_n) := (1/n!) Σ_{σ∈S_n} h_{σ(1)} ⊗ ... ⊗ h_{σ(n)},

is denoted by H^{s n} and is called the n-fold symmetric tensor product of H. Let (h_n)_{n≥1} be an orthonormal basis of H (the Hilbert space H, being the reproducing kernel Hilbert space of a Gaussian Radon measure, is separable; see e.g. [5]). Consider the n-fold stochastic integral I_n : H^{s n} → H_n, defined by

I_n(Σ_n(h_{j1}^{⊗k1} ⊗ ... ⊗ h_{jm}^{⊗km})) := Π_{i=1}^m H_{ki}(φ_{h_{ji}}),

with j1 < ... < jm and k1 + ... + km = n. Then (1/√n!) I_n sets up an isometric isomorphism H^{s n} ≃ H_n. Stated differently, the mapping I := ⊕_{n≥0} (1/√n!) I_n is an isometric isomorphism from the symmetric Fock space Γ(H) = ⊕_{n≥0} H^{s n} onto L²(E, µ).

For a cylindrical function f(x) = g(⟨x, x*_1⟩, ..., ⟨x, x*_k⟩), with g ∈ C_b^∞(R^k) and x*_1, ..., x*_k ∈ E*, the Malliavin derivative Df : E → H is defined by

Df(x) := Σ_{j=1}^k ∂_j g(⟨x, x*_1⟩, ..., ⟨x, x*_k⟩) j*x*_j.

As is well known (see e.g. [26]), for all 1 ≤ p < ∞ the linear operator D is closable and densely defined from L^p(E, µ) to L^p(E, µ; H). From now on we will denote its closure by D as well, and denote the domain of its closure by W^{1,p}(E, µ). The higher order derivatives D^k f : E → H^{⊗k} are defined recursively by D^k f := D(D^{k−1} f). These operators are closable as well, and the domains of their closures will be denoted by W^{k,p}(E, µ). We define the spaces W^{∞,p}(E, µ) := ⋂_{k∈N} W^{k,p}(E, µ).
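The orthogonality that makes (1/√n!) I_n an isometry can be checked directly in one dimension by Gaussian quadrature. This is a standalone numerical illustration, with the Hermite polynomials generated by their three-term recurrence:

```python
import numpy as np
from math import factorial

# Check of the orthogonality underlying the Wiener-Ito decomposition:
# for Z ~ N(0,1) and probabilists' Hermite polynomials He_n (generating
# function e^{t x - t^2/2} = sum_n He_n(x) t^n / n!), we have
#   E[He_n(Z) He_m(Z)] = n! delta_{nm},
# which is why (1/sqrt(n!)) I_n is an isometry onto the n-th chaos.

def He(n, x):
    """Probabilists' Hermite polynomial via He_{k+1} = x He_k - k He_{k-1}."""
    h0, h1 = np.ones_like(x), x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

z, w = np.polynomial.hermite_e.hermegauss(40)   # quadrature for e^{-x^2/2}
w = w / np.sqrt(2 * np.pi)                      # normalise to N(0,1)
for n in range(6):
    for m in range(6):
        val = np.sum(w * He(n, z) * He(m, z))
        assert abs(val - (factorial(n) if n == m else 0.0)) < 1e-8
```

The quadrature is exact here because a 40-point rule integrates polynomials up to degree 79 without error.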
The next proposition is due to Stroock [32] in the context of an abstract Wiener space. We give a different proof for Gaussian measures on Banach spaces. We write E_µ f := ∫_E f dµ.

Proposition 3.3. For all f ∈ W^{∞,2}(E, µ) we have

f = Σ_{n≥0} (1/n!) I_n(E_µ D^n f).

Proof. For each h ∈ H, the function e_h : E → R is defined by e_h := exp(φ_h − ½‖h‖²_H). It is well known that the linear span of {e_h : h ∈ H} is dense in L²(E, µ) (see e.g. [26] for a proof). Since D^n e_h = e_h h^{⊗n} for all n ∈ N, and since E_µ e_h = 1, we clearly have

E_µ D^n e_h = h^{⊗n}.

Applying the n-fold stochastic integral and using the generating function identity for the Hermite polynomials we obtain

Σ_{n≥0} (1/n!) I_n(E_µ D^n e_h) = Σ_{n≥0} (1/n!) ‖h‖ⁿ_H H_n(φ_h/‖h‖_H) = exp(φ_h − ½‖h‖²_H) = e_h.

By density and the closability of the operators D^n, the identity extends to all of W^{∞,2}(E, µ).

Let us now return to the setting where µ1 and µ2 are Gaussian measures on E1 and E2, having reproducing kernel Hilbert spaces H1 and H2, respectively. In order to avoid unnecessary notational complexity we will use the notation D for the Malliavin derivatives acting on both L²(E1, µ1) and L²(E2, µ2), and we write D^n_{h1,...,hn} f := ⟨D^n f, h1 ⊗ ... ⊗ hn⟩ for the directional derivatives.

Lemma 3.4. Let T be a skew operator with respect to the pair (µ1, µ2). Then for all f ∈ W^{n,2}(E2, µ2) and all h1, ..., hn ∈ H1 we have

D^n_{h1,...,hn} P_T f = P_T D^n_{Th1,...,Thn} f.

Proof. Let us check this first for n = 1. By an easy computation (see [22]), for f ∈ W^{1,2}(E2, µ2) we have P_T f ∈ W^{1,2}(E1, µ1) and

D_h P_T f = P_T D_{Th} f, h ∈ H1. (3.3)

Here, ρ is the measure constructed in Proposition 3.2. The higher order case is proved along similar lines.
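The Stroock formula of Proposition 3.3 can be verified by hand in one dimension, where E_µ D^n f becomes E[f^{(n)}(Z)] for Z ~ N(0,1) and I_n reduces to the Hermite polynomial He_n. The following sketch checks it for the assumed test function f(x) = x³:

```python
import numpy as np
from math import factorial

# One-dimensional sketch of the Stroock formula: for nice f and
# Z ~ N(0,1), f = sum_n (1/n!) E[f^{(n)}(Z)] He_n.  For f(x) = x^3 the
# derivatives are 3x^2, 6x, 6, so
#   E f = 0, E f' = 3, E f'' = 0, E f''' = 6,
# and the expansion is 3 He_1 + He_3 = 3x + (x^3 - 3x) = x^3.

He = [lambda x: 1.0,
      lambda x: x,
      lambda x: x**2 - 1,
      lambda x: x**3 - 3*x]
coeffs = [0.0, 3.0, 0.0, 6.0]          # E[f^{(n)}(Z)] for f(x) = x^3

for x in np.linspace(-2, 2, 9):
    expansion = sum(c / factorial(n) * He[n](x) for n, c in enumerate(coeffs))
    assert abs(expansion - x**3) < 1e-12
```

The expansion terminates because D⁴f = 0; for non-polynomial f the series converges in L²(µ), as in the proof above.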
In terms of the global derivative, the computation (3.3) shows that E_{µ1} D P_T f = T* E_{µ2} D f, and more generally we have

E_{µ1} D^n P_T f = (T*)^{⊗n} E_{µ2} D^n f, f ∈ W^{∞,2}(E2, µ2), (3.4)

where T* := (T|_{H1})* : H2 → H1. Applying I_n to both sides of (3.4) and using Proposition 3.3 together with the density of W^{∞,2}(E2, µ2) in L²(E2, µ2), we have proved:

Theorem 3.5. For every n ∈ N the operator (T*)^{s n}, the restriction of (T*)^{⊗n} to H2^{s n}, maps H2^{s n} into H1^{s n}, and the following diagram commutes:

P_T ∘ I_n = I_n ∘ (T*)^{s n} on H2^{s n}.

The operator Γ(T*) := ⊕_{n≥0} (T*)^{s n} is usually called the symmetric second quantisation of the operator T*.
By Proposition 3.3, the inverse of I is given on each chaos by I_n^{−1} = (1/n!) E_µ D^n, so we may rewrite the commutative diagram in the following equivalent form:

E_{µ1} D^n ∘ P_T = (T*)^{s n} ∘ E_{µ2} D^n.

Let us finally return to the setting of the previous section and derive the identity (2.4) by the methods of the present section. Fix x* ∈ E2* and let h := j*2 x*. Then K_{µ2,x*} = exp(iφ_h + ½‖h‖²_{H2}) and therefore, by Lemma 3.4 (which we apply to the real and imaginary parts of K_{µ2,x*}), for all g ∈ H1 we have

D_g P_T K_{µ2,x*} = P_T D_{Tg} K_{µ2,x*} = i⟨Tg, h⟩_{H2} P_T K_{µ2,x*},

so that E_{µ1} D^n P_T K_{µ2,x*} = iⁿ (T*h)^{⊗n}. Applying the n-fold stochastic integral, using Proposition 3.3 (again considering real and imaginary parts separately), and using the (analytic extension of the) generating function identity for the Hermite polynomials, we obtain

P_T K_{µ2,x*} = Σ_{n≥0} (iⁿ/n!) I_n((T*h)^{⊗n}) = exp(iφ_{T*h} + ½‖T*h‖²_{H1}) = K_{µ1, x*∘T},

which is the identity (2.4).
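In one dimension the content of Theorem 3.5 is the classical fact that the Mehler semigroup is diagonal in the Hermite basis. The sketch below uses an assumed scalar setting (T = c, µ1 = µ2 = N(0,1), ρ = N(0, 1−c²)) and checks P_c He_n = cⁿ He_n by quadrature:

```python
import numpy as np

# Scalar sketch of Theorem 3.5: with T = c in [0,1] and
# (P_c f)(x) = E f(c x + sqrt(1 - c^2) Z), second quantisation acts on
# the n-th chaos as multiplication by c^n, i.e. P_c He_n = c^n He_n.

def He(n, x):
    h0, h1 = np.ones_like(x), np.asarray(x, dtype=float)
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

c = 0.6
z, w = np.polynomial.hermite_e.hermegauss(40)
w = w / np.sqrt(2 * np.pi)                      # weights for N(0,1)
for n in range(5):
    for x in (-1.0, 0.2, 1.7):
        PcHe = np.sum(w * He(n, c * x + np.sqrt(1 - c**2) * z))
        assert abs(PcHe - c**n * He(n, np.asarray(x))) < 1e-10
```

The eigenvalues cⁿ decay geometrically, which is the scalar shadow of the compactness and smoothing properties mentioned in the introduction.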

Second quantisation: the Poisson random measure case
We proceed with a similar result in the case where µ1 and µ2 are infinitely divisible measures of pure jump type. For this we need to delve a bit deeper into the structure of such measures and develop their connection with Poisson random measures.
Let (Y, Y, ν) be a σ-finite measure space and let N(Y) denote the set of all N-valued measures on Y. We endow this space with the σ-algebra σ(Y) generated by Y, that is, the smallest σ-algebra which renders the mappings ξ ↦ ξ(B) measurable for all B ∈ Y.
Let (Ω, F, P) be a probability space and let Π be a Poisson random measure having intensity measure ν. We denote by P_Π the distribution of Π, that is, P_Π is the probability measure on (N(Y), σ(Y)) given by

P_Π(A) := P(Π ∈ A), A ∈ σ(Y).

Following Last and Penrose [20], for a measurable function f : N(Y) → R and y ∈ Y we define the measurable function D_y f : N(Y) → R by

D_y f(ξ) := f(ξ + δ_y) − f(ξ).

The function D^n_{y1,...,yn} f : N(Y) → R is defined recursively by D^n_{y1,...,yn} f = D_{yn} D^{n−1}_{y1,...,yn−1} f, for y1, ..., yn ∈ Y. This function is symmetric, i.e. it is invariant under any permutation of the variables y1, ..., yn.
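The difference operator D_y and its claimed symmetry can be made concrete by representing a finite counting measure as a multiset of atoms. This is a small illustrative implementation (the functional f and the atoms are hypothetical choices):

```python
from collections import Counter

# Sketch of the Last-Penrose difference operator on N(Y): a finite
# counting measure xi is a Counter of atoms, and
#   (D_y f)(xi) = f(xi + delta_y) - f(xi).
# The second difference expands to
#   f(xi + d_y1 + d_y2) - f(xi + d_y1) - f(xi + d_y2) + f(xi),
# which is visibly symmetric in (y1, y2), as claimed in the text.

def add_atom(xi, y):
    eta = Counter(xi)
    eta[y] += 1
    return eta

def D(f, y):
    return lambda xi: f(add_atom(xi, y)) - f(xi)

# Example functional: f(xi) = (total mass of xi)^2.
f = lambda xi: sum(xi.values())**2
xi = Counter({'a': 2, 'b': 1})
D2_12 = D(D(f, 'a'), 'b')(xi)
D2_21 = D(D(f, 'b'), 'a')(xi)
assert D2_12 == D2_21 == 2
```

For this quadratic functional the second difference is constant and the third difference vanishes, the Poisson analogue of a polynomial having finite chaos expansion.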
We have a canonical isometry L²_s(Y^n) = (L²(Y))^{s n}, where the former denotes the closed subspace of L²(Y^n, ν^n) consisting of all symmetric functions. By I_n : L²_s(Y^n) → L²(Ω) we denote the n-fold stochastic integral associated with Π as defined in [20]. The following theorem collects the results of Last and Penrose that we need. We note that part (3) is essentially a Stroock formula for Poisson random measures (cf. Proposition 3.3), and that a version of this result for a class of real-valued Lévy processes may be found in [12].

Theorem 4.1 (Last-Penrose). For all f ∈ L²(P_Π) the following hold:
(1) for all n ∈ N the function τ_n f, defined by τ_n f(y1, ..., yn) := E D^n_{y1,...,yn} f(Π), belongs to L²_s(Y^n);
(2) E|f(Π)|² = Σ_{n≥0} (1/n!) ‖τ_n f‖²_{L²(ν^n)};
(3) f(Π) = Σ_{n≥0} (1/n!) I_n(τ_n f).
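The isometry in part (2) of the Last-Penrose theorem can be checked in the simplest possible case, where Y is a single atom of mass lambda, so Π is just a Poisson(lambda) random variable and D is the forward difference. This scalar sketch (with an assumed test functional f(n) = n²) verifies the identity by direct summation:

```python
import math

# Scalar sketch of Last-Penrose part (2): for N ~ Poisson(lam) and
# (Delta f)(n) = f(n+1) - f(n), the chaos isometry reads
#   E[f(N)^2] = sum_n (lam^n / n!) (E[Delta^n f(N)])^2.

lam = 1.5
pmf = lambda n: math.exp(-lam) * lam**n / math.factorial(n)
E = lambda g: sum(g(n) * pmf(n) for n in range(80))   # truncated expectation

f = lambda n: n**2
Delta = lambda g: (lambda n: g(n + 1) - g(n))

g, rhs = f, 0.0
for n in range(10):
    rhs += lam**n / math.factorial(n) * E(g)**2
    g = Delta(g)
lhs = E(lambda n: f(n)**2)
assert abs(lhs - rhs) < 1e-9
```

Since Δ³f = 0 for this quadratic f, only the first three chaoses contribute, mirroring the finite Hermite expansion of polynomials in the Gaussian case.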
From this point on, we shall consider the special case Y = E, where E is a separable real Banach space, and ν is now assumed to be a Lévy measure on E (see e.g. [17,21] for the definition). We use the shorthand notation Π̂ := Π − ν for the compensated Poisson random measure. We will need to use the Lévy-Itô decomposition for Banach space-valued Lévy processes, as established in [30], and the next lemma is key in that regard. Since M and Π are identically distributed (both being Poisson random measures with intensity measure ν), this proves the lemma.
We will be interested in Borel probability measures µ on E which arise as the distribution of E-valued random variables X of the form

X = ξ + ∫_{0<‖y‖≤1} y dΠ̂(y) + ∫_{‖y‖>1} y dΠ(y),

where Π is a Poisson random measure on E whose intensity measure ν is a Lévy measure and ξ ∈ E is a given vector. The interest in such random variables comes from the Lévy-Itô decomposition for E-valued Lévy processes, which asserts that if (L(t))_{t≥0} is a Lévy process without Gaussian part, then L(1) is precisely of this form (see [30]). Note that µ is a Radon measure (since every Borel measure on a separable Banach space is Radon) and infinitely divisible. In particular, its characteristic function vanishes nowhere.
We have the following analogue of Lemma 3.4. For a function g : E → R we write D̃_y g(x) := g(x + y) − g(x), y ∈ E, and define D̃^n_{y1,...,yn} g recursively as before.

Lemma 4.3. Let T be a skew operator with respect to the pair (µ1, µ2). Then for all f ∈ L²(E2, µ2) and all y1, ..., yn ∈ E1 we have

D̃^n_{y1,...,yn} P_T f = P_T D̃^n_{Ty1,...,Tyn} f. (4.3)

Proof. Suppose the random variable R : Ω̃ → E2, defined on an independent probability space (Ω̃, F̃, P̃), has distribution ρ. Then, using the fact that TX1 + R and X2 are identically distributed, we have (P_T f)(x) = Ẽ f(Tx + R), and hence

D̃_y (P_T f)(x) = Ẽ[f(Tx + Ty + R) − f(Tx + R)] = (P_T D̃_{Ty} f)(x).

For the higher derivatives we use a straightforward inductive argument.
Taking expectations in (4.3) and using the identity E_{µ1} P_T g = E_{µ2} g, below we will think of the left- and right-hand sides of (4.3) as symmetric functions on E1^n. As such, the resulting identity will be written as

E_{µ1} D̃^n P_T f = (E_{µ2} D̃^n f) ∘ T^{s n},

where (g ∘ T^{s n})(y1, ..., yn) := g(Ty1, ..., Tyn).
Define, for k = 1, 2, the operators j_k from L²(E_k, µ_k) to the measurable functions on N(E_k) by

(j_k f)(η) := f(ξ_k + ∫_{0<‖y‖≤1} y d(η − ν_k)(y) + ∫_{‖y‖>1} y dη(y)),

so that, formally, j_k f(Π_k) = f(X_k). The rigorous interpretation of this identity is provided by noting that

E|j_k f(Π_k)|² = E|f(X_k)|² = ‖f‖²_{L²(E_k, µ_k)},

which means that j_k f(η) is well defined for P_{Π_k}-almost all η and that j_k establishes an isometry from L²(E_k, µ_k) into L²(P_{Π_k}). Note that

D_y (j_k f) = j_k (D̃_y f), and hence D^n_{y1,...,yn}(j_k f) = j_k(D̃^n_{y1,...,yn} f),

and therefore, for all g ∈ L²(E_k, µ_k),

(τ^k_n ∘ j_k)g = E D^n (j_k g)(Π_k) = E j_k(D̃^n g)(Π_k) = E D̃^n g(X_k) = E_{µ_k} D̃^n g.
Using this identity in combination with Lemma 4.3, for all f ∈ L²(E2, µ2) we obtain

τ^1_n(P_T f) = E_{µ1} D̃^n P_T f = (E_{µ2} D̃^n f) ∘ T^{s n} = (τ^2_n f) ∘ T^{s n}.

When combined with the contractivity of P_T and the surjectivity of τ (see Theorem 4.1), this identity implies that the mapping g ↦ g ∘ T^{s n} is a linear contraction from L²_s(E2^n, ν2^n) to L²_s(E1^n, ν1^n). In summary, we have proved the following theorem.
Theorem 4.4. For each n ∈ N the mapping g ↦ g ∘ T^{s n} is a linear contraction from L²_s(E2^n, ν2^n) to L²_s(E1^n, ν1^n), and the following diagram commutes:

τ^1_n ∘ P_T = (· ∘ T^{s n}) ∘ τ^2_n.

To make the connection with the commuting diagram in the Gaussian case, which features the n-fold stochastic integrals rather than the n-fold derivatives, we note that, by part (3) of Theorem 4.1, for k = 1, 2 and all f ∈ L²(E_k, µ_k) we have

f(X_k) = Σ_{n≥0} (1/n!) I^k_n(τ^k_n f),

where I^k_n denotes the n-fold stochastic integral associated with Π_k and τ^k_n := τ_n ∘ j_k. Theorem 4.4 is a generalisation of the result obtained by Peszat [28] in the case where µ1 = µ2 is an invariant measure associated with a Mehler semigroup on a Hilbert space E1 = E2.
As we did in the previous section, we wish to make the link with the results on skew operators. In principle we could repeat the Gaussian computation at the end of Section 3, but this requires the evaluation of a rather intractable Poisson stochastic integral. There is, however, a simpler argument.
Remark 4.5. The results of Sections 3 and 4 suggest the problem of extending the theory to the case where µ1 and µ2 are arbitrary infinitely divisible measures. We conjecture that Theorems 3.5 and 4.4 extend to this more general framework.
Appendix A. Linear independence of the functions K_{µ,x*}

The support of a Radon measure µ on E is the complement of the union of all open µ-null sets in E. We denote the support of µ by supp(µ) and its closed linear span by E_µ. We say that µ has linear support if supp(µ) = E_µ. The proof of the next result uses a variant of a standard technique of reduction to a system of linear equations that can be found in [16, pp. 20-21] or [27, pp. 126-127].
Proposition A.1. Suppose that µ has linear support and let F ⊆ E * be such that its points are separated by E µ . Then the family {x → exp(i x, x * ); x * ∈ F } is linearly independent in L 2 (E, µ).
Proof. Let x*_1, ..., x*_N ∈ F be distinct linear functionals and let c_1, ..., c_N ∈ R be such that Σ_{n=1}^N c_n exp(i⟨·, x*_n⟩) = 0 in L²(E, µ). In particular, the set G of all x ∈ E_µ such that Σ_{n=1}^N c_n exp(i⟨x, x*_n⟩) = 0 has full measure. By the assumption on µ, every open set V which intersects E_µ has positive µ-measure and therefore intersects G in a set of positive µ-measure. It follows that G is dense in E_µ. But then, by continuity, we find that G = E_µ, that is, Σ_{n=1}^N c_n exp(i⟨x, x*_n⟩) = 0 for all x ∈ E_µ. Hence Σ_{n=1}^N c_n exp(it⟨x, x*_n⟩) = 0 for all t ∈ R and x ∈ E_µ. For r = 0, 1, ..., N−1, differentiate r times with respect to t and then put t = 0. This yields Σ_{n=1}^N c_n ⟨x, x*_n⟩^r = 0 for all x ∈ E_µ. For each fixed x ∈ E_µ we thus obtain a system of N linear equations in c_1, ..., c_N, and it has a non-zero solution if and only if the Vandermonde determinant

det(⟨x, x*_n⟩^r)_{r=0,...,N−1; n=1,...,N} = Π_{1≤m<n≤N} (⟨x, x*_n⟩ − ⟨x, x*_m⟩)

vanishes. Suppose now, for a contradiction, that not all the c_n are zero. Then for each x ∈ E_µ there exist 1 ≤ m < n ≤ N such that ⟨x, x*_m − x*_n⟩ = 0. The choice of m and n here depends on x; we now prove that in fact it can be made independently of the vector x ∈ E_µ. To this end, for each pair 1 ≤ m, n ≤ N with m ≠ n, define F_{mn} := {x ∈ E_µ : ⟨x, x*_m − x*_n⟩ = 0}. Then F_{mn} is closed and the union of the F_{mn} over all such pairs is E_µ. By the Baire category theorem, at least one pair (m, n) must be such that F_{mn} has non-empty interior O_{mn} in E_µ. Let (m_0, n_0) be such a pair and fix x_0 ∈ O_{m_0 n_0}. Then by linearity ⟨x − x_0, x*_{m_0} − x*_{n_0}⟩ = 0 for all x ∈ O_{m_0 n_0}; in other words, ⟨y, x*_{m_0} − x*_{n_0}⟩ = 0 for all y ∈ O_{m_0 n_0} − x_0. But O_{m_0 n_0} − x_0 contains an open neighbourhood of 0 in E_µ, and hence, by linearity again, ⟨x, x*_{m_0} − x*_{n_0}⟩ = 0 for all x ∈ E_µ. This contradicts the assumption that the points of F are separated by E_µ. So we must have c_1 = ... = c_N = 0.
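The linear-algebra step of the proof rests on the Vandermonde determinant formula, which is easy to confirm numerically for sample values a_n := ⟨x, x*_n⟩ (the values below are arbitrary illustrative choices):

```python
import numpy as np

# The coefficient matrix of the linear system in the proof is
# V[r, n] = a_n^r with a_n := <x, x*_n>, a Vandermonde matrix with
#   det V = prod_{m < n} (a_n - a_m).
# Hence the system forces c_1 = ... = c_N = 0 whenever the a_n are
# pairwise distinct.

a = np.array([0.3, -1.2, 2.0, 0.9])
V = np.vander(a, increasing=True).T          # rows: a^0, a^1, a^2, a^3
det = np.linalg.det(V)
prod = np.prod([a[n] - a[m]
                for m in range(len(a))
                for n in range(m + 1, len(a))])
assert abs(det - prod) < 1e-10
```

The Baire category argument in the proof is exactly what rules out the degenerate case in which some pair a_m = a_n for every x ∈ E_µ.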