On the orthogonality of zero-mean Gaussian measures: Sufficiently dense sampling

For a stationary random function $\xi$, sampled on a subset $D$ of $\mathbb{R}^{d}$, we examine the equivalence and orthogonality of two zero-mean Gaussian measures $\mathbb{P}_{1}$ and $\mathbb{P}_{2}$ associated with $\xi$. We give the isotropic analog to the result that the equivalence of $\mathbb{P}_{1}$ and $\mathbb{P}_{2}$ is linked with the existence of a square-integrable extension of the difference between the covariance functions of $\mathbb{P}_{1}$ and $\mathbb{P}_{2}$ from $D$ to $\mathbb{R}^{d}$. We show that the orthogonality of $\mathbb{P}_{1}$ and $\mathbb{P}_{2}$ can be recovered when the set of distances from points of $D$ to the origin is dense in the set of non-negative real numbers.


Introduction

Primary notation
We use the notation N* := N \ {0} and R+ := [0, ∞) for the set of non-zero natural numbers and the set of non-negative real numbers, respectively. Given x ∈ R^d, the Euclidean norm of x is ‖x‖ := √⟨x, x⟩, where ⟨x, y⟩ := xᵗy, x, y ∈ R^d, is the dot product on R^d. The open ball of radius r ∈ R+, centered at x ∈ R^d, is denoted by B_r(x). Finally, the Borel σ-algebra on R^d is written B(R^d).

On the equivalence and orthogonality of Gaussian measures with different covariance functions
Conditions for the equivalence or orthogonality of the Gaussian measures P1 and P2 on σ_D(ξ) are discussed in Chapter III of [5]. Other common references are [3] (Chapter VII) and [12] (Chapter III). Let us recall some terminology. P2 is said to be absolutely continuous with respect to P1 on σ_D(ξ) if P1(A) = 0 implies P2(A) = 0, A ∈ σ_D(ξ). P1 and P2 are termed equivalent if they are mutually absolutely continuous. On the other hand, P1 and P2 are referred to as orthogonal on σ_D(ξ) if there exists A ∈ σ_D(ξ) for which P1(A) = 0 but P2(A) = 1.
Orthogonal measures are denoted by P1 ⊥ P2. Using Lebesgue's decomposition theorem, it can be shown that P1 ⊥ P2 on σ_D(ξ) if there exists a sequence (A_n) ⊂ σ_D(ξ) such that P1(A_n) → 0 and P2(A_n) → 1 as n → ∞ (compare to p. 64 in [5]). Further, it is well known that two Gaussian measures are either equivalent or orthogonal (see for instance Theorem 1 in Chapter III of [5]).
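To make the dichotomy concrete, the following sketch (a hypothetical discrete example, not taken from the text) exhibits such a sequence (A_n) for two zero-mean Gaussian measures that differ only in the variance of i.i.d. coordinates: the sample-variance events A_n = {|s_n² − v₂| < ε} satisfy P1(A_n) → 0 and P2(A_n) → 1.

```python
import numpy as np

# Hypothetical illustration: two zero-mean Gaussian measures on i.i.d.
# coordinates that differ only in the variance (v1 under P1, v2 under P2).
# The events  A_n = { |s_n^2 - v2| < eps },  with s_n^2 the sample variance
# of the first n coordinates, satisfy P1(A_n) -> 0 and P2(A_n) -> 1 as n
# grows -- exhibiting the sequence used in the decomposition argument above.
rng = np.random.default_rng(0)
v1, v2, eps, n = 1.0, 2.0, 0.25, 20_000

def prob_An(var, trials=200):
    """Monte Carlo estimate of P(A_n) when the coordinates have variance `var`."""
    x = rng.normal(0.0, np.sqrt(var), size=(trials, n))
    s2 = (x ** 2).mean(axis=1)  # sample variance (the mean is known to be 0)
    return float(np.mean(np.abs(s2 - v2) < eps))

p1, p2 = prob_An(v1), prob_An(v2)
print(p1, p2)  # p1 near 0, p2 near 1
```

The law of large numbers drives s_n² to v₁ under P1 and to v₂ under P2, which is why the two probabilities separate completely for large n.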
Throughout this article we assume that ξ is stationary (homogeneous) under P1 and P2. That is, for ℓ = 1, 2, ξ has a constant mean function µ_ℓ and a covariance function c_ℓ that depends only on the difference x − y, x, y ∈ R^d. Furthermore, the following two items are assumed to be satisfied: (ii) for ℓ = 1, 2, there exists a finite measure F_ℓ, uniquely defined on B(R^d), such that

Recall that if for any x ∈ R^d, E_ℓ[(ξ_y − ξ_x)²] → 0 as y → x, ξ is said to be continuous in mean-square (m.s. continuous) under P_ℓ. We remark that under the assumption of stationarity, (ii) is satisfied when ξ is m.s. continuous under both P1 and P2 (see Theorem 2 in Section 2 of Chapter IV in [3]). Further, since for ℓ = 1, 2, ξ is stationary under P_ℓ, there exists k_ℓ : R^d → R such that for any x, y ∈ R^d, c_ℓ(x, y) = k_ℓ(x − y). Then, see p. 208 in the latter reference, ξ is m.s. continuous under P_ℓ if and only if k_ℓ is continuous at zero.
Chapter III.4.2 of [5] gives an overview of the case where Gaussian measures differ only in terms of their covariance function. In particular, the equivalence or orthogonality of P1 and P2 on σ_D(ξ) is linked to the difference between the covariance functions c1 and c2. For further reference, the following assumption is needed.

Assumption 1.1 (Absolute continuity of spectral measures). For ℓ = 1, 2, the spectral measure F_ℓ is absolutely continuous with respect to the Lebesgue measure on B(R^d) with spectral density f_ℓ(λ) = F_ℓ(dλ)/dλ.

Remark 1.1. It follows from (ii) that for ℓ = 1, 2, the spectral density f_ℓ must be non-negative a.e. with respect to the Lebesgue measure on R^d. To see it, we recall that because of (ii) there exists a stochastic orthogonal measure ζ_ℓ, defined on B(R^d), with structure function F_ℓ, such that for any x ∈ R^d we have, P_ℓ a.s., the spectral representation of ξ_x; see for instance Theorem 1 in Section 5 of Chapter IV in [3]. Then, if we let A := {λ ∈ R^d : f_ℓ(λ) < 0}, we obtain 0 ≤ E|ζ_ℓ(A)|² = F_ℓ(A) = ∫_A f_ℓ(λ) dλ ≤ 0, which shows that A must have Lebesgue measure zero.
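As a concrete instance of Assumption 1.1 (a sketch in d = 1, with parameter values chosen purely for illustration), the exponential covariance k(x) = exp(−|x|/α) is absolutely integrable and has the bounded spectral density f(λ) = (α/π)/(1 + α²λ²); the following snippet checks the Fourier inversion numerically.

```python
import numpy as np

# Sketch in d = 1 (parameter value chosen for illustration): the exponential
# covariance k(x) = exp(-|x| / alpha) is absolutely integrable, so its
# spectral measure has the bounded density
#   f(lambda) = (alpha / pi) / (1 + alpha^2 lambda^2).
# We check  k(x) = \int e^{i lambda x} f(lambda) dlambda  by a Riemann sum.
alpha = 1.5
lam = np.linspace(-400.0, 400.0, 800_001)
dlam = lam[1] - lam[0]
f = (alpha / np.pi) / (1.0 + (alpha * lam) ** 2)

def k_from_f(x):
    # real part of the inverse Fourier transform of f, evaluated at lag x
    return float(np.sum(np.cos(lam * x) * f) * dlam)

for x in (0.0, 0.7, 2.0):
    print(x, k_from_f(x), np.exp(-abs(x) / alpha))  # the two columns agree
```

The truncation of the integral to [−400, 400] introduces an error of order 1/(πα·400), which is why the agreement is only to about three decimal places.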
The following result serves as an anchor point for our study. For a proof, we refer to Section III.4.2 in [5] (see Theorem 11).
Theorem 1.1. Suppose that Assumption 1.1 is satisfied where f1 and f2 are bounded on R^d. Then, the Gaussian measures P1 and P2 are equivalent on σ_D(ξ) if and only if the restriction δ satisfies the following properties:

Note that the proof given in [5] is based on random functions that have sample paths defined on R instead of R^d. Nevertheless, the arguments proposed for the case d = 1 carry over to the case d > 1. We also remark that the existence of the spectral density f_ℓ is guaranteed if k_ℓ(x) is absolutely integrable on R^d (see for instance p. 211 in [3]). Theorem 1.1 allows us to easily deduce the orthogonality of P1 and P2 when D is chosen to be the entire R^d. In particular, under the assumption of bounded spectral densities, Theorem 1.1 shows that if f1 and f2 differ on a set of positive Lebesgue measure, P1 and P2 must be orthogonal on σ(ξ) (see for instance p. 95 in [5]). Using the two-dimensional Hankel transform ([8]), we will give the analog of Theorem 1.1 when ξ is isotropic under P1 and P2 (Theorem 2.3). Recall that ξ is isotropic under P_ℓ if k_ℓ(x) is a function of ‖x‖ only. In a further step, we aim to recover the orthogonality of P1 and P2 when D is dense in R^d (respectively, when D+ := {‖x‖ : x ∈ D} is dense in R+). Specifically, we will prove that if the covariance functions c1 and c2 are uniformly continuous on R^d × R^d, the density of D in R^d is enough to obtain orthogonal measures P1 and P2 (Theorem 2.5), under the assumption that f1 and f2 are bounded and differ on a set of positive Lebesgue measure. If ξ is isotropic, we show that one arrives at the same conclusion if D+ is dense in R+ (Theorem 2.6). The latter result allows us to easily deduce the orthogonality of P1 and P2 when ξ is sampled along a continuous and unbounded path which starts at the origin. As an example, we will deduce the almost sure orthogonality of P1 and P2 when ξ is sampled along a d-dimensional Brownian motion started at the origin (Example 3.1). In particular, Theorems 2.5 and 2.6 complement the results given in [5] by deducing the orthogonality of two Gaussian measures based on a countable and unbounded collection of points sampled in R^d. As an illustration, we will revisit the relationship between orthogonal families of Gaussian distributions and covariance parameter estimation ([1]) and discuss conditions under which consistent estimators can be obtained (Theorem 4.1).

Continuous extension
We use Theorem 1.1 as a starting point and note that, with regard to the necessity of the imposed conditions, a slight modification is possible. In particular, the extension ↑δ of item (a) in Theorem 1.1 can be chosen to be continuous. This follows from the fact that the measures F1 and F2 are finite. More explicitly, from Theorem 8 in Section III.3 of [5] we can see that the Gaussian measures P1 and P2 are equivalent on σ_D(ξ) if and only if the restriction δ permits a representation δ(x, y) = ⟨Ψ, e_{x,y}⟩_{F1×F2}, x, y ∈ D, with Ψ square-integrable with respect to F1 × F2. Then, since F1 and F2 are finite, by Hölder's inequality, Ψ is also absolutely integrable with respect to F1 × F2. Thus, since f1 and f2 are non-negative a.e. (see Remark 1.1), the function Ψ(λ, µ)f1(λ)f2(µ) is absolutely integrable on R^d × R^d. Therefore, its Fourier transform ↑δ is continuous and absolutely integrable on R^d × R^d. But, since f1 and f2 are also bounded, ↑δ is also square-integrable on R^d × R^d. Further, on D × D, ↑δ agrees with δ. Hence, we have proven the following result:

Theorem 2.1. Suppose that Assumption 1.1 is satisfied where f1 and f2 are bounded on R^d. Then, if the Gaussian measures P1 and P2 are equivalent on σ_D(ξ), there exists a continuous extension ↑δ of δ to the entire R^d × R^d which is absolutely and square-integrable on R^d × R^d.
We will see later that this relation between equivalent measures and continuous extensions of δ will allow us to easily read off the orthogonality of P1 and P2 from the uniform continuity of c1 and c2 when D is dense in R^d. Before we arrive there, we establish analogous versions of Theorems 1.1 and 2.1 when ξ is not only stationary but also isotropic.

Isotropic random fields
We consider a real orthonormal basis composed of spherical (surface) harmonics S^l_m, l = 1, . . ., h(m, d), of degree m ∈ N (see Chapter XI, Section 11.3 in [2] or also Chapter IV, Section 2 of [9]). We recall that

Under Assumption 2.1, since (ii) of Section 1.3 is satisfied, we have, for ℓ = 1, 2, and any x, y ∈ R^d (see (4.145) of [13]),

where

We consider the real vector space of sequences of functions a(κ), l = 1, . . ., h(m, d), m ∈ N. On the latter vector space, we introduce the inner product

Further, we define L⁰_D as the linear span over R of the set of sequences of functions

and introduce the correspondence

Let L_D(P_ℓ) be the linear span of {ξ_x : x ∈ D} over R under P_ℓ, ℓ = 1, 2. Given ℓ = 1, 2, we view L_D(P_ℓ) as a subspace of the inner product space L²(Ω, F, P_ℓ). Then, we readily see that for any a, b ∈ L⁰_D,

Thus, for ℓ = 1, 2, (3) provides an isometric correspondence between the inner product space L⁰_D (equipped with ⟨•, •⟩_{Φ_ℓ}) and L_D(P_ℓ). Note that if we define L_D(Φ_ℓ), ℓ = 1, 2, as the closure of L⁰_D with respect to ⟨•, •⟩_{Φ_ℓ}, the isometric correspondence (3) can be extended to an isometric correspondence between L_D(Φ_ℓ) and the closure of L_D(P_ℓ).
Suppose that there exists a ∈ L⁰_D such that ‖a‖_{Φ1} = 0 but ‖a‖_{Φ2} ≠ 0. Then, the Gaussian measures P1 and P2 are orthogonal on σ_D(ξ). This is because ξ is Gaussian with a zero-mean function: via the isometry, a corresponds to a linear combination of values of ξ that vanishes P1 a.s. but has positive variance under P2. We recall that the two norms ‖•‖_{Φ1} and ‖•‖_{Φ2} are called equivalent on L⁰_D if they generate the same topology. Then, it can be shown that the Gaussian measures P1 and P2 are orthogonal on σ_D(ξ) if the condition (5) is violated (compare to Section III.1.3 in [5]). Notice that if (5) is satisfied, then by construction of L_D(Φ1) and L_D(Φ2), (5) remains true with L⁰_D replaced with either L_D(Φ1) or L_D(Φ2). The following lemma is of central importance.
Lemma 2.2. Suppose that Assumption 2.1 is satisfied. Then, the Gaussian measures P1 and P2 are equivalent on σ_D(ξ) if and only if for any pair m, i ∈ N, l = 1, . . ., h(m, d), j = 1, . . ., h(i, d), the difference

δ^{l,j}_{m,i}(r_x, r_y) :=

is representable as

Proof. We define the functions

Then, as on p. 82 of [5], we let L⁰_{D×D} denote the linear space of functions of the form Σ_{p,q} β_{pq} e_{x_p,y_q}(λ, µ), where x_p, y_q ∈ D and the β_{pq} are real coefficients. Further, the Hilbert space L_{D×D}(F1 × F2) is defined as the closure of L⁰_{D×D} with respect to the inner product

Using polar coordinates, we introduce the class of sequences of functions

where for m, i ∈ N, l = 1, . . ., h(m, d), j = 1, . . ., h(i, d), the entries of a_{x,y}(κ, ι) are given by
Then, similar to L⁰_{D×D}, we define L̃⁰_{D×D} as the space of sequences of functions which are of the form

where x_p, y_q ∈ D and the β_{pq} are real coefficients. Finally, we define L_{D×D}(Φ1 × Φ2) as the closure of L̃⁰_{D×D} with respect to the inner product

Using (4), we observe that ⟨e_{x_p,y_q}, e_{x,y}⟩_{F1×F2} = ⟨a_{x_p,y_q}, a_{x,y}⟩_{Φ1×Φ2}.
According to Theorem 8 in Section III.3 of [5], the Gaussian measures P1 and P2 are equivalent on σ_D(ξ) if and only if the restriction δ permits a representation δ(x, y) = ⟨Ψ, e_{x,y}⟩_{F1×F2}, x, y ∈ D. Hence, we have shown that P1 and P2 are equivalent on σ_D(ξ) if and only if δ is representable as

where

Then, we can conclude the proof using the orthonormality property of the spherical harmonics.
Notice that if Assumptions 1.1 and 2.1 are satisfied, then, for ℓ = 1, 2, since k_ℓ is the Fourier transform of f_ℓ, and k_ℓ is assumed to be radial, we must conclude that f_ℓ is radial itself. We denote its radial version by g_ℓ, i.e., f_ℓ(x) = g_ℓ(‖x‖). The analog to Theorem 1.1 for isotropic random functions reads as follows:

Theorem 2.3. Suppose that Assumptions 1.1 and 2.1 are satisfied where f1 and f2 are bounded on R^d. Then, the Gaussian measures P1 and P2 are equivalent on σ_D(ξ) if and only if, for any pair m, i ∈ N, l = 1, . . ., h(m, d), j = 1, . . ., h(i, d), the scaled difference satisfies the following properties: (a) There exists an extension

Proof. We first notice that for any b ≥ 0, by Assumption 1.1,

Since for ℓ = 1, 2, f_ℓ is radial, we use spherical coordinates and get

Assume that P1 and P2 are equivalent on σ_D(ξ). We apply Lemma 2.2 and conclude that, whenever r_1, r_2 ∈ D+,

for some ψ^{l,j}_{m,i} which satisfies (7). Notice that since f1 and f2 are real-valued and bounded on R^d, the respective radial versions g1 and g2 must be real-valued and bounded on R+. Thus, if we define h^{l,j}_{m,i}, together with (9), we have that

Therefore, we define ↑d^{l,j}_{m,i} on R+ × R+ as the two-dimensional Hankel transform of h^{l,j}_{m,i}(κ, ι), which is square-integrable (see Corollary 6.1 in [8]). This proves (a) of Theorem 2.3. In addition, by (7), also (b) of Theorem 2.3 must be satisfied. To prove the other direction, suppose that (a) and (b) of Theorem 2.3 are satisfied. Then by (a), since ↑d^{l,j}_{m,i} is square-integrable on R+ × R+, the two-dimensional Hankel transform h̃^{l,j}_{m,i} of ↑d^{l,j}_{m,i} exists and is square-integrable on R+ × R+. Therefore, on D+ × D+, we have that

We set

and obtain

Finally, we can conclude that P1 and P2 are equivalent on σ_D(ξ) under application of Lemma 2.2.
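The Hankel transforms appearing in Theorem 2.3 can be evaluated numerically. As a simplified illustration (order 0 and one variable only, rather than the two-dimensional transform of order m + (d − 2)/2 used above), the following sketch verifies the classical pair H₀[e^{−pr}](κ) = p/(p² + κ²)^{3/2}:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

# Simplified illustration: a one-variable Hankel transform of order 0
# (the theorem above uses a two-dimensional transform of higher order).
# We verify the classical pair
#   H_0[e^{-p r}](kappa) = p / (p^2 + kappa^2)^{3/2}.
def hankel0(g, kappa, upper=200.0):
    """Order-0 Hankel transform: integral of g(r) J_0(kappa r) r dr over [0, upper]."""
    val, _ = quad(lambda r: g(r) * j0(kappa * r) * r, 0.0, upper, limit=500)
    return val

p = 2.0
for kappa in (0.0, 1.0, 3.0):
    exact = p / (p ** 2 + kappa ** 2) ** 1.5
    print(kappa, hankel0(lambda r: np.exp(-p * r), kappa), exact)
```

Because e^{−pr} decays exponentially, truncating the integral at a finite upper limit is harmless; for slowly decaying radial functions an oscillatory-quadrature scheme would be needed instead.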
Similar to Theorem 2.1, the equivalence of P1 and P2 allows for an extension of δ^{l,j}_{m,i} that is continuous. To arrive there, we introduce the set

Theorem 2.4. Suppose that Assumptions 1.1 and 2.1 are satisfied where f1 and f2 are bounded on R^d. Then, if the Gaussian measures P1 and P2 are equivalent on σ_D(ξ), for any pair m, i ∈ N, l = 1, . . ., h(m, d), j = 1, . . ., h(i, d), there exists a continuous extension ↑δ^{l,j}_{m,i} of δ^{l,j}_{m,i} from

Proof. Let us fix a pair m, i ∈ N, l = 1, . . ., h(m, d), j = 1, . . ., h(i, d). From the proof of Theorem 2.3 we see that the equivalence of P1 and P2 on σ_D(ξ) implies that

We recall that ψ^{l,j}_{m,i} is square-integrable with respect to

As in the proof of Theorem 2.1, since Φ1 and Φ2 are bounded, we conclude that

as well. Then, for any pair (r_1, r_2),

We remark that for fixed κ, ι ∈ R+, (r_1, r_2) → W^{l,j}_{m,i;r_1,r_2}(κ, ι) is continuous on R+ × R+ \ N. Then, using Lommel's expression for the Bessel function of the first kind (see Section 3.3 of [11]), we estimate

Hence, if one applies a similar estimate to J_{i+(d−2)/2}(ιr_2), we obtain that

where C is some fixed constant, independent of r_1, r_2 and κ, ι. In addition, since the functions z → √z J_{m−1/2}(z) and z → √z J_{i−1/2}(z) are bounded on (0, ∞) (see [4]), we get, for any (r_1, r_2),

where C is independent of r_1, r_2 and κ, ι. The function on the right-hand side of the latter inequality is absolutely integrable because of (10). Hence, we set

and obtain an extension of δ^{l,j}_{m,i} from D+ × D+ \ N to R+ × R+ \ N, which is continuous by Lebesgue's dominated convergence theorem. Then, using the same reasoning as in the proof of Theorem 2.3, since f1 and f2 are assumed to be bounded, we extend to the entire R+ × R+ by means of the two-dimensional Hankel transform of a square-integrable function. Finally, since N has Lebesgue measure zero, (11) is square-integrable on R+ × R+.

Sufficiently dense sampling
Building on Theorem 2.1, the next result relates the uniform continuity of the covariance functions c1 and c2 to the orthogonality of the Gaussian measures P1 and P2 when D is dense in R^d.

Proof. We recall that on R^d × R^d, the difference c1 − c2 is given by

Since {λ ∈ R^d : f1(λ) ≠ f2(λ)} has positive Lebesgue measure, (12) cannot be square-integrable on R^d × R^d. To see it, we recall that ξ is stationary and observe that

Then, as the latter integral is constant in y, we must have that it diverges unless the L² norm of the difference k1 − k2 is zero. But, since the Fourier transform is an isometry on L²(R^d), and f1 − f2 is assumed to have non-zero L² norm, this case is not possible. Thus, (12) is not square-integrable. Still, by assumption, it is continuous on R^d × R^d. Furthermore, since δ is assumed to be uniformly continuous on D × D and D is dense in R^d, any continuous extension of δ to R^d × R^d must be given by (12). This concludes the proof under application of Theorem 2.1.
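The divergence step in this proof can be written out as a one-line computation; with the Fourier transform normalized (as assumed here) so that it is an isometry on L²(R^d) up to a constant:

```latex
\int_{\mathbb{R}^d}\!\int_{\mathbb{R}^d}\bigl(k_1(x-y)-k_2(x-y)\bigr)^2\,\mathrm{d}x\,\mathrm{d}y
  = \int_{\mathbb{R}^d}\Bigl(\int_{\mathbb{R}^d}\bigl(k_1(u)-k_2(u)\bigr)^2\,\mathrm{d}u\Bigr)\mathrm{d}y ,
```

which is infinite unless ‖k1 − k2‖_{L²(R^d)} = 0; and by Plancherel's theorem, ‖k1 − k2‖²_{L²} is a positive multiple of ‖f1 − f2‖²_{L²}, which is strictly positive because f1 and f2 are bounded, integrable, and differ on a set of positive Lebesgue measure.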
If ξ is also isotropic, the density of D+ in R+ is sufficient to recover the orthogonality of the Gaussian measures P1 and P2 on σ_D(ξ).

Proof. First of all, using (2), we see that ↑*δ_m must be uniformly continuous on R+ × R+. This follows from the fact that
See for instance (b) of Corollary 2.9 in [9]. In particular, ↑*δ_m is a continuous extension of δ^{l,l}_{m,m} from D+ × D+ \ N to R+ × R+ \ N. But, because of the assumption that {λ ∈ R^d : f1(λ) ≠ f2(λ)} has positive Lebesgue measure, it cannot be the case that

To see this, we use the identity

and note that

Then, using the same reasoning as in the proof of Theorem 2.5, the latter integral is not finite. Therefore, we have found a pair (m, i) with m = i for which ↑*δ_m is a continuous extension of δ^{l,l}_{m,m} from D+ × D+ \ N to R+ × R+ \ N, and any continuous extension must be given by ↑*δ_m. This concludes the proof under application of Theorem 2.4.
From the proofs of Theorems 2.5 and 2.6, it becomes obvious that the assumption that {λ ∈ R^d : f1(λ) ≠ f2(λ)} has positive Lebesgue measure can be replaced with the assumption that {x ∈ R^d : k1(x) ≠ k2(x)} has positive Lebesgue measure. Notice also that if D is dense in R^d, then clearly D+ is dense in R+. Of course, the converse is not true. We will see a particular example in the next section. We further remark that if ξ is measurable with respect to some larger σ-algebra G, i.e., U ⊂ G, then Theorems 2.5 and 2.6 give sufficient conditions to deduce the orthogonality of P1 and P2 on {ξ⁻¹(G) : G ∈ G}. This is because the orthogonality of P1 and P2 on {ξ⁻¹(G) : G ∈ G} follows from the orthogonality of P1 and P2 on σ_D(ξ). To finish this section, we take a closer look at a well-known family of covariance functions (see [13]).

Stochastic sampling
Let {X_t : t ∈ T}, T ⊂ R, be a stochastic process defined on a probability space (Ω_X, F_X, P_X), taking values in R^d. That is, we consider a random function ω_X → X(ω_X) with R^d-valued sample paths defined on T. We assume that (Ω_X, F_X, P_X) is complete in the measure-theoretic sense. Further, X starts from the origin, i.e., X_{t_0}(ω_X) = 0, ω_X ∈ Ω_X, and has continuous sample paths. For a given ω_X ∈ Ω_X, X[T](ω_X) denotes the image of X(ω_X), i.e., x ∈ X[T](ω_X) if and only if x = X_t(ω_X) for some t ∈ T. For now, we assume that ξ introduced in Section 1.2 is observed along sample paths restricted to X[T](ω_X), ω_X ∈ Ω_X. Explicitly, we consider two Gaussian measures P1 and P2 on σ_{X[T](ω_X)}(ξ) which differ only in their covariance functions c1 and c2. To adapt the notation of the previous sections, we put X[T]+(ω_X) := {‖x‖ : x ∈ X[T](ω_X)}, ω_X ∈ Ω_X. Following Theorem 2.5 we obtain:

Corollary 3.1. Suppose that the assumptions of Theorem 2.5 are satisfied with c1 and c2 uniformly continuous on R^d × R^d. Then,

i.e., the Gaussian measures P1 and P2 are orthogonal on σ_{X[T]}(ξ) P_X a.s.
We note that the set {X[T ] is dense in R d } is a member of F X since X takes values in R d and has continuous sample paths on T .Further, since (Ω X , F X , P X ) is assumed to be complete, under the assumptions of Corollary 3.1, {P 1 ⊥ P 2 on σ X[T ] (ξ)} is a member of F X as well.Similarly, for the isotropic case, the next result is deduced from Theorem 2.6.Corollary 3.2.Suppose that the assumptions of Theorem 2.6 are satisfied with c 1 and c 2 uniformly continuous on As we have assumed that X starts from the origin, X[T ] + is actually equal to R + , with P X probability one, if X has sample paths that are almost surely unbounded.This is summarized in the following lemma: For the other direction, since the sample paths of X are continuous, we have that for any ω X ∈ Ω X , X[T ](ω X ) is path-connected.Consider any r ∈ R + and the neighborhood B r (0) around the origin.Since ω X ∈ {X[T ] is unbounded}, we must conclude that there exists v such that v ∈ X[T ](ω X ) \ B r (0).But since X[T ](ω X ) is path-connected, there also exists v ′ such that v ′ ∈ ∂B r (0) ∩ X[T ](ω X ).Therefore we have that v ′ = r, which shows that R + ⊂ X[T ] + (ω X ).
Using Lemma 3.3, we have proven the following theorem:

Theorem 3.4. Suppose that the assumptions of Theorem 2.6 are satisfied with c1 and c2 uniformly continuous on R^d × R^d. Then, if X[T] is unbounded P_X a.s., we have

Example 3.1 (Gaussian random fields sampled along Brownian paths). It is well known that if we let X = B, with T = R+, be a d-dimensional Brownian motion starting from the origin, then for d ≥ 2, the Lebesgue measure of B[R+] is zero with P_B probability one. This is shown in Théorème 53, p. 240, of [7] for the case d = 2. A more recent proof, for the general case (d ≥ 2), is given in the second paragraph of p. 197 in [6]. Still, it is true that for d = 1, 2, B[R+] is dense in R, respectively R², with P_B probability one (see Propositions 2.14 and 7.16 in [6]). For the case d ≥ 3, we know that P_B a.s. lim_{t→∞} ‖B_t‖ = ∞ (see Theorem 7.17 in [6]). Further, for any d ≥ 1, the sample paths of B are continuous (see Definitions 2.12 and 2.24 of [6]). In conclusion, given any d ≥ 1, under sufficient conditions on c1 and c2 (see Corollary 3.1 and Theorem 3.4), with P_B probability one, the Gaussian measures P1 and P2 are orthogonal on σ_{B[R+]}(ξ), i.e., when ξ is sampled along the paths of B.
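The mechanism behind Lemma 3.3 and Example 3.1 can be seen numerically. The following sketch (step size and horizon chosen for illustration) discretizes a 2-dimensional Brownian path started at the origin; because the path is connected, the visited radii ‖B_t‖ sweep every value up to the running maximum, so the sampled set of radii fills [0, max_t ‖B_t‖] ever more finely.

```python
import numpy as np

# Sketch: a discretized 2-d Brownian path started at the origin.  The set
# of visited radii ||B_t|| sweeps every value up to the running maximum
# (path-connectedness of the image), so the sampled radii fill the interval
# [0, max_t ||B_t||] with gaps that shrink as the discretization is refined.
rng = np.random.default_rng(1)
n, dt = 200_000, 1e-3
steps = rng.normal(0.0, np.sqrt(dt), size=(n, 2))
path = np.cumsum(steps, axis=0)                   # B_{t_1}, ..., B_{t_n}
radii = np.sort(np.linalg.norm(path, axis=1))
# largest gap between consecutive sampled radii, relative to the range
gap = float(np.max(np.diff(np.concatenate(([0.0], radii)))) / radii[-1])
print(radii[-1], gap)  # the relative gap is small and shrinks with finer dt
```

This is only the discrete shadow of the lemma: the continuous-time image covers [0, max ‖B_t‖] exactly, while the discretization leaves gaps bounded by the step magnitude.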

Inference on random fields
In this section, we let D = {x_i : i ∈ N*} be a fixed sequence of coordinates in R^d. That is, we consider the observation vectors Y_n := (ξ_{x_1}, . . ., ξ_{x_n}), n ∈ N*. Let Θ ⊂ R^p. Suppose that P_θ, θ ∈ Θ, is a family of Gaussian measures defined on σ_D(ξ). We remain in the setting of Section 1.3, i.e., for any two θ1, θ2 ∈ Θ, under P_{θ1} and P_{θ2}, ξ is stationary and items (i) and (ii) are satisfied. Suppose that there exists θ0 ∈ Θ such that the true distribution of ξ is obtained from P_{θ0}. It is further assumed that there exists a neighborhood of θ0 in Θ. In the framework of parameter estimation, θ0 is treated as the unknown and Θ is regarded as the parameter space. A maximum likelihood (ML) estimator for θ0 is defined to be any sequence of random variables (θ̂_n) which is such that for any n ∈ N*,

The function θ → p_n(θ)(ω) is called the likelihood function: the probability density function of Y_n, regarded as a function of θ. The sequence (θ̂_n) is said to be strongly consistent for θ0 if

We say that the family P_θ, θ ∈ Θ, is a family of orthogonal Gaussian measures on σ_D(ξ) if for any two θ1, θ2 ∈ Θ with θ1 ≠ θ2, P_{θ1} and P_{θ2} are orthogonal on σ_D(ξ). The next result is inspired by [14] (see the proof of Theorem 3).
Theorem 4.1. Let Θ be closed and convex. Assume that an ML estimator for θ0 exists and is continuous on Θ. Suppose that there exists N ∈ N* such that for any n ≥ N and ω ∈ Ω, θ → ϕ_n(θ)(ω) is log-concave on Θ. Then, if P_θ, θ ∈ Θ, is a family of orthogonal Gaussian measures on σ_D(ξ), (θ̂_n) is strongly consistent.
Proof. First of all, since P_θ, θ ∈ Θ, is a family of orthogonal Gaussian measures on σ_D(ξ), we have that with P_{θ0} probability one, (ϕ_n(θ)) converges pointwise to zero whenever θ ≠ θ0, i.e.,

This follows from the fact that the sequence (ϕ_n(θ)) forms a martingale on (Ω, σ_D(ξ), P_{θ0}) with respect to the filtration {σ(Y_n) : n ∈ N*} (see Theorem 1, p. 442 in [3]). Given ε > 0, let B_ε(θ0) be a neighborhood of θ0 contained in Θ. We show that with P_{θ0} probability one,

In particular, (15) shows that

Given an ML estimator for θ0, we remark that the assumptions on ϕ_n(θ)(ω) given in Theorem 4.1 are satisfied if the Hessian matrix of the log-likelihood function θ → log(p_n(θ))(ω) becomes negative semidefinite for n large enough. As a simple illustration, we take Θ = [a, b], 0 < a < b < ∞, and consider a family P_{σ²}, σ² ∈ [a, b], defined upon the exponential family (13), with known scale parameter α0 and unknown variance parameter σ². We readily see that the second derivative of the log-likelihood function is negative. Then, if the sequence of coordinates D is such that D+ is dense in R+, we have seen in Example 2.1 that the family P_{σ²}, σ² ∈ [a, b], is a family of orthogonal Gaussian measures on σ_D(ξ). In this case, we can apply Theorem 4.1 and deduce that variance ML estimators (σ̂²_n) for the exponential family are strongly consistent. This adds to the results of [14], in which the sampling domain is assumed to be bounded. In particular, if we remain in the setting of Example 3.1 and consider a d-dimensional Brownian motion B starting from zero, we have that B[R+ ∩ Q]+ is dense in R+ with P_B probability one. Thus, if we assume that ξ is sampled along B[R+ ∩ Q], a sequence of variance ML estimators (σ̂²_n) is strongly consistent with P_B probability one.
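The variance estimator in this illustration admits a closed form that is easy to simulate. The following sketch (parameter values and sampling design chosen for illustration; R_n denotes the correlation matrix of Y_n under the exponential model with known α0) uses the standard identity σ̂²_n = Y_nᵀ R_n⁻¹ Y_n / n for a known correlation structure:

```python
import numpy as np

# Sketch of the variance ML estimator for the exponential model
# c(x, y) = sigma^2 exp(-|x - y| / alpha0), alpha0 known.  With R_n the
# correlation matrix of Y_n, the ML estimator has the closed form
#   sigma_hat^2_n = Y_n^T R_n^{-1} Y_n / n,
# which is distributed as sigma^2 chi^2_n / n under the true measure and
# is therefore consistent as n grows.
rng = np.random.default_rng(2)
sigma2, alpha0 = 1.7, 1.0
x = np.sort(rng.uniform(0.0, 50.0, size=2_000))          # sampling points in R
R = np.exp(-np.abs(x[:, None] - x[None, :]) / alpha0)    # correlation matrix
L = np.linalg.cholesky(R + 1e-10 * np.eye(len(x)))       # jitter for stability
Y = np.sqrt(sigma2) * (L @ rng.standard_normal(len(x)))  # exact GP sample
z = np.linalg.solve(L, Y)                                # solves L z = Y
sigma2_hat = float(z @ z) / len(x)                       # = Y^T R^{-1} Y / n
print(sigma2_hat)  # close to the true value 1.7
```

Solving the triangular system L z = Y rather than inverting R keeps the computation stable; z ᵀz then equals Yᵀ(LLᵀ)⁻¹Y = YᵀR⁻¹Y.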

Theorem 2.5.
Suppose that Assumption 1.1 is satisfied where f1 and f2 are bounded on R^d and such that the set {λ ∈ R^d : f1(λ) ≠ f2(λ)} has positive Lebesgue measure. Then, if c1 and c2 are uniformly continuous on R^d × R^d and D is dense in R^d, the Gaussian measures P1 and P2 are orthogonal on σ_D(ξ).

Theorem 2.6.
Suppose that Assumptions 1.1 and 2.1 are satisfied where f1 and f2 are bounded on R^d and such that the set {λ ∈ R^d : f1(λ) ≠ f2(λ)} has positive Lebesgue measure. Then, if c1 and c2 are uniformly continuous on R^d × R^d and D+ is dense in R+, the Gaussian measures P1 and P2 are orthogonal on σ_D(ξ).