An infinite-dimensional helix invariant under spherical projections

We classify all subsets $S$ of the projective Hilbert space with the following property: for every point $\pm s_0\in S$, the spherical projection of $S\backslash\{\pm s_0\}$ to the hyperplane orthogonal to $\pm s_0$ is isometric to $S\backslash\{\pm s_0\}$. In probabilistic terms, this means that we characterize all zero-mean Gaussian processes $Z=(Z(t))_{t\in T}$ with the property that for every $s_0\in T$ the conditional distribution of $(Z(t))_{t\in T}$ given that $Z(s_0)=0$ coincides with the distribution of $(\varphi(t; s_0) Z(t))_{t\in T}$ for some function $\varphi(t;s_0)$. A basic example of such a process is the stationary zero-mean Gaussian process $(X(t))_{t\in\mathbb R}$ with covariance function $\mathbb E [X(s) X(t)] = 1/\cosh (t-s)$. We show that, in general, the process $Z$ can be decomposed into a union of mutually independent processes of two types: (i) processes of the form $(a(t) X(\psi(t)))_{t\in T}$, with $a: T\to \mathbb R$, $\psi: T\to \mathbb R$, and (ii) certain exceptional Gaussian processes defined on four-point index sets. The above problem is reduced to the classification of metric spaces in which, in every triangle, the largest side equals the sum of the remaining two sides.


Introduction and main results
1.1. Introduction. In the present paper, we shall be interested in the stationary Gaussian process X = (X(t))_{t∈R} with zero mean and covariance function
$$\mathbb E[X(s)X(t)] = \frac{1}{\cosh(t-s)}, \qquad s, t \in \mathbb R. \tag{1}$$
This process has appeared in the literature [8,7,3,1,2,9,4]. By comparing the covariance functions it is easy to check that both processes f and g are essentially time-changes of X, namely
$$\Big(\frac{f(\tanh t)}{\cosh t}\Big)_{t\in\mathbb R} \ \overset{\mathrm{f.d.d.}}{=}\ (X(t))_{t\in\mathbb R}, \tag{2}$$
and similarly for g, where $\overset{\mathrm{f.d.d.}}{=}$ denotes the equality of finite-dimensional distributions.
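The time-change identity can be verified numerically on the level of covariance functions. The sketch below assumes that f(u) = Σ_k ξ_k u^k with i.i.d. standard normal coefficients ξ_k, so that E[f(u)f(v)] = 1/(1 − uv) for |u|, |v| < 1; the function names are ours:

```python
import math

def cov_X(s, t):
    # Covariance of the stationary process X: E[X(s)X(t)] = 1/cosh(t - s).
    return 1.0 / math.cosh(t - s)

def cov_f(u, v):
    # Covariance of f(u) = sum_k xi_k u^k with i.i.d. N(0,1) coefficients:
    # E[f(u)f(v)] = sum_k (uv)^k = 1/(1 - uv) for |u|, |v| < 1.
    return 1.0 / (1.0 - u * v)

def cov_time_changed_f(s, t):
    # Covariance of the time-changed process f(tanh t)/cosh t.
    return cov_f(math.tanh(s), math.tanh(t)) / (math.cosh(s) * math.cosh(t))

# The two covariance functions agree, hence the processes agree in f.d.d.
for s, t in [(0.0, 1.0), (-2.0, 0.5), (3.0, 3.7)]:
    assert abs(cov_time_changed_f(s, t) - cov_X(s, t)) < 1e-12
```

The agreement is exact: (1 − tanh s tanh t) cosh s cosh t = cosh s cosh t − sinh s sinh t = cosh(s − t).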
If Z = (Z(t))_{t∈T} denotes any of the processes X, f, g introduced above, then the following remarkable property holds: for every s0 ∈ T, the conditional distribution of (Z(t))_{t∈T} given that Z(s0) = 0 coincides with the distribution of (ϕ(t; s0)Z(t))_{t∈T} for a suitable function ϕ(t; s0). So, the law of the conditioned process is the same as the law of the original process up to multiplication by some deterministic function. Specifically, in the case of the process X, for every s0 ∈ R, the law of (X(t))_{t∈R} conditioned on X(s0) = 0 is the same as the law of the process
$$(\tanh(t - s_0)\, X(t))_{t\in\mathbb R}.$$
Moreover, for all pairwise different s1, ..., sd ∈ R, the law of the process (X(t))_{t∈R} conditioned on X(s1) = ... = X(sd) = 0 is the same as the law of
$$\Big(\prod_{k=1}^{d} \tanh(t - s_k)\, X(t)\Big)_{t\in\mathbb R}.$$
The above property was first observed by Peres and Virág [8, Proposition 12] for a modification of g(t) in which the ξk's are complex-valued standard normal, and was an important step in their proof that the complex zeroes of this process form a determinantal point process. The same result can be found in [3, Proposition 5.1.3]. For the process g(t) itself, a similar property was used by Matsumoto and Shirai [7, Lemma 4.2] to establish the Pfaffian character of both real and complex zeroes of g(t). Recently, Poplavskyi and Schehr [9] used the Pfaffian character of the zeroes of X to compute the persistence exponent of X and several related processes.
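For the process X, the one-point conditioning can be checked directly on covariances using the standard Gaussian conditioning formula. The sketch below (names ours) verifies that the conditional covariance matches that of (tanh(t − s0)X(t))_{t∈R}:

```python
import math

def sech(x):
    return 1.0 / math.cosh(x)

def conditional_cov(t, u, s0):
    # Gaussian conditioning: Cov(X(t), X(u) | X(s0) = 0)
    # = Cov(X(t), X(u)) - Cov(X(t), X(s0)) Cov(X(u), X(s0)) / Var X(s0),
    # where Var X(s0) = 1.
    return sech(t - u) - sech(t - s0) * sech(u - s0)

def scaled_cov(t, u, s0):
    # Covariance of the multiplied process (tanh(t - s0) X(t)).
    return math.tanh(t - s0) * math.tanh(u - s0) * sech(t - u)

for t, u, s0 in [(1.0, 2.0, 0.0), (-1.5, 0.7, 0.3), (5.0, -2.0, 1.0)]:
    assert abs(conditional_cov(t, u, s0) - scaled_cov(t, u, s0)) < 1e-12
```

Since both processes are zero-mean Gaussian, equality of covariances gives equality of the finite-dimensional distributions.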
The aim of the present paper is to classify all Gaussian processes having the above property. In the spirit of the work of Kolmogorov [5,6], we shall state the problem in purely geometric terms. Namely, we regard (X(t)) t∈R as a curve (a "helix") in the unit sphere of the Hilbert space L 2 (Ω, F, P), where (Ω, F, P) is the probability space on which (X(t)) t∈R is defined. Conditioning on X(s 0 ) = 0 corresponds to the orthogonal projection onto the hyperplane orthogonal to X(s 0 ). Because of the appearance of the function ϕ(t; s 0 ) in the above property, it is natural to pass to the projective Hilbert space and to replace orthogonal projections by the so-called spherical projections. We are led to the problem of classifying all subsets of the projective Hilbert space that do not change their isometry type under spherical projections.

1.2. Geometric result.
Let H be a Hilbert space. The unit sphere of H will be denoted by S(H) := {x ∈ H : ‖x‖ = 1}. The projective (or elliptic) space P(H) := S(H)/± is obtained from S(H) by identifying the antipodal points +x and −x, for all x ∈ S(H). The elements of P(H) will be denoted by ±x, ±y, and so on. The projective space is endowed with the geodesic metric ρ(±x, ±y) = arccos |⟨x, y⟩|. For every vector x0 ∈ S(H) we denote its orthogonal complement by x0⊥ := {x ∈ H : ⟨x, x0⟩ = 0}. Let P(x0⊥) be the projective space constructed from the Hilbert space x0⊥. Given an element ±x0 ∈ P(H), we define the spherical projection p±x0 : P(H)\{±x0} → P(x0⊥) by
$$p_{\pm x_0}(\pm y) := \pm\,\frac{y - \langle y, x_0\rangle x_0}{\|y - \langle y, x_0\rangle x_0\|}.$$
In words, we first orthogonally project ±y to the hyperplane x0⊥ and then rescale the result to have unit length. Equivalently, p±x0(±y) is the point in the projective space of the hyperplane x0⊥ minimizing the distance to ±y. Note that the projection is not defined for ±y = ±x0. A direct computation shows that, for all ±x, ±y ∈ P(H)\{±x0},
$$\langle p_{\pm x_0}(\pm x), p_{\pm x_0}(\pm y)\rangle = \pm\,\frac{\langle x, y\rangle - \langle x, x_0\rangle\langle y, x_0\rangle}{\sqrt{(1 - \langle x, x_0\rangle^2)(1 - \langle y, x_0\rangle^2)}}. \tag{3}$$
Definition 1.1. We say that a set of points S ⊂ P(H) does not change its isometry type under spherical projections if for all ±s0 ∈ S and all ±x, ±y ∈ S\{±s0} we have ρ(p±s0(±x), p±s0(±y)) = ρ(±x, ±y).
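The spherical projection and the formula for the scalar product of two projections can be checked numerically; a minimal sketch (function names ours), working with concrete unit-vector representatives:

```python
import math
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(a):
    n = math.sqrt(dot(a, a))
    return [x / n for x in a]

def spherical_projection(y, x0):
    # Orthogonally project y onto the hyperplane orthogonal to the unit
    # vector x0, then rescale to unit length: a representative of p_{±x0}(±y).
    c = dot(y, x0)
    r = [yi - c * x0i for yi, x0i in zip(y, x0)]
    return normalize(r)

random.seed(1)
x0 = normalize([random.gauss(0, 1) for _ in range(5)])
x = normalize([random.gauss(0, 1) for _ in range(5)])
y = normalize([random.gauss(0, 1) for _ in range(5)])

# Scalar product of two projections, in terms of the original vectors:
# <p(x), p(y)> = (<x,y> - <x,x0><y,x0>) / sqrt((1-<x,x0>^2)(1-<y,x0>^2)).
lhs = dot(spherical_projection(x, x0), spherical_projection(y, x0))
rhs = (dot(x, y) - dot(x, x0) * dot(y, x0)) / math.sqrt(
    (1 - dot(x, x0) ** 2) * (1 - dot(y, x0) ** 2))
assert abs(lhs - rhs) < 1e-12
```

For these representatives the identity holds exactly, without the sign ambiguity that appears when passing to the projective space.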
Remark 1.2. By the definition of the metric ρ, the condition of Definition 1.1 can be written as
$$\Big|\frac{\langle x, y\rangle - \langle x, s_0\rangle\langle y, s_0\rangle}{\sqrt{(1 - \langle x, s_0\rangle^2)(1 - \langle y, s_0\rangle^2)}}\Big| = |\langle x, y\rangle|. \tag{4}$$
Our aim is to describe all sets S having this property, up to isometry. Let us first consider some examples.
Example 1.3 (Helix). We can take H := L²(Ω, F, P) to be the L²-space of the probability space on which the Gaussian process (X(t))_{t∈R} with covariance function (1) is defined, and then put h(t) := ±X(t) ∈ P(H). Then, the set S = {h(t)}_{t∈R} satisfies the condition from Definition 1.1. To see this, observe that for every s0, x ∈ R with x ≠ s0 we have ⟨X(x), X(s0)⟩ = 1/cosh(x − s0). Given this, one easily checks that for every x ≠ s0 and y ≠ s0,
$$\langle p_{\pm h(s_0)}(\pm h(x)),\, p_{\pm h(s_0)}(\pm h(y))\rangle = \pm\,\frac{1}{\cosh(x - y)},$$
so that ρ(p±h(s0)(±h(x)), p±h(s0)(±h(y))) = ρ(±h(x), ±h(y)). Trivially, any subset of S also satisfies the condition from Definition 1.1.
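The invariance of the helix can be verified on the level of scalar products. In the sketch below (names ours), projected_cov implements the right-hand side of the projection formula with ⟨h(a), h(b)⟩ = 1/cosh(a − b):

```python
import math

def sech(x):
    return 1.0 / math.cosh(x)

def rho_from_cov(c):
    # Metric on the projective space: rho = arccos |<u, v>|.
    return math.acos(abs(c))

def projected_cov(x, y, s0):
    # Scalar product of the spherical projections of h(x) and h(y)
    # along h(s0), computed from <h(a), h(b)> = sech(a - b).
    num = sech(x - y) - sech(x - s0) * sech(y - s0)
    den = math.sqrt((1 - sech(x - s0) ** 2) * (1 - sech(y - s0) ** 2))
    return num / den

# The projection changes the scalar product at most by a sign, so the
# projective distance rho is preserved.
for x, y, s0 in [(0.3, 1.7, -0.4), (2.0, -1.0, 0.5), (4.0, 4.5, 1.0)]:
    assert abs(rho_from_cov(projected_cov(x, y, s0))
               - rho_from_cov(sech(x - y))) < 1e-12
```

Behind the assertion is the hyperbolic identity sech(a − b) − sech a sech b = tanh a tanh b sech(a − b), applied with a = x − s0, b = y − s0.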
Example 1.4 (Orthogonal unions). If Sα ⊂ P(H), α ∈ I, are mutually orthogonal sets such that each Sα satisfies the condition from Definition 1.1, then one easily checks that their union ∪α∈I Sα also satisfies this condition. Orthogonality means that ⟨u, v⟩ = 0 for all ±u ∈ Sα and ±v ∈ Sβ with α ≠ β.
Example 1.5 (Four-point configurations). Let A, B, C, D ∈ S(H) be four unit vectors whose Gram matrix is determined by two parameters x and y. Here, x > 0 and y > 0 are distinct numbers with the property that the Gram matrix of A, B, C, D is positive semi-definite. The eigenvalues λ1, λ2, λ3, λ4 of the Gram matrix can be computed explicitly; the formulae can be proved by comparing the characteristic polynomial of the Gram matrix with the polynomial $\prod_{k=1}^{4}(\lambda - \lambda_k)$. The Gram matrix is positive semi-definite iff all λk's are non-negative. We always have λ1 > 0. The set of admissible pairs (x, y), i.e. pairs for which the remaining eigenvalues are non-negative and x ≠ y, is shown in Figure 1. We now claim that the set S = {±A, ±B, ±C, ±D} ⊂ P(H) satisfies the condition of Definition 1.1. To see this, it suffices to check the condition for s0 = ±A (the rest follows by symmetry); a direct computation verifies it for the pair B, C, and the relations for the pairs C, D and D, B can be checked similarly. Finally, we claim that S = {±A, ±B, ±C, ±D} is not isometric to a subset of the helix from Example 1.3. Indeed, in the set S all pairwise scalar products can be decomposed into 3 groups each consisting of 2 equal products, while our conditions on x and y ensure that no four points of the helix have this property. Now we can state our main result classifying sets which do not change their isometry type under spherical projections. Theorem 1.6. Let S ⊂ P(H) be a set satisfying the condition of Definition 1.1. Then, we can represent S as a disjoint union S = ∪α∈I Sα of pairwise orthogonal sets Sα, α ∈ I, such that each Sα is isometric either to a subset of the helix from Example 1.3 or to a four-point configuration from Example 1.5.
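The eigenvalue computation can be illustrated numerically. The sketch below is hypothetical in that it assumes the Gram entries of A, B, C, D to be 1/cosh of the distances x, y, x + y paired between opposite pairs (consistent with the relation ⟨u, v⟩ = 1/cosh d(u, v) used in Section 2); the pairing, the function names and the test pair (x, y) = (2, 3) are our assumptions:

```python
import math

def sech(x):
    return 1.0 / math.cosh(x)

def gram(x, y):
    # Hypothetical Gram matrix of A, B, C, D: entries sech of the
    # distances x, y, x + y, paired between opposite pairs of points.
    a, b, c = sech(x), sech(y), sech(x + y)
    return [[1, a, b, c],
            [a, 1, c, b],
            [b, c, 1, a],
            [c, b, a, 1]]

def eigenvalues(x, y):
    # For this symmetric pattern the eigenvectors are (1,1,1,1),
    # (1,1,-1,-1), (1,-1,1,-1), (1,-1,-1,1); the eigenvalues are
    # 1 +/- a +/- b +/- c with an even number of minus signs.
    a, b, c = sech(x), sech(y), sech(x + y)
    return [1 + a + b + c, 1 + a - b - c, 1 - a + b - c, 1 - a - b + c]

# Verify the eigenvalue formulae by checking G v = lambda v directly.
x, y = 2.0, 3.0
G = gram(x, y)
vecs = [[1, 1, 1, 1], [1, 1, -1, -1], [1, -1, 1, -1], [1, -1, -1, 1]]
for lam, v in zip(eigenvalues(x, y), vecs):
    Gv = [sum(G[i][j] * v[j] for j in range(4)) for i in range(4)]
    assert all(abs(Gv[i] - lam * v[i]) < 1e-12 for i in range(4))

# For this pattern, (x, y) = (2, 3) gives a positive semi-definite matrix.
assert all(lam >= 0 for lam in eigenvalues(x, y))
```

The eigenvector pattern reflects the Klein four-group symmetry of the configuration; the trace identity λ1 + λ2 + λ3 + λ4 = 4 provides an additional consistency check.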
We shall prove Theorem 1.6 and its corollaries in Section 2. The classification of Theorem 1.6 simplifies considerably if we restrict our attention to sets S which are continuous curves. Corollary 1.7. Let γ : R → P(H) be an injective continuous curve whose image S := γ(R) satisfies the condition of Definition 1.1. Then S is isometric to {h(t) : t ∈ A} for some interval A ⊂ R. Let us finally restate Theorem 1.6 in the language of Gaussian processes. Corollary 1.8. Consider a zero-mean Gaussian process Z = (Z(t))_{t∈T} such that for every s0 ∈ T the conditional distribution of (Z(t))_{t∈T} given that Z(s0) = 0 coincides with the distribution of (ϕ(t; s0)Z(t))_{t∈T} for some function ϕ(t; s0). Then, there is a disjoint decomposition T = ∪α∈I Tα and a function a : T → R such that the processes (Z(t))_{t∈Tα}, α ∈ I, are mutually independent, and each of them has the same law as either (a(t)X(ψα(t)))_{t∈Tα} for some function ψα : Tα → R, where (X(s))_{s∈R} is as in (1), or (a(t)Y_{x,y}(ψα(t)))_{t∈Tα}, where Y_{x,y} = (Y(s))_{s∈{A,B,C,D}} is a zero-mean, unit-variance Gaussian process with the covariance structure of Example 1.5, for some pair (x, y) which is admissible in the sense of Example 1.5.

1.3. Metric spaces with triangle equality. In the proof of Theorem 1.6 given in Section 2 we shall reduce Theorem 1.6 to the classification of metric spaces with the following property: in every triangle, the largest side equals the sum of the two remaining sides (we refer to this property as the triangle equality). Example 1.10. Let E = {A, B, C, D} be a four-point metric space with
$$d(A, B) = d(C, D) = x, \qquad d(A, C) = d(B, D) = y, \qquad d(A, D) = d(B, C) = x + y.$$
Here, x > 0 and y > 0 are arbitrary numbers. This space satisfies the triangle equality but cannot be isometrically embedded into the real line. Theorem 1.11. If (E, d) is a metric space satisfying the triangle equality and whose cardinality is different from 4, then it is isometric to a subset of the real line. If E has exactly 4 points, then it either can be isometrically embedded into the real line or is isometric to one of the spaces from Example 1.10.
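Both ingredients of this classification can be checked mechanically. The sketch below (our code, with the four-point distances paired as x, y, x + y between opposite pairs, which is our reading of Example 1.10) verifies the triangle equality for a subset of the line and for the four-point space:

```python
import itertools

def triangle_equality(points, d):
    # In every triangle the largest side equals the sum of the other two.
    for p, q, r in itertools.combinations(points, 3):
        a, b, c = sorted([d(p, q), d(q, r), d(p, r)])
        if abs(c - (a + b)) > 1e-12:
            return False
    return True

# Subsets of the real line satisfy the triangle equality:
# for p < q < r, the largest distance r - p equals (q - p) + (r - q).
line = [0.0, 0.7, 1.3, 2.0, 5.0]
assert triangle_equality(line, lambda p, q: abs(p - q))

# The four-point space: distances x, y, x + y between opposite pairs.
x, y = 1.0, 2.0
D = {frozenset('AB'): x, frozenset('CD'): x,
     frozenset('AC'): y, frozenset('BD'): y,
     frozenset('AD'): x + y, frozenset('BC'): x + y}
d4 = lambda p, q: D[frozenset(p + q)]
assert triangle_equality('ABCD', d4)
```

That this four-point space is not line-embeddable can be seen directly: placing A at 0 and D at x + y forces both B and C to the point x (or both to y), contradicting d(B, C) = x + y.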
The proof will be given in Section 3.
1.4. Open questions. The property of Gaussian processes studied here was used in [8,7,9] to establish the determinantal/Pfaffian character of the zeroes of the corresponding process. It is natural to ask for a description of all (sufficiently smooth) stationary, centered Gaussian processes whose zeroes form a Pfaffian/determinantal point process. For example, the zero-mean, stationary complex-valued Gaussian process (X_C(t))_{t∈R} with covariance function
$$\mathbb E[X_{\mathbb C}(s)\overline{X_{\mathbb C}(t)}] = \frac{1}{\cosh(t-s)}$$
can be extended to an analytic function on the strip {t ∈ C : |Im t| < π/4}, and its complex zeroes form a determinantal point process with an explicit kernel.
This can be easily derived from the result of [8] by applying the time-change (2). It is natural to conjecture that if a zero-mean, unit-variance, stationary complex Gaussian process admits an analytic continuation to some strip {t ∈ C : |Im t| < ε} and its zeroes form a determinantal point process there, then this process has the same law as (e^{iκt} X_C(αt))_{t∈R} for some κ ∈ R and α > 0. Similarly, one may wonder whether every stationary, smooth, zero-mean and unit-variance Gaussian process on R whose real zeroes form a Pfaffian point process is necessarily of the form (X(αt))_{t∈R} for some α > 0.

Proof of Theorem 1.6
Lemma 2.1. If for some ±x1, ±x2 ∈ S we have x1 ⊥ x2, then every ±y ∈ S is orthogonal to at least one of the elements ±x1 or ±x2.
Comparing these two results, we obtain that ⟨x1, y⟩ = 0 or ⟨x2, y⟩ = 0.
Lemma 2.2. For two elements ±x, ±y ∈ S write ±x ∼ ±y if ⟨x, y⟩ ≠ 0. Then, ∼ is an equivalence relation on S.
Proof. It is clear that ±x ∼ ±x. Also, ±x ∼ ±y if and only if ±y ∼ ±x. We show that the relation ∼ is transitive. Let ±x ∼ ±y and ±y ∼ ±z. If, for a contradiction, x were orthogonal to z, then by Lemma 2.1 we would have y ⊥ x or y ⊥ z, which is in both cases a contradiction. So, x is not orthogonal to z, which means that ±x ∼ ±z.
By Lemma 2.2, we can always decompose S into pairwise orthogonal equivalence classes and analyse these separately. In the following, we assume that S is irreducible, that is, it consists of just one equivalence class. We shall now construct a set T ⊂ S(H) (not P(H)!) such that for every ±s ∈ S we have either s ∈ T or −s ∈ T, but not both. Take an arbitrary ±s0 ∈ S and define T = {x ∈ S(H) : ±x ∈ S, ⟨x, s0⟩ > 0}. Note that s0 ∈ T and ⟨x, s0⟩ > 0 for all x ∈ T. Lemma 2.3. We have ⟨x, y⟩ > 0 for all x, y ∈ T.
Proof. The claim is trivial if x = s0 or y = s0, so let in the following x, y ∈ T\{s0}. By (4), we have
$$\langle p_{\pm s_0}(\pm x), p_{\pm s_0}(\pm y)\rangle = \pm\langle x, y\rangle. \tag{5}$$
On the other hand,
$$|\langle p_{\pm s_0}(\pm x), p_{\pm s_0}(\pm y)\rangle| = \frac{|\langle x, y\rangle - \langle s_0, x\rangle\langle s_0, y\rangle|}{\sqrt{(1 - \langle s_0, x\rangle^2)(1 - \langle s_0, y\rangle^2)}}. \tag{6}$$
Note that $\sqrt{(1 - \langle s_0, x\rangle^2)(1 - \langle s_0, y\rangle^2)} < 1$ and ⟨x, s0⟩⟨y, s0⟩ > 0. Assuming, for a contradiction, that ⟨x, y⟩ ≤ 0, we obtain
$$|\langle p_{\pm s_0}(\pm x), p_{\pm s_0}(\pm y)\rangle| \geq |\langle x, y\rangle| + \langle x, s_0\rangle\langle y, s_0\rangle > |\langle x, y\rangle|,$$
which is a contradiction.
So ⟨x, y⟩ > 0 for all x, y ∈ T, and from equations (5) and (6), with s0 replaced by an arbitrary z ∈ T, we get
$$\langle x, y\rangle = \pm\,\frac{\langle x, y\rangle - \langle x, z\rangle\langle y, z\rangle}{\sqrt{(1 - \langle x, z\rangle^2)(1 - \langle y, z\rangle^2)}}$$
for all x, y, z ∈ T such that x ≠ z and y ≠ z. Consider the function b(x, y) = 1/⟨x, y⟩, x, y ∈ T.
Then, b(x, x) = 1 and b(x, y) > 1 for x ≠ y. The above functional equation takes the form
$$b(x, z)\,b(y, z) - b(x, y) = \pm\sqrt{(b(x, z)^2 - 1)(b(y, z)^2 - 1)}$$
for all x, y, z ∈ T such that x ≠ z and y ≠ z. Now we can introduce the function c(x, y) as the only solution of
$$b(x, y) = \frac{1}{2}\Big(c(x, y) + \frac{1}{c(x, y)}\Big)$$
with c(x, x) = 1 and c(x, y) > 1 for x ≠ y. The other solution is then 1/c(x, y) < 1. The above functional equation takes the form
$$\frac{1}{4}\Big(c(x,z) + \frac{1}{c(x,z)}\Big)\Big(c(y,z) + \frac{1}{c(y,z)}\Big) - \frac{1}{2}\Big(c(x,y) + \frac{1}{c(x,y)}\Big) = \pm\,\frac{1}{4}\Big(c(x,z) - \frac{1}{c(x,z)}\Big)\Big(c(y,z) - \frac{1}{c(y,z)}\Big) \tag{7}$$
for all x, y, z ∈ T such that x ≠ z and y ≠ z. One easily checks that this in fact holds for all x, y, z ∈ T. If the sign on the right-hand side of (7) is positive, one gets after simple algebra
$$\frac{c(y,z)}{c(x,z)} + \frac{c(x,z)}{c(y,z)} = c(x,y) + \frac{1}{c(x,y)},$$
which implies that c(x, z)/c(y, z) = c(x, y) or c(y, z)/c(x, z) = c(x, y).
If the sign on the right-hand side of (7) is negative, then we similarly arrive at
$$c(x,z)\,c(y,z) + \frac{1}{c(x,z)\,c(y,z)} = c(x,y) + \frac{1}{c(x,y)},$$
which implies that c(x, z)c(y, z) = c(x, y) or c(x, z)c(y, z) = 1/c(x, y).
The latter equality is impossible if not all points x, y, z are equal because c(x, y) ≥ 1 with equality only if x = y.
To summarize: for arbitrary x, y, z ∈ T, one of the three numbers c(x, y), c(y, z), c(z, x) equals the product of the remaining two. Introducing finally d(x, y) = log c(x, y), we see that d is a metric on T that satisfies the triangle equality. By Theorem 1.11, (T, d) is either isometric to a subset A of the real line, or to a four-point metric space from Example 1.10. Observing that the scalar product ⟨x, y⟩ is related to d(x, y) via
$$\langle x, y\rangle = \frac{1}{\cosh d(x, y)},$$
we arrive at the conclusion that (in the irreducible case) S is isometric either to a subset of the helix from Example 1.3 or to some four-point set from Example 1.5.
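The passage from the functional equation to the triangle equality can also be recorded compactly in hyperbolic form; the following rewriting (ours) uses the substitution ⟨u, v⟩ = 1/cosh d(u, v):

```latex
% Set D = d(x,y), A = d(x,z), B = d(y,z). Since <u,v> = sech d(u,v) and
% sqrt(1 - sech^2 t) = tanh t for t > 0, the invariance of the scalar
% products under the projection along z reads
\[
\tanh A \,\tanh B \,\operatorname{sech} D
  \;=\; \pm\bigl(\operatorname{sech} D - \operatorname{sech} A \,\operatorname{sech} B\bigr).
\]
% Multiplying both sides by cosh A cosh B cosh D gives
\[
\sinh A \,\sinh B \;=\; \pm\bigl(\cosh A \,\cosh B - \cosh D\bigr),
\]
% that is, cosh D = cosh A cosh B \mp sinh A sinh B = cosh(A \mp B), whence
\[
D = |A - B| \quad\text{or}\quad D = A + B,
\]
% which is precisely the triangle equality for the triple x, y, z.
```

In particular, the largest of the three distances always equals the sum of the other two, as claimed.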

2.2. Proof of Corollary 1.7. Let S := γ(R) = ∪α∈I Sα be the decomposition given in Theorem 1.6. The set γ(R), being a continuous image of a connected set, is connected in the topology induced from P(H). Since the distance between any elements ±u ∈ Sα and ±v ∈ Sβ with α ≠ β equals π/2, the connectedness of γ(R) implies that there is just one set Sα in the decomposition. It cannot be a four-point configuration since γ(R) is infinite by the injectivity of γ. So, γ(R) is isometric to {h(t) : t ∈ A} for some set A ⊂ R. We claim that if a1 < a2 < a3 are real numbers with a1 ∈ A and a3 ∈ A, then a2 ∈ A. Indeed, assuming that a2 ∉ A, we can represent h(A) as a disjoint union of the sets {h(t) : t ∈ A, t < a2} and {h(t) : t ∈ A, t > a2}. Both sets are non-empty, since they contain h(a1) and h(a3), respectively, and both are open in the induced topology of h(A) because h, being a homeomorphism between A and h(A), is an open map. But this is a contradiction, since h(A) is isometric to S, which is connected. The fact that h : R → P(H) is a homeomorphism onto its image follows from the formula
$$\rho(h(s), h(t)) = \arccos\frac{1}{\cosh(t - s)}.$$

2.3. Proof of Corollary 1.8. For every point t ∈ T with Z(t) = 0 a.s. we can define a class Tα = {t} and put a(t) = 0. In the following, let Var Z(t) > 0 for all t ∈ T. We may even assume that Var Z(t) = 1; otherwise, replace Z(t) by Z(t)/√(Var Z(t)). Declare two points t1, t2 ∈ T to be equivalent if Cov(Z(t1), Z(t2)) = ±1. After selecting one representative from each equivalence class and discarding the remaining elements, we may assume that |Cov(Z(s), Z(t))| < 1 for all s ≠ t. (Note that in the statement of the corollary, ψα is not required to be injective and, in fact, we choose it to be constant on equivalence classes.) It is known that the conditional law of the process (Z(t))_{t∈T} given that Z(s0) = 0 is the same as the law of the process (Z(t) − Cov(Z(t), Z(s0))Z(s0))_{t∈T}.
On the other hand, it is the same as the law of the process (ϕ(t; s0)Z(t))_{t∈T}. Comparing the variances, we arrive at
$$1 - \operatorname{Cov}^2(Z(t), Z(s_0)) = \varphi^2(t; s_0),$$
so that ϕ(t; s0) ≠ 0 for t ≠ s0. Standardizing both processes, we obtain
$$\Big(\frac{Z(t) - \operatorname{Cov}(Z(t), Z(s_0))\,Z(s_0)}{\sqrt{1 - \operatorname{Cov}^2(Z(t), Z(s_0))}}\Big)_{t\in T\setminus\{s_0\}} \ \overset{\mathrm{f.d.d.}}{=}\ \Big(\frac{\varphi(t; s_0)}{|\varphi(t; s_0)|}\, Z(t)\Big)_{t\in T\setminus\{s_0\}}. \tag{8}$$
Now let H be the L²-space of the probability space on which the process Z is defined and consider the set S := {±Z(t) : t ∈ T} ⊂ P(H). In the Hilbert space notation, the equality of the covariances of the processes in (8) implies, in view of (3),
$$\Big|\frac{\operatorname{Cov}(Z(x), Z(y)) - \operatorname{Cov}(Z(x), Z(s_0))\operatorname{Cov}(Z(y), Z(s_0))}{\sqrt{(1 - \operatorname{Cov}^2(Z(x), Z(s_0)))(1 - \operatorname{Cov}^2(Z(y), Z(s_0)))}}\Big| = |\operatorname{Cov}(Z(x), Z(y))|$$
for all x, y ∈ T\{s0}, which means that S satisfies the condition of Definition 1.1. Theorem 1.6 yields a decomposition T = ∪α∈I Tα such that the sets {±Z(t) : t ∈ Tα} ⊂ P(H) are mutually orthogonal, which means that the Gaussian processes (Z(t))_{t∈Tα} are mutually independent. Moreover, for each α ∈ I, the set {±Z(t) : t ∈ Tα} is isometric to h(A) for some set A ⊂ R or to a four-point configuration from Example 1.5. For concreteness, let us consider the former case. The existence of the isometry means that there is a bijection ψα : Tα → A such that
$$\arccos|\operatorname{Cov}(Z(s), Z(t))| = \arccos\frac{1}{\cosh(\psi_\alpha(t) - \psi_\alpha(s))}, \qquad s, t \in T_\alpha.$$

Proof of Theorem 1.11
Consider a metric space (E, d) satisfying the triangle equality. It is clear that if E has ≤ 3 points, then it can be embedded into R isometrically.
Let the number of points in E be equal to 4. Without loss of generality, let the diameter of this space be 1 (otherwise, we can rescale the distances). Let 0 and 1 be points in E with d(0, 1) = 1 and denote the remaining two points by X and Y. Let
$$x := d(0, X), \qquad y := d(0, Y), \qquad u := d(X, Y).$$
We have x < 1, y < 1 and u ≤ 1. Indeed, x = 1 would imply that the triangle 01X could satisfy the triangle equality only if X = 1. Similarly, y = 1 is not possible. With the above notation, from the triangles 01X and 01Y we have
$$d(1, X) = 1 - x, \qquad d(1, Y) = 1 - y.$$
We consider the triangles 0XY and 1XY. There are 9 cases; examining them one by one completes the proof in the case of 4 points.
Let now E be a metric space consisting of exactly 5 points. Again assume that the diameter is 1 and that the points are 0, 1, X, Y, Z with d(0, 1) = 1.
Consider the quadruple {0, 1, X, Y}. It is either "classical" (that is, it can be isometrically embedded into R) or non-classical (that is, it is isometric to a space from Example 1.10). Our aim is to show that the latter case cannot occur, so assume, for a contradiction, that {0, 1, X, Y} is non-classical. Case 1: The quadruple {0, 1, X, Z} is classical. We may then identify X and Z with two points x and z in the interval (0, 1). (Recall that the diameter of E is 1.) At the moment, we do not know which of the numbers, x or z, is larger. So, we have the points 0, 1, x, z on the real line and one additional point Y outside, with d(Y, 0) = 1 − x, d(Y, x) = 1, d(Y, z) =: u, d(Y, 1) = x.
The following triangles satisfy the triangle equality: