Bridges of quadratic harnesses

Quadratic harnesses are typically non-homogeneous Markov processes with time-dependent state space. Using an appropriately defined affine transformation, we show that all bridges of a given quadratic harness can be transformed into other standard quadratic harnesses. Conversely, each such bridge is an affine transformation of a standard quadratic harness. We describe quadratic harnesses that correspond to bridges of some Lévy processes, and we determine all quadratic harnesses that may arise from stitching together a pair of q-Meixner processes.


Introduction and main results
1.1. Quadratic harness property. Throughout the paper, F = (F_{s,t}) is a family of σ-fields with s < t from a nonempty open (generalized) interval T = (T_0, T_1) ⊂ (−∞, ∞) such that F_{s,t} ⊂ F_{r,u} for r, s, t, u ∈ T with r ≤ s ≤ t ≤ u. We include T_0 = −∞ or T_1 = ∞ among the possible choices for the end-points of T.
An integrable stochastic process X = {X_t : t ∈ T} is called a harness [14,18,20] on T with respect to F if X_u is F_{s,t}-measurable whenever u ≥ t or u ≤ s, and for any s, t, u ∈ T with s < t < u,

(1.1) E(X_t | F_{s,u}) = ((u − t)X_s + (t − s)X_u)/(u − s).

All integrable Lévy processes are harnesses with respect to their natural filtration ([15, (2.8)]); additional examples appear in the references on quadratic harnesses mentioned after Definition 1.1. For a square-integrable process, a natural second-order extension of (1.1) is the requirement that Var(X_t | F_{s,u}) is a quadratic function of X_s, X_u. It turns out that this assumption is much more restrictive than (1.1) alone: Wesołowski [19] determined that there are only five such Lévy processes. Under certain additional assumptions, [3, Theorem 2.2] asserts that there exist numerical constants η, θ ∈ R, σ, τ ≥ 0 and γ ≤ 1 + 2√(στ) such that for all s < t < u,

(1.2) Var(X_t | F_{s,u}) = F_{t,s,u} (1 + η x + θ y + σ x^2 + τ y^2 − (1 − γ) x y),

where x = (uX_s − sX_u)/(u − s), y = (X_u − X_s)/(u − s), and

(1.3) F_{t,s,u} = (u − t)(t − s)/(u(1 + σ s) + τ − γ s).

In this paper we adopt these formulas as the definition, but following [1, Corollary 5.1], for finite intervals T we allow negative values of the parameters σ, τ, and we allow any γ ≥ −1.
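As a sanity check (an illustration, not part of the paper's argument), the harness property — the conditional mean interpolating linearly between X_s and X_u — and the Brownian conditional variance (u − t)(t − s)/(u − s) can be verified exactly for the Wiener process via Gaussian conditioning:

```python
import numpy as np

# Gaussian conditioning: for jointly Gaussian (W_t, W_s, W_u) with
# Cov(W_a, W_b) = min(a, b), the conditional mean of W_t given (W_s, W_u)
# is beta^T (W_s, W_u) with beta = C^{-1} c, and the conditional variance
# is Var(W_t) - c^T C^{-1} c.
s, t, u = 0.3, 0.7, 1.2
C = np.array([[min(s, s), min(s, u)],
              [min(u, s), min(u, u)]])      # covariance of (W_s, W_u)
c = np.array([min(t, s), min(t, u)])        # Cov(W_t, (W_s, W_u))
beta = np.linalg.solve(C, c)
cond_var = min(t, t) - c @ beta

# Harness property: weights (u-t)/(u-s) and (t-s)/(u-s)
assert np.allclose(beta, [(u - t)/(u - s), (t - s)/(u - s)])
# Conditional variance with eta = theta = sigma = tau = 0, gamma = 1:
# F_{t,s,u} = (u-t)(t-s)/(u-s)
assert np.isclose(cond_var, (u - t)*(t - s)/(u - s))
print("Brownian motion: harness property and Var =", cond_var)
```

Here the Wiener process plays the role of the simplest quadratic harness, with all parameters η, θ, σ, τ vanishing and γ = 1.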
Definition 1.1. Let T = (T_0, T_1) ⊂ (0, ∞). We will say that a square-integrable stochastic process X = (X_t)_{t∈T} is a quadratic harness on T with respect to (F_{r,s}) if X_u is F_{s,t}-measurable whenever u ≥ t or u ≤ s, and (1.1)-(1.2) hold for all s < t < u in T; we then write X ∈ QH(η, θ; σ, τ; γ).
Examples of quadratic harnesses on (0, ∞) are the five Lévy processes with quadratic conditional variances from [19]. Other examples include the classical versions of certain free Lévy processes ([5, Theorem 4.3]), classical versions of the q-Brownian motion ([5, Theorem 4.1]), the bi-Poisson process [4,6,7], and Markov processes with Askey-Wilson laws ([1, Theorem 1.1]).
Remark 1.1 (Caution). While the use of a general time interval (T_0, T_1) is convenient for the purpose of studying transformations, the reader should be aware that most of the fundamental results (uniqueness, integrability, orthogonal polynomial martingales) valid for (0, ∞) do not extend automatically to quadratic harnesses on finite intervals, and may require additional assumptions on the interval and on the process. For example, from the proof of [3, Theorem 4.1] one can deduce that the moments of a quadratic harness on (0, T) with −1 < γ ≤ 1 − 2√(στ) are determined uniquely if the conditional moments of X_t and X_t^2 with respect to the past σ-field F_s for s < t are linear and quadratic functions of X_s, respectively. This result fails for intervals bounded away from 0, see Example 1.2, or when the assumption on one-sided conditioning is dropped, see Example 1.1.

1.2. Results. Our main result shows how to transform a generic harness with quadratic conditional variance and a product covariance into a quadratic harness.
Theorem 1.1. Let X be a harness (1.1) with respect to the family F on an interval (T_0, T_1), and let a, b, c, d be real constants such that ad − bc > 0 and (at + b)(ct + d) > 0 on (T_0, T_1). Suppose that the conditional variance of X is of quadratic form with non-random F_{t,s,u}, and that the positivity condition (1.7) holds. Let ψ(t) = (dt − b)/(a − ct). Then the stochastic process Y defined by (1.8) is a quadratic harness in QH(η′, θ′; σ′, τ′; γ′) on the interval ((aT_0 + b)/(cT_0 + d), (aT_1 + b)/(cT_1 + d)) ⊂ (0, ∞), with parameters given by (1.9)-(1.13).
Theorem 1.1 was motivated by a re-parametrization of the orthogonality measures of the related families of orthogonal polynomials when σ = 0, discovered by R. Szwarc; this observation is presented in an unpublished manuscript [2]. A version of Theorem 1.1 is implicit in the construction of quadratic harnesses from Markov processes based on the Askey-Wilson integral, see [1, Formula (2.28) and Section 6.2].
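The map ψ(t) = (dt − b)/(a − ct) in Theorem 1.1 is precisely the inverse of the Möbius transform ϕ(t) = (at + b)/(ct + d), and ad − bc > 0 makes ϕ increasing; a quick exact-arithmetic check with arbitrarily chosen a, b, c, d (an illustration, not from the paper):

```python
from fractions import Fraction as F

a, b, c, d = F(3), F(1), F(1), F(2)   # ad - bc = 5 > 0, chosen for illustration
phi = lambda t: (a*t + b)/(c*t + d)   # Mobius transform of Theorem 1.1
psi = lambda t: (d*t - b)/(a - c*t)   # candidate inverse

for t in (F(1, 3), F(1), F(7, 5)):
    assert phi(psi(t)) == t and psi(phi(t)) == t   # psi = phi^{-1}

# ad - bc > 0 makes phi strictly increasing on its domain:
ts = [F(k, 10) for k in range(1, 20)]
vals = [phi(t) for t in ts]
assert vals == sorted(vals)
print("psi is the inverse of phi; phi is increasing since ad - bc > 0")
```

This is why the transformed process lives on the image interval ((aT_0 + b)/(cT_0 + d), (aT_1 + b)/(cT_1 + d)).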
A related transformation Y_t = tX_{1/t} produces Y ∈ QH(θ, η; τ, σ; γ), i.e., the entries within the pairs (η, θ) and (σ, τ) are swapped. In fact, these two elementary facts can be used to re-write formulas (1.9)-(1.13) in other "equivalent forms". We also remark that the transformation used in Theorem 1.1 is reversible, so X can be "represented" in terms of the quadratic harness Y ∈ QH(η′, θ′; σ′, τ′; γ′) as given in (1.14). Next, we apply Theorem 1.1 to analyze and classify processes that arise through double-sided conditioning. For a (random or deterministic) real function X of a real parameter t, denote

(1.15) Δ_{s,u}(X) = ((uX_s − sX_u)/(u − s), (X_u − X_s)/(u − s)).

In the sequel, if X is clear from the context, to shorten the notation we will sometimes write Δ_{s,u} instead of Δ_{s,u}(X).
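For a quadratic harness normalized so that Cov(X_s, X_t) = s ∧ t, the time inversion Y_t = tX_{1/t} preserves this covariance — consistent with the fact that it merely swaps the pairs (η, θ) and (σ, τ). A one-line exact check (illustrative, not from the paper):

```python
from fractions import Fraction as F

# Cov(X_s, X_t) = min(s, t) for the standard normalization, so under
# Y_t = t * X_{1/t}:  Cov(Y_s, Y_t) = s*t*min(1/s, 1/t) = min(s, t).
def cov_Y(s, t):
    return s * t * min(F(1, 1)/s, F(1, 1)/t)

for s, t in [(F(1, 2), F(3)), (F(2), F(2)), (F(5), F(1, 4))]:
    assert cov_Y(s, t) == min(s, t)
print("time inversion preserves Cov = min(s, t)")
```

So the normalization of a quadratic harness is invariant under time inversion, while the roles of the parameter pairs are exchanged.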
The right-hand side of (1.2) is a quadratic polynomial in two real variables, which we will write as

(1.16) K(a, b) = 1 + ηa + θb + σa^2 + τb^2 + (γ − 1)ab,

so that (1.2) reads Var(X_t | F_{s,u}) = F_{t,s,u} K(Δ_{s,u}(X)). It will be convenient to write K(a, b) instead of K([a, b]^T).
Theorem 1.2 describes the process obtained by conditioning a quadratic harness Z at two times R < V; its hypothesis is that the relevant quadratic expression is non-zero with probability one. Define the conditioned process X accordingly for t > 0. Then for 0 < s < t < ∞ a conditional version of (1.4) holds, and condition (1.2) holds for s < t < u, with F_{R,V}-measurable parameters and a non-random constant F_{t,s,u}. (This constant is then given by formula (1.3) with the conditional parameters replacing σ, τ, γ.) We now specialize this result to Markov processes. Notice that quadratic harnesses have càdlàg versions, so regular versions of conditional distributions exist. If Z is a quadratic harness with respect to {F_{s,t}} on (T_0, T_1) and R < V are in (T_0, T_1), then conditionally on Z(R), Z(V), the process still has quadratic conditional variances with respect to F_{s,u} when s, u ∈ (R, V), and of course the conditioned process takes deterministic values at the endpoints R and V. Furthermore, the conditional variance (1.2) is positive with probability one. Thus there is a set of probability one of pairs z_R ∈ supp(Z(R)), z_V ∈ supp(Z(V)) such that the laws (1.24) are well defined for all t ∈ (R, V), and there are Borel sets U_s of π_s-measure one such that the transition probabilities (1.25) are well defined for all s < t in (R, V) and all x ∈ U_s. It is known that (1.24) and (1.25) determine a Markov process if Z is Markov or, more generally, if Z is a so-called reciprocal process, see [17, Theorem 4.1].
Remark 1.2. Analogous results with essentially the same proof hold for one-sided conditioning on (0, V) and on (R, ∞). The formulas for the parameters in these two cases correspond to Theorem 1.2 after taking the limit as R → 0 or as V → ∞, respectively, with the natural conventions for the limits. More specifically, suppose Z is a quadratic harness on (0, ∞) in QH(η, θ; σ, τ; γ).
(i) Let X = {X(t) : 0 ≤ t ≤ V} be the conditional process arising by conditioning Z with respect to Z_V = z_V. Assume that the corresponding positivity condition holds. Then X is a quadratic harness with parameters given by (1.17) and the accompanying formulas.
(ii) Let Y = {Y(t) : R ≤ t < ∞} be the conditional process arising by conditioning Z with respect to Z_R = z_R. Assume that the corresponding positivity condition holds. Then Y is a quadratic harness with parameters given by (1.21)-(1.23).
Conditioning allows us to identify new quadratic harnesses, as seen from the following.
Theorem 1.3. For every σ ∈ (0, 1) there exists a quadratic harness X with σ_X = τ_X = σ, θ_X = −η_X, and γ_X = 1 − 2σ_X.
The range of parameters in Theorem 1.3 corresponds formally to q = 1 in [1], and could perhaps be obtained from the processes in that paper by a limiting procedure.
Proofs of Theorems

Example 1.1. Let (W_t) be the Wiener process, let v > 0, and let ξ be a centered random variable with variance 1, independent of W; put X_t = W_t + v t ξ. Then E(X_t) = 0 and Cov(X_s, X_t) = s(1 + v^2 t) for s ≤ t. Furthermore, X_t is a harness with respect to its natural σ-fields. So from Theorem 1.1 (or by direct calculation) we see that the transformed process is a quadratic harness on (0, 1/v^2) with respect to its natural σ-fields.
Example 1.2. This example is a time-inversion of Example 1.1. Let (W_t) be the Wiener process and ξ a centered, square-integrable random variable independent of W. It is easy to see that X_t and X_t^2 − t are martingales with respect to the past σ-fields. Next, we give a simple example of a quadratic harness with γ > 1 and στ > 1; such examples are interesting because most of the general theory developed in [3] does not apply.
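The covariance claim of Example 1.1 can be checked by simulation. The construction below, X_t = W_t + v t ξ with ξ centered, unit-variance, and independent of W, is our reading of the example (the natural choice reproducing Cov(X_s, X_t) = s(1 + v^2 t)); ξ is taken Rademacher to emphasize that its law is otherwise arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, v, s, t = 400_000, 0.5, 0.5, 1.0

W_s = rng.normal(0.0, np.sqrt(s), n)            # W at time s
W_t = W_s + rng.normal(0.0, np.sqrt(t - s), n)  # add an independent increment
xi = rng.choice([-1.0, 1.0], n)                 # centered, variance 1

X_s, X_t = W_s + v*s*xi, W_t + v*t*xi
emp = np.mean(X_s * X_t)            # E(X) = 0, so this estimates the covariance
theory = s * (1 + v**2 * t)         # = min(s, t) + v^2 * s * t * Var(xi)
assert abs(emp - theory) < 0.02
print(f"empirical {emp:.4f} vs theoretical {theory:.4f}")
```

The decomposition in the comment shows where the product covariance comes from: the Brownian part contributes s ∧ t and the independent random drift contributes v^2 st.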
Of course, the distribution of ξ is arbitrary, so the moments of Z_t are not determined uniquely and may fail to exist.

Matrix notation
For calculations it will be convenient to parametrize time as a subset of the projective plane, i.e., to use the column vector t = [t, 1]^T. We rewrite (1.1) in the vector form (2.1), where ⟨a, b⟩ = a^T b and the components of Δ_{s,u}(X) are defined by (1.15). It follows from (1.1) that admissible expectations of a harness X are affine in t, i.e.,

(2.2) E(X_t) = ⟨t, µ⟩ for some µ ∈ R^2.

Moreover, if X is a square-integrable harness, then by [3, Proposition 2.1] the admissible covariances are of the form

(2.4) Cov(X_s, X_t) = ⟨s, Σt⟩, s ≤ t,

where Σ = [c_0, c_1; c_2, c_3] and s = [s, 1]^T.
Throughout this paper, the letters s, t, u ∈ T are reserved to denote time, and the vectors s, t, and u = [u, 1]^T retain this special meaning when used with subscripts or primes. We also use the convention that s ≤ t ≤ u.
Note that under our convention s ≤ t, so Σ is not a symmetric matrix; for example, the covariance s ∧ t is represented by the matrix Σ = [0, 1; 0, 0].
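The claim that Σ = [0, 1; 0, 0] represents s ∧ t is a one-line computation: for s ≤ t, ⟨s, Σt⟩ = [s, 1] Σ [t, 1]^T = s. A quick check (illustration only):

```python
import numpy as np

Sigma = np.array([[0.0, 1.0],
                  [0.0, 0.0]])

def cov(s, t):
    # <s, Sigma t> with the convention s <= t and s = [s, 1]^T
    sv, tv = np.array([s, 1.0]), np.array([t, 1.0])
    return sv @ Sigma @ tv

for s, t in [(0.2, 0.9), (1.0, 1.0), (0.5, 3.0)]:
    assert np.isclose(cov(s, t), min(s, t))
print("Sigma = [0,1;0,0] encodes Cov(X_s, X_t) = s for s <= t")
```

Asymmetry of Σ is harmless precisely because the bilinear form is only ever evaluated with the smaller time on the left.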
Formula (1.2) in matrix form becomes

(2.5) Var(X_t | F_{s,u}) = F_{t,s,u} (1 + ⟨θ, Δ_{s,u}⟩ + ⟨Δ_{s,u}, ΓΔ_{s,u}⟩),

where θ = [η, θ]^T and the entries of Γ satisfy Γ_{1,1} = σ, Γ_{2,2} = τ, Γ_{1,2} + Γ_{2,1} = γ − 1. Here η, θ, σ, τ, γ are constants independent of s, t, u.
Of course, the matrix Γ is determined only up to the value of Γ_{1,2} + Γ_{2,1}. The usual choice of a symmetric Γ is in fact inconvenient, see Proposition 3.2. The choice made in (2.6) matches the notation used in our previous papers: after substituting q for γ, the resulting parametrization of the conditional variance is identical to [3, (2.14)].
The non-random constant F_{t,s,u} is determined uniquely by taking the average of both sides of (2.5). According to [3, (2.15)], with the choice of Γ as in (2.6), formula (1.3) holds. We re-write formula (1.3) in matrix notation using the special matrix

J = [0, 1; −1, 0].

It is easy to see that J^2 = −I and J^T = −J. For future reference we state also two less obvious properties: for A ∈ GL_2(R),

(2.7) AJA^T = det(A) J,
(2.8) JA^TJ^T = det(A) A^{-1}.

Formula (1.3) can now be written in the matrix form (2.9); this formula makes sense for any 2 × 2 matrix Γ as long as the denominator is non-zero. Finally, we note for future reference that the conditional covariance for quadratic harnesses also takes a simple form: for s < t_1 < t_2 < u,

(2.10) Cov(X_{t_1}, X_{t_2} | F_{s,u}) = ((u − t_2)(t_1 − s))/(u(1 + σs) + τ − γs) K(Δ_{s,u}),

where K(a) = 1 + ⟨θ, a⟩ + ⟨a, Γa⟩ is the quadratic polynomial from (1.16) and (2.5).
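The stated properties pin J down up to sign; taking J = [0, 1; −1, 0], the identities J^2 = −I and J^T = −J, the GL_2 identity AJA^T = det(A)J, and the convenient relation ⟨u, Jt⟩ = u − t (with t = [t, 1]^T) can all be verified directly (a numerical illustration, not from the paper):

```python
import numpy as np

J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
I = np.eye(2)

assert np.allclose(J @ J, -I)          # J^2 = -I
assert np.allclose(J.T, -J)            # J^T = -J

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2))            # generic A in GL_2(R)
assert np.allclose(A @ J @ A.T, np.linalg.det(A) * J)   # A J A^T = det(A) J

# with t = [t, 1]^T:  <u, J t> = u - t, so (u-t)(t-s) = <u, Jt><t, Js>
t, u = np.array([0.4, 1.0]), np.array([1.5, 1.0])
assert np.isclose(u @ J @ t, u[0] - t[0])
print("J identities verified")
```

The last relation is what lets the numerator (u − t)(t − s) of (1.3) be written entirely in terms of the vectors s, t, u and J.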
A quick way to see this is to notice that (1.1) implies

Deterministic time and space transformation
With a non-degenerate affine function f : R^2 → R^2, written in matrix notation via a matrix A = [a, b; c, d] and a shift m ∈ R^2, we associate the Möbius transform ϕ(t) = (at + b)/(ct + d) generated by A. If X is a stochastic process on an open interval T ⊂ R and T lies in the range of ϕ, we define a deterministic transformation X^f of the stochastic process X as the process

X^f(t) = (ct + d) X_{ϕ(t)} + ⟨t, m⟩.

We note that ϕ is increasing on its domain if det(A) > 0 and decreasing otherwise. Our interest in this transformation comes from the fact that special cases appeared as ad hoc tricks in constructions of quadratic harnesses in [6,1,16].
A calculation verifies that, as long as the time domains of the processes match, compositions of such transformations are again transformations of the same type. This allows us to build more complicated transformations in simple steps, and gives us the flexibility to consider either Y = X^f or X = Y^{f^{-1}} as needed.
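The composition rule at the level of the time change is the classical correspondence between Möbius transforms and matrix multiplication; a quick exact-arithmetic illustration (matrices chosen arbitrarily, not from the paper):

```python
from fractions import Fraction as F

def mobius(A):
    (a, b), (c, d) = A
    return lambda t: (a*t + b)/(c*t + d)

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[F(2), F(1)], [F(1), F(3)]]   # det = 5
B = [[F(1), F(2)], [F(0), F(1)]]   # det = 1
for t in (F(0), F(1, 2), F(4)):
    # applying B's transform first, then A's, equals the transform of A*B
    assert mobius(A)(mobius(B)(t)) == mobius(matmul(A, B))(t)
print("Mobius transforms compose via matrix multiplication")
```

In particular, inverting a transformation amounts to inverting its matrix, which is why Y = X^f and X = Y^{f^{-1}} can be used interchangeably.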
For non-degenerate affine f with Möbius transform ϕ : (T_0, T_1) → (T′_0, T′_1) and s < t in T, the transformed σ-fields are F^f_{s,t} := F_{ϕ(s),ϕ(t)} (with the endpoints swapped when ϕ is decreasing). It is clear that if X has linear regressions and quadratic conditional variances with respect to F_{s,u}, then X^f has linear regressions and quadratic conditional variances with respect to F^f_{s,u}. The following technical result describes how the parameters transform in a setting slightly more general than Theorem 1.1.
Proposition 3.1. Suppose X is a harness whose conditional variance has the form

(3.4) Var(X_t | F_{s,u}) = F_{t,s,u} (χ + ⟨θ, Δ_{s,u}⟩ + ⟨Δ_{s,u}, ΓΔ_{s,u}⟩),

with non-random F_{t,s,u}, χ ∈ R, θ ∈ R^2, and an arbitrary 2 × 2 matrix Γ. Let f be a non-degenerate affine function with matrix A = [a, b; c, d], corresponding Möbius transform ϕ and shift m. If ϕ is well defined on the entire interval (T_0, T_1), then X^f satisfies (1.1) and (3.4) with parameters given by (3.5)-(3.8).
We remark that in the most interesting case of a "product covariance" (3.10), transformation (3.6) preserves this product form. In fact, in this case Σ = [εφ, εψ; δφ, δψ], but if we collect the coefficients of the covariance (3.10) into another matrix Θ, then a calculation based on (3.6) shows that for det A > 0 the covariance of X^f corresponds to the matrix ΘA.
We postpone the proof of Proposition 3.1 until Section 3.1, so that we can first clarify the role of the non-random constant F_{t,s,u}. The main point is that in the non-degenerate case with c_1 > c_2, this constant is determined uniquely by taking the average of both sides of (3.4). Furthermore, F_{t,s,u} is often given by formula (2.9), i.e., (1.3).
Proposition 3.2. Suppose a harness X has mean (2.2) and non-degenerate covariance (2.4) with c_1 > c_2. If X has quadratic conditional variance (3.4) and the off-diagonal entries of the matrix Γ are chosen so that (3.11) holds, then F_{t,s,u} is given by formula (2.9).
Proof of Proposition 3.1. Throughout the proof we write t′ = ϕ(t). If ϕ is increasing, by (2.1) and the definition of X^f we compute E(X^f(t) | F^f_{s,u}); from the matrix form of (1.15), noting that

(3.14) (cs + d)s′ = As,

and using (2.8) and (3.12), we get that condition (1.1) holds, so X^f is a harness. Similarly, one can verify that (1.1) holds when ϕ is a decreasing function. We use (3.14) to compute the mean of X^f:

E(X^f(t)) = (ct + d)⟨t′, µ⟩ + ⟨t, m⟩ = ⟨At, µ⟩ + ⟨t, m⟩ = ⟨t, m + A^Tµ⟩,

and (3.5) follows.
To find the covariance we again use (3.14) and a direct computation; thus (3.6) follows.

So (3.17) rewrites as
Since the last term is invariant under transposition, we get (3.8) and (3.7). The case det(A) > 0 is handled similarly and the proof is omitted.
The proof of Proposition 3.2 is based on the formula for the covariance matrix of the vector Δ_{s,u}.

Proof. From (3.13) we get
Note that since s^TΣu = u^TΣ^Ts and su^T − us^T = (u − s)J^T, the numerator can be written accordingly. Next, we note that the denominator transforms in the same way (this follows from a longer calculation based on the formulas of Proposition 3.1). Therefore, the transformation formulas preserve (3.11).
Next, we show that (3.11) implies (2.9). This will be accomplished by computing the averages of both sides of (3.4).
Moreover, while the quadratic part of K does not determine the entries of Γ uniquely, it is natural to choose the unique, possibly non-symmetric, matrix Γ such that tr(ΓΣ^T) + K(µ) = 0, i.e., such that (3.11) holds. Then F_{t,s,u} is given by formula (2.9), which is just the matrix form of (1.3).
Remark 3.1. It is interesting to note that in the non-degenerate product case (3.10) with det Θ > 0, formula (3.19) takes a particularly simple form. Remark 3.2. If c_3 ≥ 0, c_1 > c_2, and c_0c_3 > c_2^2, then the right-hand side of (2.4) indeed defines a positive definite function on T = (0, ∞). To verify this, let a(s, t) = ⟨s, Σt⟩ for s < t and c(s, t) = a(s ∧ t, s ∨ t). For s_0 = 0 < s_1 < · · · < s_n, one computes the corresponding Gram matrix and checks that (c(s_i, s_j))_{1≤i,j≤n} is positive definite as a sub-matrix of the above.
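Remark 3.2 can be illustrated numerically. Writing out ⟨s, Σt⟩ for s ≤ t gives c(s, t) = c_0 st + c_1 (s∧t) + c_2 (s∨t) + c_3; with hypothetical coefficients satisfying the three conditions, the resulting Gram matrices are positive semidefinite:

```python
import numpy as np

c0, c1, c2, c3 = 1.0, 2.0, 0.5, 1.0   # c3 >= 0, c1 > c2, c0*c3 > c2**2

def cov(s, t):
    lo, hi = min(s, t), max(s, t)
    # <s, Sigma t> for s <= t, with Sigma = [c0, c1; c2, c3]
    return c0*lo*hi + c1*lo + c2*hi + c3

times = np.linspace(0.1, 5.0, 25)
gram = np.array([[cov(s, t) for t in times] for s in times])
assert np.linalg.eigvalsh(gram).min() > -1e-9   # positive semidefinite
print("covariance kernel is positive semidefinite on these times")
```

Analytically this kernel decomposes as (s + 1/2)(t + 1/2) + 1.5 min(s, t) + 3/4, a sum of three positive semidefinite kernels, which explains the numerical outcome.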
From the proof of Proposition 3.2 we can also read off that processes with c_1 = c_2 are degenerate in the sense that X_t is a linear combination of X_s, X_u.
Proof of Theorem 1.1. From the transformation formulas we get, for the transformed process, µ = 0, Σ = [0, 1; 0, 0], and χ = χ + αη + θβ + σα^2 + τβ^2 + 2ραβ > 0 by (1.7); we also compute the remaining parameters from the transformation formulas. The quadratic polynomial K remains unchanged if we replace Γ by Γ + cJ, c ∈ R, since ⟨a, Ja⟩ = 0. Rewriting the quadratic polynomial in the standard form (1.16), we get the parameters as claimed.
Proof of Theorem 1.2′. To prove that V(1 + Rσ) + τ − Rγ > 0, we note that V ↦ V(1 + Rσ) + τ − Rγ is a continuous function which by Proposition 3.2 cannot cross zero on (R, ∞). The remainder of the proof consists of computing the parameters of X so that we can apply Theorem 1.1. First, from (2.1) we read off the mean of X. Finally, we observe that the conditional variance of X is given by the same formula as the conditional variance of Z. Therefore we can apply Theorem 1.1 with a suitable choice of a, b, c, d. (Other choices are also possible and lead to "equivalent" quadratic harnesses as in (1.14).) Under the above choice of a, b, c, d, formula (1.8) gives (1.17). Since χ = K(∆_{RV}, ∆_{RV}) > 0, assumption (1.7) holds, and the parameters of the resulting quadratic harness are as claimed.

Conditioning of quadratic harnesses
In this section we show how to apply the previous results to analyze which quadratic harnesses can be constructed from already known quadratic harnesses by conditioning. Our basic building blocks are the q-Meixner processes: quadratic harnesses with η = σ = 0 and γ = q ∈ [−1, 1], see [5]. In particular, the five classical Lévy processes mentioned in the introduction, sometimes called Meixner processes, correspond to γ = 1.
Our main interest in such constructions stems from the fact that the constructions from the Askey-Wilson laws [1] yield only γ ∈ (−1, 1 − 2√(στ)); in particular, "classical" quadratic harnesses with the boundary value γ = 1 − 2√(στ) that appear in [3, Proposition 4.4] need additional work and have been constructed only for particular choices of the parameters. Examples 4.1 and 4.2 present two new quadratic harnesses with γ = 1 − 2√(στ), and in Remark 4.1 we mention two more cases.

4.1. Conditioning of Meixner processes. We can use Theorem 1.2′ to recognize which quadratic harnesses could arise by conditioning from quadratic harnesses that are Lévy processes.
Proposition 4.1. Fix τ ≥ 0 and θ ∈ R. Suppose Z ∈ QH(0, θ; 0, τ; 1), i.e., Z is a Meixner process. If X is a conditional bridge of Z, then its parameters satisfy (4.1)-(4.3).
Proof. From Theorem 1.2′ we read off the transformation that leads to these parameters; a further transformation (1.14) leads to (4.1) and to the parameters as claimed.
Example 4.1 (Dirichlet process). For any σ_0, τ_0 > 0 with σ_0τ_0 < 1 there exists a quadratic harness Y (namely, a transformed Dirichlet process) with parameters σ_Y = σ_0, τ_Y = τ_0 and γ_Y = 1 − 2√(σ_0τ_0). Indeed, consider the conditional harnesses obtained from a gamma process (G_t)_{t>0}. The gamma process (G_t) is a non-negative two-parameter Lévy process with parameters α, β > 0; the density of G_t is

f_t(x) = β^{αt} x^{αt−1} e^{−βx}/Γ(αt), x > 0.

As a Lévy process, (G_t) is a harness with mean E(G_t) = tα/β and variance Var(G_t) = tα/β^2. It is also known (see (4.6) below) that its conditional variance is quadratic. The normalized process is then a quadratic harness in QH(0, 2/α; 0, 1/α^2; 1), which by the further transformation (1.14) can be transformed into an element of QH(0, 2; 0, 1; 1). Instead of considering a conditional process of (G_t), we therefore consider a conditional process of Z ∈ QH(0, 2; 0, 1; 1) and apply Proposition 4.1, choosing the endpoints appropriately. Transformation (1.14) with a^2 = σ_0/τ_0 then gives the required parameters; by (4.3), here ∆ := ∆_{RV} may take any value ≥ 0.
Since conditional processes of the gamma process are Dirichlet processes, the same conclusion can be obtained directly by a fairly natural reparametrization (4.7), without invoking explicitly any of the transformations. Let a_1, ..., a_n, a_{n+1} be positive numbers. The Dirichlet distribution D_n(a_1, ..., a_n, a_{n+1}) is defined through its density on the simplex {x ∈ (0, 1)^n : x_1 + ··· + x_n < 1}, proportional to x_1^{a_1−1} ··· x_n^{a_n−1}(1 − x_1 − ··· − x_n)^{a_{n+1}−1}. A process X = (X_t)_{t∈[0,V]} is called a Dirichlet process if there exists a finite nonzero measure µ on [0, V] such that for any n and any 0 ≤ t_1 < ... < t_n ≤ V, the distribution of the vector of increments (X_{t_1}, X_{t_2} − X_{t_1}, ..., X_{t_n} − X_{t_{n−1}}) is Dirichlet D_n(µ([0, t_1]), µ((t_1, t_2]), ..., µ((t_{n−1}, t_n]), µ((t_n, V])). This is one of the basic objects of non-parametric Bayesian statistics; see [10,12]. Let µ = cλ, where λ is the Lebesgue measure on [0, V] and c = 1/α > 0 is a number. Recall that the beta distribution B_I(a, b) is defined by the density x^{a−1}(1 − x)^{b−1}/B(a, b) on (0, 1), and if X ∼ B_I(a, b), then

(4.5) E(X) = a/(a + b), Var(X) = ab/((a + b)^2(a + b + 1)).
Since X_t has the beta distribution B_I(ct, c(V − t)), formulas (4.5) give E(X_t) = t/V and Var(X_t) = t(V − t)/(V^2(cV + 1)).
Note that to compute E(X_sX_t) it is convenient to use the classical fact that X_s/X_t and X_t are independent, and X_s/X_t is a beta B_I(cs, c(t − s)) random variable. Note also that X is a Markov process with transition distribution defined by the fact that (X_t − X_s)/(1 − X_s) and X_s are independent, and (X_t − X_s)/(1 − X_s) is beta B_I(c(t − s), c(V − t)). It is also known that (X_t − X_s)/(X_u − X_s) is beta B_I(c(t − s), c(u − t)), and that (X_t − X_s)/(X_u − X_s) and (X_s, X_u) are independent. Therefore, from (4.5) we get

E(X_t | F_{s,u}) = X_s + (X_u − X_s)(t − s)/(u − s),

and thus X is a harness. The second formula in (4.5) gives

(4.6) Var(X_t | F_{s,u}) = (X_u − X_s)^2 (t − s)(u − t)/((u − s)^2(c(u − s) + 1)).
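The beta moment formulas quoted as (4.5), and the harness mean they imply for the Dirichlet process, can be cross-checked numerically (an illustration; the beta parameters of the bridge ratio are as read from the text):

```python
from math import gamma

def beta_fn(a, b):
    return gamma(a) * gamma(b) / gamma(a + b)

def beta_moment(a, b, k):
    # E X^k for X ~ B_I(a, b), from the density x^{a-1}(1-x)^{b-1}/B(a,b)
    return beta_fn(a + k, b) / beta_fn(a, b)

a, b = 2.5, 4.0
m1, m2 = beta_moment(a, b, 1), beta_moment(a, b, 2)
assert abs(m1 - a/(a + b)) < 1e-12
assert abs(m2 - m1**2 - a*b/((a + b)**2*(a + b + 1))) < 1e-12

# Harness mean: (X_t - X_s)/(X_u - X_s) ~ B_I(c(t-s), c(u-t)), independent
# of (X_s, X_u), so E(X_t | F_{s,u}) = X_s + (X_u - X_s)(t-s)/(u-s).
c, s, t, u = 2.0, 0.3, 0.7, 1.0
ratio_mean = c*(t - s) / (c*(t - s) + c*(u - t))
assert abs(ratio_mean - (t - s)/(u - s)) < 1e-12
print("beta moments match (4.5); the harness mean follows")
```

Note that c cancels from the conditional mean, while it survives in the conditional variance (4.6) through the factor c(u − s) + 1.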
Define now the rescaled process Y. It is elementary to check that (Y_t)_{t∈[0,∞)} is a quadratic harness whose parameters can be computed directly. (Note that these agree with the answers deduced from Proposition 4.1.) On the other hand, it can easily be seen that the process X is a bridge of the gamma process (G_t)_{t∈[0,∞)} governed by the gamma distribution with shape parameter 1/c and scale parameter equal to 1. More precisely, the process X is identical in distribution to the gamma bridge [13, Definition 2]; see also [11].
Example 4.2 (Binomial process). Indeed, consider conditional harnesses obtained as bridges of a Poisson process. A Poisson process N_t with parameter λ > 0 is a harness with mean E(N_t) = λt, variance Var(N_t) = λt, and conditional variance with respect to the natural σ-fields given by

Var(N_t | F_{s,u}) = (N_u − N_s)(t − s)(u − t)/(u − s)^2.

Instead of considering a conditional process of (N_t)_{t>0}, we therefore consider a conditional process of Z and apply Proposition 4.1. From (4.2) we see that γ_X = 1, σ_X = τ_X = 0. To simplify the notation, consider R = 0. The conditional processes of the Poisson process are the binomial processes, so the same conclusion can be obtained more directly without invoking explicitly any of the transformations. (Compare [5, Proposition 4.4].) Let b(n, p) denote the binomial distribution with sample size n and probability of success p. For fixed N ∈ N, define a Markov process X = (X_t)_{t∈[0,V]} by the (consistent) family of marginal and conditional distributions with X_t ∼ b(N, t/V); the process X is called a binomial process with parameter N. It is elementary to see that the conditional distribution of the increment is X_t − X_s | F_{s,u} ∼ b(X_u − X_s, (t − s)/(u − s)). Therefore X is a harness, i.e., (1.1) holds, and for any s, t, u ∈ [0, V], s < t < u,

Var(X_t | F_{s,u}) = (X_u − X_s)(t − s)(u − t)/(u − s)^2.

An easy computation (or an application of Theorem 1.1) shows that under the appropriate normalization the process (Y_t)_{t≥0} is a quadratic harness with parameters θ_Y = V/N and the remaining parameters as claimed. On the other hand, it is immediate that X is a bridge obtained by conditioning a Poisson process.
It is interesting to see which properties of a quadratic harness are preserved by conditioning. The only universal invariant that we found is related to the parameter q of the Askey-Wilson law from the construction in [1].
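The key bridging fact for the Poisson case — that, conditionally on the endpoints, the increments are binomial — can be verified in exact arithmetic, since the exponential factors cancel and λ drops out (an illustration, not from the paper):

```python
from fractions import Fraction as F
from math import comb, factorial

def poisson_bridge_pmf(k, n, s, t, u, lam):
    # P(N_t - N_s = k | N_u - N_s = n) via independent Poisson increments;
    # the factor exp(-lam*(u-s)) cancels between numerator and denominator.
    num = (lam*(t - s))**k / factorial(k) * (lam*(u - t))**(n - k) / factorial(n - k)
    den = (lam*(u - s))**n / factorial(n)
    return num / den

s, t, u, n, lam = F(1), F(2), F(5), 7, F(3)
p = (t - s) / (u - s)
for k in range(n + 1):
    binom = comb(n, k) * p**k * (1 - p)**(n - k)
    assert poisson_bridge_pmf(k, n, s, t, u, lam) == binom
print("Poisson bridge increments are b(n, (t-s)/(u-s)); lambda cancels")
```

The disappearance of λ is exactly why the bridge depends only on the conditioning values, not on the intensity of the underlying Poisson process.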
(iii) If Z is a q-Meixner process, i.e., η_Z = σ_Z = 0, γ = q ∈ [−1, 1], and the conditioning is with R = 0 and z_R = 0, then (4.9) holds.
Proof. The first statement follows from (1.23). Formula (4.8) follows by a direct computation from (1.21)-(1.23). Formula (4.9) follows from the expressions in Remark 1.2.
Remark 4.1. Indeed, from Proposition 4.1 we can read off further cases. In particular, the quadratic harness Y that arises by conditioning from the negative binomial process Z has the resulting parameters. (In fact, Y arises from a discrete "negative hypergeometric" process.) Similarly, a quadratic harness Y arises from a conditional bridge of the "hyperbolic Meixner" process Z with θ^2 < 4τ.
Proof of Theorem 1.3. From (1.14) it is enough to construct a quadratic harness with parameters σ_X = τ_X ∈ (0, 1), θ_X = −η_X and γ_X = 1 − 2σ_X. To do so, we determine the parameters of Z from (4.2) and (4.3), noting that for a Meixner process ∆_{RV} = (Z_V − Z_R)/(V − R) can take any positive real value, which we fix first.

Gluing construction
This section is motivated by the construction of a classical bi-Poisson process from a pair of two conditionally independent Poisson processes [6], and by another recent construction of a quadratic harness from two conditionally independent copies of a negative binomial process [16]. These constructions essentially consist of choosing an appropriate deterministic moment of time V so that the Z_V-conditional (and Z_V-conditionally independent) processes X⁺ := {Z_t : t > V}|Z_V and X⁻ := {Z_t : t < V}|Z_V arise as space-time transforms of the above-mentioned Lévy processes.
We remark that in principle all quadratic harnesses arise from such a gluing construction. A fixed V > 0 can be treated as the upper endpoint V in Remark 1.2, resulting in the process X⁻ = (Z_t)_{t≤V} conditioned with respect to Z_V; on the other hand, we can treat this value as the left endpoint R in Remark 1.2 and consider the process X⁺ = (Z_t)_{t≥R} conditioned with respect to Z_R. Then both processes are quadratic harnesses, and by the Markov property they are Z_V-conditionally independent. So, for example, a Poisson process arises from gluing a binomial process with another Poisson process, and a Wiener process arises from gluing a Brownian bridge with another Wiener process.
The question of interest here is when the "components" X⁻ and X⁺ to be glued are in some sense "simpler" than the resulting process. Using Theorem 1.2′ and Remark 1.2 we can recognize when the components of such a gluing construction come from the well-understood class of q-Meixner processes. (Since we are proving necessity only, we do not analyze X⁺.)
Proposition 5.1. Let Z ∈ QH(η, θ; σ, τ; γ) be defined on (0, ∞). If there is a deterministic moment of time V > 0 such that the Markov process X⁻, obtained as {Z_t : t < V} conditioned with respect to Z_V, can be transformed via (1.27) into a q-Meixner process Y, then one of the following cases must occur:
(i) γ = 1, σ = τ = 0 and η = θ = 0. (Then X is the Wiener process and the construction indeed works with any V > 0.)
(ii) γ = 1, σ = τ = 0 and ηθ > 0. (Then V = θ/η, X is the Poisson process with a parameter λ which depends on Y_V, and the construction indeed works, see [6].)
(iii) η√τ = θ√σ, and γ = 1 + 2√(στ) > 1. (Then V = τ/σ and X ∈ QH(0, θ_X; 0, τ_X; 1), with the sign of θ_X^2 − 4τ_X determined by the sign of θ^2 − 4τ of the process Y.)
Proof. If Y comes from gluing a q-Meixner process, then the conditional bridge Y⁻ corresponding to R = 0 and the given V exists and can be transformed into a q-Meixner process X with parameters given in Remark 1.2(i).
We remark that [16] provided a gluing construction of two conditionally independent copies of negative binomial processes. The resulting quadratic harness on (0, ∞) reaches the "upper limit" γ = 1 + 2√(στ) of the bound in [3, Theorem 2.2], and for this process the product στ can take arbitrary values in (0, ∞). We do not yet know whether the processes in Proposition 5.1 indeed exist for all signs of θ^2 − 4τ.