Sample path properties of permanental processes

Let $X_{\alpha}=\{X_{\alpha}(t),t\in T\}$, $\alpha>0$, be an $\alpha$-permanental process with kernel $u(s,t)$. We show that $X^{1/2}_{\alpha}$ is a subgaussian process with respect to the metric $\sigma (s,t)= (u(s,s)+u(t,t)-2(u(s,t)u(t,s))^{1/2})^{1/2}$. This allows us to use the vast literature on sample path properties of subgaussian processes to extend these properties to $\alpha$-permanental processes. Local and uniform moduli of continuity are obtained as well as the behavior of the processes at infinity. Examples are given of permanental processes with kernels that are the potential density of transient L\'evy processes that are not necessarily symmetric, or with kernels of the form $ \hat u(x,y)= u(x,y)+f(y)$, where $u$ is the potential density of a symmetric transient Borel right process and $f$ is an excessive function for the process.

When $u(s,t)$ is symmetric and is a kernel that determines a 1/2-permanental process $Y_{1/2}=\{Y_{1/2}(t), t\in T\}$, then $Y_{1/2}\stackrel{law}{=}\{G^2(t)/2, t\in T\}$, where $\{G(t), t\in T\}$ is a mean zero Gaussian process with covariance $u(s,t)$. In this case
$$\sigma(s,t)=\left(E\left(G(t)-G(s)\right)^2\right)^{1/2}. \qquad (1.7)$$
In [11] and [16] we use these observations to show that many properties of Gaussian processes that are obtained using their bivariate distributions also hold for 1/2-permanental processes. In this paper we show how such properties also hold for $\alpha$-permanental processes, for all $\alpha>0$. Theorem 1.1 is the critical step in this work.
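As a quick numerical illustration of this identity, the following sketch samples $G$ with a hypothetical two-point symmetric kernel (the values $u(1,1)=u(2,2)=1$, $u(1,2)=0.6$ are assumptions for illustration, not taken from the paper), and checks that $Y_{1/2}=G^2/2$ has mean $u(t,t)/2$ and that the empirical value of $E(G(1)-G(2))^2$ matches $\sigma^2(1,2)=u(1,1)+u(2,2)-2u(1,2)$, as in (1.7):

```python
import math, random

# Hypothetical symmetric two-point kernel u; any positive definite choice works.
u11, u22, u12 = 1.0, 1.0, 0.6

# Cholesky factor of the 2x2 covariance [[u11, u12], [u12, u22]].
l11 = math.sqrt(u11)
l21 = u12 / l11
l22 = math.sqrt(u22 - l21 * l21)

random.seed(7)
n = 200_000
sum_y1 = sum_inc2 = 0.0
for _ in range(n):
    z1, z2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    g1 = l11 * z1             # G(1)
    g2 = l21 * z1 + l22 * z2  # G(2), with Cov(G(1), G(2)) = u12
    sum_y1 += g1 * g1 / 2.0   # Y_{1/2}(1) = G^2(1)/2
    sum_inc2 += (g1 - g2) ** 2

mean_y1 = sum_y1 / n      # should be near u11/2 = 0.5
sigma2_hat = sum_inc2 / n # should be near u11 + u22 - 2*u12 = 0.8
```

The second estimate is exactly the squared quantity in (1.7) for this kernel.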
The reason we present these theorems in an appendix is that the results for the uniform modulus of continuity and local modulus of continuity in (8.6) and (8.8) are quite abstract. However, as we show in [16, Example 4.1], when $T=R_+$ and there exists an increasing function $\varphi$ such that for all $0\le s,t<\infty$,
$$\sigma(s,t)\le \varphi(|t-s|), \qquad (1.11)$$
where
$$\int_0^{1/2}\frac{\varphi(u)}{u}\left(\log 1/u\right)^{1/2}\,du<\infty, \qquad (1.12)$$
then (8.6) and (8.8) give results for 1/2-permanental processes that are the same as familiar results for Gaussian processes, although we don't get the best constants.
Note that when $X_{1/2}$ is 1/2 times the square of Brownian motion, its kernel $u$ satisfies $u(h,h)=h$. In this case the upper bound in (1.15) is well known to be best possible.
The next theorem gives uniform moduli of continuity.

Theorem 1.3 Let $X_\alpha=\{X_\alpha(t), t\in[0,1]\}$ be an $\alpha$-permanental process with kernel $u(s,t)$ and sigma function $\sigma(s,t)$ for which (1.11) and (1.12) hold. Assume, furthermore, that $\varphi(t)$ is regularly varying at zero with positive index. Then (1.17) holds.

We also investigate the behavior of permanental processes at infinity. The following simple limit theorem is best possible for the squares of some Gaussian processes with stationary increments.

Theorem 1.4 Let $X_\alpha=\{X_\alpha(t), t\in R_+\}$ be an $\alpha$-permanental process with kernel $u(s,t)$ and sigma function $\sigma(s,t)$ that satisfies (1.11) and (1.12). Then
$$\limsup_{t\to\infty}\frac{X_\alpha(t)}{u^*(t,t)\log t}\le 1 \quad a.s. \qquad (1.18)$$
Under additional hypotheses we get the familiar iterated logarithm in the denominator.
Theorem 1.5 Under the hypotheses of Theorem 1.4 assume, furthermore, that $u(t,t)$ is regularly varying at infinity with positive index and $\varphi^2(t)=O(u(t,t))$ as $t\to\infty$. Then
$$\limsup_{t\to\infty}\frac{X_\alpha(t)}{u(t,t)\log\log t}\le 1 \quad a.s. \qquad (1.19)$$
We require (1.12) because we are considering the processes on $R_+$, so in order for them to behave well for all $0<t<\infty$ they must be continuous. We also want to study the behavior of permanental sequences $X_\alpha=\{X_\alpha(t_n), n\in N\}$ in which $\{t_n\}$ has no limit points, or in which $\lim_{n\to\infty}t_n=t_0$ but $X_\alpha$ is not continuous at $t_0$. Processes of this sort are treated in the next theorem.

Theorem 1.6 Let $X_\alpha=\{X_\alpha(t_n), n\in N\}$ be an $\alpha$-permanental sequence with kernel $u(t_j,t_k)$ and sigma function $\sigma(t_j,t_k)$.
The potentials in Examples 1.1 and 1.2 are not symmetric when $\beta\neq 0$. Prior to this paper there have not been examples of permanental processes that do not have symmetric kernels other than this case, i.e., when the kernel is the potential of a Lévy process. The next theorem shows how we can modify a very large class of symmetric potentials so that they are no longer symmetric but are still kernels of permanental processes.

Theorem 1.9 Let $S$ be a locally compact set with a countable base. Let $X=(\Omega, F_t, X_t, \theta_t, P^x)$ be a transient symmetric Borel right process with state space $S$ and continuous strictly positive potential densities $u(x,y)$ with respect to some $\sigma$-finite measure $m$ on $S$. Then for any excessive function $f$ of $X$,
$$\widehat u(x,y)=u(x,y)+f(y), \qquad x,y\in S,$$
is the kernel of an $\alpha$-permanental process.
It is easy to check that for any positive measurable function $h$,
$$f(x)=Uh(x)=\int_S u(x,y)h(y)\,dm(y)$$
is excessive for $X$. Such a function $f$ is called a potential function for $X$.
All the potential functions considered in this paper are continuous. This is discussed at the beginning of Section 6. We describe other excessive functions, some of which are not potentials, in the next two examples.
Let $B=\{B(t), t\in R_+\}$ be Brownian motion killed after an independent exponential time with parameter $\lambda^2/2$, or, equivalently, with mean $2/\lambda^2$. The process $B$ has potential densities
$$u(x,y)=\frac{1}{\lambda}\,e^{-\lambda|x-y|}, \qquad x,y\in R.$$
Examples of excessive functions for $B$ are $e^{rx}$ for $|r|\le\lambda$ and $q+|x|^\beta$ for $\beta\ge 2$ and $q\ge q_0$, where $q_0$ depends on $\beta$ and $\lambda$. It follows from Theorem 1.9 that for functions $f$ that are excessive for $B$, $u(x,y)+f(y)$ is the kernel of an $\alpha$-permanental process on $[0,1]$ for all $\alpha>0$.
The function e −λ|s−t| , s, t ∈ R + , is also the covariance of a time changed Ornstein-Uhlenbeck process.
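A quick way to confirm numerically that $e^{-\lambda|s-t|}$ is a legitimate covariance on a finite grid is to run a Cholesky factorization, which succeeds only for positive definite matrices. A minimal sketch, with an assumed value $\lambda=2$ and an assumed grid (both chosen only for illustration):

```python
import math

def cholesky(a):
    """Plain Cholesky factorization; raises ValueError if the matrix
    is not (numerically) positive definite."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                d = a[i][i] - s
                if d <= 0:
                    raise ValueError("not positive definite")
                l[i][j] = math.sqrt(d)
            else:
                l[i][j] = (a[i][j] - s) / l[j][j]
    return l

lam = 2.0
grid = [0.1 * k for k in range(25)]
cov = [[math.exp(-lam * abs(s - t)) for t in grid] for s in grid]
chol = cholesky(cov)  # succeeds, so cov is a valid Gaussian covariance
```

The factorization completing without error is the positive-definiteness check; the factor can also be reused to sample the corresponding Gaussian (Ornstein-Uhlenbeck-type) process on the grid.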
almost surely.
We now obtain an upper bound for the behavior at infinity.
Theorem 1.12 For any positive concave function $f$ on $(0,\infty)$, and $\alpha>0$,
$$u_f(s,t)=s\wedge t+f(t), \qquad s,t>0,$$
is the kernel of an $\alpha$-permanental process $Z_{\alpha,f}=\{Z_{\alpha,f}(t), t>0\}$, and (1.44) holds. The next theorem describes the behavior of $\{Z_{\alpha,f}(t), t>0\}$ as $t\to 0$ when $\lim_{t\to 0}f(t)/t=\infty$.

Theorem 1.13 Let $Z_{\alpha,f}$ be the $\alpha$-permanental process defined in Theorem 1.12, for $f$ a positive concave function on $[0,\infty)$, with the additional property that (1.46) holds for some $\delta>0$. Then there exists a coupling of $Z_{\alpha,f}$ with a gamma random variable $\xi_{\alpha,1}$, with shape $\alpha$ and scale 1, such that (1.47) holds. With an additional condition on $f$ we can describe the behavior of $Z_{\alpha,f}$ near $\xi_{\alpha,1}$ more precisely.

Theorem 1.14 Let $Z_{\alpha,f}$ be the $\alpha$-permanental process defined in Theorem 1.12, for $f$ a positive concave function on $(0,\infty)$ that is regularly varying at zero with index less than 1. Then (1.48) holds, where $\xi_{\alpha,1}$ is defined in Theorem 1.13.
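One sufficient condition for a finite matrix to be the kernel of $\alpha$-permanental vectors for all $\alpha>0$ is that it be nonsingular with an inverse whose off-diagonal entries are nonpositive (an M-matrix); such matrices are, up to normalization, potentials of transient Markov chains. The sketch below checks this numerically for the kernel $u_f(s,t)=s\wedge t+f(t)$ that appears in the proof of Theorem 1.12, with a hypothetical concave $f(t)=\sqrt t$ and an assumed grid of points:

```python
import math

def invert(a):
    """Gauss-Jordan inversion with partial pivoting (small matrices only)."""
    n = len(a)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[p] = aug[p], aug[col]
        piv = aug[col][col]
        aug[col] = [x / piv for x in aug[col]]
        for r in range(n):
            if r != col:
                fac = aug[r][col]
                aug[r] = [x - fac * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

f = math.sqrt                       # hypothetical positive concave function
ts = [1.0, 2.0, 3.0]                # assumed grid
# Kernel u_f(s, t) = s ^ t + f(t); note it is not symmetric.
kernel = [[min(s, t) + f(t) for t in ts] for s in ts]
inv = invert(kernel)
off_diag_max = max(inv[i][j] for i in range(3) for j in range(3) if i != j)
# off_diag_max <= 0 (up to rounding): the inverse is an M-matrix
```

The off-diagonal entries of the inverse come out nonpositive, consistent with the Markov-process representation behind Theorem 1.12.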
Other examples are given in the body of this paper.

Theorem 1.1 is the main result in this paper. It deals, simply, with pairs of permanental random variables. Let $Z_\alpha=(Z_\alpha(1), Z_\alpha(2))$ be an $\alpha$-permanental random variable with kernel $K$ in (1.49). We point out in the paragraph containing (1.2) that $a$, $b$ and $|K|$ are $\ge 0$. In addition we can take $\gamma\ge 0$.

The next theorem is the main ingredient in the proof of Theorem 1.1.

Theorem 1.15 Let $Z_\alpha=(Z_\alpha(1), Z_\alpha(2))$ be an $\alpha$-permanental random variable with kernel $K$ in (1.49). Set $\sigma^2=a+b-2\gamma$. Then for all $\lambda\ge 1$, (1.51) holds for some constant $C_\alpha$, depending only on $\alpha$.
Note that by the inequality between arithmetic and geometric means and the fact that $|K|=ab-\gamma^2\ge 0$, so that $\gamma\le(ab)^{1/2}$,
$$\sigma^2=a+b-2\gamma\ge 2(ab)^{1/2}-2\gamma\ge 0. \qquad (1.52)$$
Hence if $\sigma=0$ we have equality throughout (1.52), which implies that $a=b$ and $|K|=0$. We point out prior to the statement of Lemma 2.1 that this implies that $X_\alpha(1)=X_\alpha(2)$ almost surely.

We are interested in local moduli of continuity and the rate of growth of $X^{1/2}_\alpha$ and $X_\alpha$. It follows from Theorem 1.15 that when $\alpha\le 1/2$, (1.53) holds for some constant $C_\alpha$, depending only on $\alpha$. It is well known (see, e.g., [15, Lemma 5.1.3]) that for $\bar G=\{\bar G(t), t\in T\}$, a mean zero Gaussian process with covariance $u(s,t)/2$, (1.54) holds. Comparing (1.53) and (1.54), it is clear that upper bounds for the rates of growth of Gaussian processes that are proved solely by using the Borel-Cantelli Lemma applied to increments of the process should also hold for the square roots of permanental processes when $\alpha\le 1/2$. This is the case even when $\alpha>1/2$ and the exponential in (1.53) is multiplied by a power of $\lambda$.
We collect these observations into the following lemma.

We continue with the proof of Theorem 1.15. From now on we assume that $\gamma>0$ and $|K|>0$, which implies that $\sigma>0$. We use the probability distribution of $(X_1,X_2)$ that is given in [13, Theorem 1.1] in terms of the modified Bessel function of the first kind.

Theorem 2.1 Let $X=(X_1,X_2)$ be an $\alpha$-permanental random variable with kernel $K$. The probability density function of $X$ is supported on $R^2_+$, and is zero elsewhere, where $\delta=|K|$ and $I_{\alpha-1}$ is the modified Bessel function of the first kind.
We use the notation E(ξ; A) = E(ξI A ) for sets A.
To obtain an upper bound for the left-hand side of (2.15) when $\alpha<1/2$ we follow the argument from (2.18)-(2.23) and use (2.28); here, as we point out above, $\xi_1$ is a mean zero normal random variable with variance $a$ and $\xi_2$ is a mean zero normal random variable with variance $b$. Let $\eta$ be a standard normal random variable. The last line of (2.40) is bounded using the fact that $\gamma<(ab)^{1/2}$. Using (2.39) and (2.41) we get (2.14).
We use the next lemma in the proof of Lemma 2.4.
Proof To prove (2.42) we first show that $|K|\le a\sigma^2$. This is
$$ab-\gamma^2\le a(a+b-2\gamma),$$
which is equivalent to $(a-\gamma)^2\ge 0$. The same argument works with $a$ replaced by $b$.

Lemma 2.4
For γ > 0 and λ ≥ 1, where C α is a constant depending only on α.
We make a change of variables so that the left-hand side of (2.44), without the absolute value signs, can be written in terms of the joint probability density function of $(U,V)$. To find an upper bound for (2.45) we note (2.13). With this substitution, (2.45) is less than or equal to an expression involving $Z$, a normal random variable with mean zero and variance 1. In addition, for some constant $D''_\alpha$ depending only on $\alpha$, since $\delta/(ab)<1$, we get (2.44). (We can include the absolute value sign by multiplying the probability on the right by 2.)

Proof of Theorem 1.15 When $\gamma=0$ or $|K|=0$ this follows from Lemma 2.1. Now suppose that $\gamma>0$ and $|K|>0$. We write the probability in (1.51) as the sum of two terms, as in (2.53). Using Lemma 2.4 we see that we have the upper bound in (1.51) for the first probability on the right-hand side of the inequality in (2.53). By Lemma 2.2, taking $\rho=2\lambda$, we get the upper bound in (1.51) for the second probability on the right-hand side of the inequality in (2.53).
We need the next two lemmas in the proof of Theorem 1.1.
where c * is the value of c > 1 such that We have which gives (2.57).
where $c^*$ is the value of $c>1$ such that equality holds for $y_0$, the solution of $Ky_0^{n}e^{-y_0^2}=1$.
Proof of Theorem 1.1 By hypothesis $(X_\alpha(s), X_\alpha(t))$ is an $\alpha$-permanental random variable with kernel
$$\begin{pmatrix} u(s,s) & u(s,t)\\ u(t,s) & u(t,t)\end{pmatrix},$$
and the function $\sigma(s,t)$ corresponding to this kernel is as given in (1.6). Therefore, it follows from Theorem 1.15 that (2.68) holds for $\lambda\ge 1$, for some constant $C_\alpha$ that depends only on $\alpha$. Theorem 1.1 follows from (2.68) and Lemmas 2.5 and 2.6.
The next lemma is used in the proof of Lemma 2.2.
Proof Since $a$ and $b$ are interchangeable, it suffices to prove the first inequality in (2.69). Suppose that $a-\gamma\ge 0$. Then, if in addition $b>\gamma$, (2.70) holds. (Note that since $\sigma>0$, this holds when $a=\gamma$.) Next, suppose that $a>\gamma$ and $b<\gamma$; then (2.71) gives the inequality. If $a<\gamma$, then $b>\gamma$, and it follows, as in (2.72), that the inequality holds in this case as well.

3 Upper bounds for the local moduli of continuity and rate of growth of permanental processes, I

All the results in this section follow from the next lemma, which is a modification of an inequality of Fernique as presented in [8, Chapter IV.1, Lemma 1.1]. Assume, furthermore, that there exists an increasing function $\varphi$ such that for $S>0$ and all $0\le s,t\le S$, (3.2) holds. Let $n$ be an integer greater than 1. Then (3.3) holds, where $n(p)=n2^p$ and $\kappa$ is a positive function with $\kappa\ge 1$.
To give the reader some idea of where we are heading we mention that in using (3.3) we take both a and S to depend on n in such a way that the right-hand side of (3.3) is a converging sequence in n. This enables us to use the Borel-Cantelli Lemma to get upper bounds for the limiting behavior of Proof of Lemma 3.1 Consider [8, Chapter IV.2, Lemma 1.1]. This lemma is proved for S = 1. (It also assumes that Y is a Gaussian process, but only uses F (a) as defined in (3.1), in which case the right-hand side of (3.1) is independent of s and t.) Let us first assume that S = 1. The only other thing in [8, Chapter IV.1, Lemma 1.1] that might be confusing is the term α which is Thus we get (3.3) with S = 1. In particular we require that If we replace ϕ( · ) with ϕ(S · ) we get (3.3) for arbitrary S. Here we also use the fact that the right-hand side of (3.1) is defined for all s, t ∈ [0, S].
Lemma 3.1 can be used to find upper bounds for the local and uniform moduli of continuity of $Y$ and for the behavior of $Y(t)$ as $t\to\infty$. This is done in [8, Theorem 1.3, Chapter IV.2] for the local and uniform moduli of continuity of Gaussian processes $G=\{G(t), t\in R_+\}$, but the same proof also gives the behavior of $G(t)$ as $t\to\infty$. We generalize [8, Theorem 1.3, Chapter IV.2], in the case of the local modulus of continuity and the behavior at infinity, by applying Lemma 3.1 to stochastic processes with the property that (3.7) holds, where $m\ge 0$ and $C_m$ is a constant.
holds. Assume that there exists an increasing function $\varphi$ such that (3.8) and (3.9) hold. Similarly, assume that (3.10) holds uniformly.

Proof We first prove (3.14). Fix $1<\theta<2$ and set $V=\theta^n$. Choose $n_0$ such that $\theta^{n_0}\ge V_0$. Consider Lemma 3.1 with $S=\theta^n$ for $n\ge n_0$. Let $\epsilon>0$. Note that by (3.7) there exists a constant $C$ such that the required bound holds for all $n$ sufficiently large.
Consider the last term in (3.3). Take κ(p) = (3 log n(p)) 1/2 . We have that for some constant C and all n sufficiently large. Considering (3.17) and (3.18) we see that the right-hand side of (3.3) is a converging sequence in n.
The statement in (3.12) follows similarly by taking $\theta$ less than 1. We do not prove (3.12) here but only point out that it is basically the same as the proof of (3.14). The result in (3.14) is not contained in [8]. Also contained in [8, Chapter IV, Theorem 1.3] is a uniform modulus of continuity of the type given in Theorem 3.2. We do not use it because it doesn't give the constant 1 on the right-hand side of (3.74).
Examples of the processes Y that we are studying are the processes X 1/2 α in Theorem 1.1. We see from (2.68) that (3.7) holds. Therefore, we can use Lemma 3.2.
In Theorem 3.1 we consider an important class of processes for which we can lower the upper bound in (3.12) so that it is best possible. It uses the next lemma which is an immediate consequence of (2.68).
Lemma 3.3 Let $X_\alpha=\{X_\alpha(t), t\in[0,1]\}$ be an $\alpha$-permanental process with bounded kernel $u(s,t)$ and sigma function $\sigma(s,t)$. Then for any sequence $\{s_n,t_n\}$ in $(0,1]\times(0,1]$ such that $s_j\ne t_j$ for all $j\in N$, (3.35) holds.

Theorem 3.1 Let $X_\alpha=\{X_\alpha(t), t\in[0,1]\}$ be an $\alpha$-permanental process with kernel $u(s,t)$ and sigma function $\sigma(s,t)$ for which (3.8) and (3.9) hold for some function $\varphi(t)$ that is regularly varying at zero with positive index. Then (3.36) and (3.37) hold.
Proof We first prove (3.36). Let $\theta<1$ and consider Lemma 3.1 applied to $X^{1/2}_\alpha$. Since it follows from (2.68) that (3.7) holds, we see by Lemma 3.1 that (3.41) holds. If we take $a=((3+\epsilon)\log n)^{1/2}$ and $\kappa(p)=(3\log n(p))^{1/2}$, as in the proof of Lemma 3.2, we see that for all $n$ sufficiently large, the probability that the event in the second line of (3.41) occurs infinitely often is zero.
Note that by the condition that ϕ(t) is regularly varying at zero with positive index for some constant C and all n sufficiently large. Therefore, for some constant C ′ and all n sufficiently large. This shows that the integral in (3.41) is o((log n) 1/2 ϕ(S n )) as n → ∞.
Using the regular variation hypothesis again, we see that the required bound holds for some $\beta>0$ and all $n$ sufficiently large. Therefore, for any $\epsilon>0$, by taking $1-\theta$ sufficiently small, the probability that the event on the right-hand side of (3.41) occurs infinitely often is zero. By Lemma 3.3, since $\sigma(s,t)\le\varphi(|t-s|)$, and combining this with the statement in the sentence containing (3.46), we get the desired estimate. Since $\varphi(t)$ is asymptotic to a monotonically increasing function near zero, we get (3.49) a.s., and since this holds for any $\epsilon>0$ we get (3.36). Here we also use (3.46) and the fact that $\varphi(\theta^{n+1})\le Cu^{1/2}(\theta^{n+1},\theta^{n+1})$.

The proof of (3.37) is more subtle. We use the following lemma.
Lemma 3.4 Let $X=\{X(t), t\in T\}$ be an $\alpha$-permanental process with kernel $u(s,t)$ and sigma function $\sigma(s,t)$, such that $u(t,t)>0$ for all $t\in T$. Let
$$\widetilde X(t)=\frac{X(t)}{u(t,t)}, \qquad t\in T. \qquad (3.53)$$
Then $\widetilde X=\{\widetilde X(t), t\in T\}$ is an $\alpha$-permanental process with kernel
$$\widetilde u(s,t)=\frac{u(s,t)}{(u(s,s)u(t,t))^{1/2}}. \qquad (3.54)$$
Let $\widetilde\sigma(s,t)$ be the sigma function of $(\widetilde X_s,\widetilde X_t)$. Then (3.55) holds.

Proof To verify (3.54), note that for any $t_1,\dots,t_n$ the Laplace transform of $(X(t_1),\dots,X(t_n))$ can be written in terms of $U$ and $\Lambda$, where $U$ is the $n\times n$ matrix with entries $u(t_i,t_j)$ and $\Lambda$ is a diagonal matrix with entries $\lambda_i$, $1\le i\le n$. Let $D$ be the diagonal matrix with entries $(u(t_i,t_i))^{-1/2}$, $1\le i\le n$; conjugating $U$ by $D$ gives (3.54).
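A small numerical illustration of the normalization in (3.54), with a hypothetical kernel matrix (the entries below are assumptions for illustration): conjugating by the diagonal matrix with entries $u(t_i,t_i)^{-1/2}$ produces a kernel with unit diagonal.

```python
import math

# Hypothetical nonsymmetric kernel matrix U with positive diagonal.
U = [[2.0, 0.5, 0.3],
     [0.7, 1.5, 0.4],
     [0.2, 0.6, 3.0]]

d = [1.0 / math.sqrt(U[i][i]) for i in range(3)]
# tilde_u[i][j] = u(t_i, t_j) / (u(t_i, t_i) u(t_j, t_j))^{1/2}, as in (3.54)
tilde_u = [[d[i] * U[i][j] * d[j] for j in range(3)] for i in range(3)]
```

The diagonal of `tilde_u` is identically 1, which is what makes the normalized process $\widetilde X$ convenient for the proof of (3.37).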
Proof of Theorem 3.1, continued To prove (3.37) we consider $\widetilde X_\alpha(t)$ and $\widetilde\sigma(s,t)$ as defined in (3.53) and (3.55) for $t\in(0,1]$. Let $\theta<1$. Then, by Lemma 3.4, the corresponding bound holds for all $\theta^{n+1}\le s,t\le\theta^n$. Since $\varphi$ is regularly varying with positive index, say $\beta$, we see that for all $\epsilon>0$ there exists an $n'_0$ such that the bound holds for all $n\ge n'_0$. Also, since $\varphi$ is regularly varying, it is asymptotic to an increasing regularly varying function with index $\beta$. To simplify the proof let us simply take $\varphi$ to be increasing. By hypothesis $\varphi^2(h)\le Cu^*(h,h)$ for all $h$ sufficiently small, therefore for all $h\in[\theta^{n+1},\theta^n]$, which implies that $\varphi^2(\theta^{n+1})\le Cu^*(\theta^{n+1},\theta^{n+1})$. Using (3.60) we see that for all $\epsilon'>0$ and $\theta$ sufficiently close to 1, and $\theta^{n+1}\le s,t\le\theta^n$, the increment bound holds. It follows from this that the right-hand side of (3.41), applied to $\widetilde X_\alpha$ with dominating function $\varphi_n$, is less than $\epsilon'(\log n)^{1/2}$ for all $n$ sufficiently large. Consequently, the event in question occurs only finitely often, almost surely, as $n\to\infty$. This gives (3.37).
The next lemma gives relationships between the sigma function of a kernel and its majorizing function $\varphi$.

Lemma 3.5 Let $X$ be an $\alpha$-permanental process with kernel $u(s,t)$ and suppose that $u(0,0)=0$. Then
$$\sigma^2(0,t)=\sigma^2(t,0)=u(t,t). \qquad (3.67)$$
If, in addition, $\sigma^2(s,t)$ is a function of $|t-s|$, a corresponding identity necessarily holds; in general, an analogous bound holds if $\varphi$ satisfies (3.8).

Proof Note that for any $\alpha$-permanental process $X$, $u(0,0)u(t,t)\ge u(0,t)u(t,0)$; since $u(0,0)=0$ this gives $u(0,t)u(t,0)=0$, and (3.67) follows.

Proof Theorem 1.15 asserts that $X_\alpha$ has subgaussian increments. Then (3.74) follows from [14, Theorem 4]. In this reference the theorem is proved for Gaussian processes with stationary increments, but it only uses the estimate in (1.53), along with the fact that $\varphi$ is greater than the $L^2$ estimate for the increments of the Gaussian process. The difference by the factor 2 is explained by the observation in the line prior to (1.54). In [14], $\sigma(h)$ is written as $\sigma(h)=\exp(-g(\log 1/h))$ and it is required that $1/g'(\log 1/h)=o(\log 1/h)$. Note that when $\sigma(h)=Ch^\alpha$, for some constant $C$, $1/g'(\log 1/h)=1/\alpha$, much weaker than what is allowed. However, it isn't necessary to require that $\sigma$ is differentiable. All the estimates in the proof of [14, Theorem 4] that use the condition $1/g'(\log 1/h)=o(\log 1/h)$ follow easily from the condition that $\varphi$ is regularly varying at zero with positive index. For example, when $\varphi$ is regularly varying at zero with index $\beta$, instead of [14, (2.16)], we have the corresponding bound as $t_k\to 0$, for all $\epsilon>0$. It is also required that $\sigma(h)$ is concave, but wherever this is used it is easy to get the same estimates when $\varphi$ is regularly varying at zero with index $\beta\le 1$, which is always the case.
Our interest in $\{X^{1/2}_\alpha(t)-X^{1/2}_\alpha(s);\, s,t\in R_+\}$ is primarily to use the results obtained to study the behavior of $\{X_\alpha(t)-X_\alpha(s);\, s,t\in R_+\}$. The next lemma does this.
Proof The statement in (3.79) is trivial. For (3.78) we note that The result for the uniform modulus follows similarly since The proof follows from Lemma 3.2 applied to Y (t) = X 1/2 (t), Lemma 3.6 and the next lemma.  Note that The integrand of the last integral above is the derivative of −2(log 1/u − log 1/(2h)) 1/2 . Consequently, (4.7) Combining these relationships we get (4.3).
It is interesting to note that for certain functions ϕ, except for a multiplicative factor, we can reverse the inequality in (4.3). (4.9) Proof We have Consequently, we get (4.9).
We now examine the relationship between $\sigma(s,t)$ and the $L^2$ metric for Gaussian processes. Let
$$\rho^2(s,t):=u(s,s)+u(t,t)-\left(u(s,t)+u(t,s)\right). \qquad (4.14)$$
Although we don't require that $u(s,t)$ is symmetric, when $u(s,t)$ is symmetric $\rho(s,t)=\sigma(s,t)$. In general we get the next lemma.
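For kernels with nonnegative entries (e.g., potential densities), the two quantities are always ordered, since $2(xy)^{1/2}\le x+y$ for $x,y\ge 0$; a one-line check:

```latex
\rho^2(s,t)=u(s,s)+u(t,t)-\bigl(u(s,t)+u(t,s)\bigr)
\le u(s,s)+u(t,t)-2\bigl(u(s,t)u(t,s)\bigr)^{1/2}=\sigma^2(s,t),
```

so $\rho(s,t)\le\sigma(s,t)$, with equality precisely when $u(s,t)=u(t,s)$.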
and $v$ is symmetric. If, in addition, $\inf_{s,t\in I}u(s,t)\ge\delta$ for some interval $I$, then (4.20) holds for all $s,t\in I$.

Proof The inequality in (4.19) follows immediately from (4.15). To obtain (4.20), note that (4.21) holds for $a<b$. Consider $u(s,t)$ in (4.18) and suppose that $h(t)>h(s)$. Then the claim follows by (4.21), for $s,t\in I$.

Consequently, it follows from (4.20) that $\sigma^2(s,t)$ is bounded by $\lambda|t-s|$ as $s,t\to 0$, whereas (4.19) only gives that it is bounded by $(\lambda+r)|t-s|$ as $s,t\to 0$.
Proof of upper bounds in Theorem 1.7 We show in [11, Section 5] that $u_{T_0,\gamma,\beta}=R_{\gamma,\beta}+H_{\gamma,\beta}$, where $R_{\gamma,\beta}$ is symmetric and $H_{\gamma,\beta}$ is antisymmetric. By Lemma 4.3 the sigma function for $Y_{\alpha;\gamma,\beta}$, which we denote by $\sigma_{T_0;\gamma,\beta}$, satisfies (4.26), where, for the last equation, we use the facts that $R_{\gamma,\beta}$ is symmetric, $H_{\gamma,\beta}$ is antisymmetric and $H_{\gamma,\beta}(x,x)\equiv 0$. Using this and (4.26), we get (4.31). Since $\varphi$ is regularly varying at zero with positive index, the upper bounds in Theorem 1.7 follow from Theorems 1.2, 1.3 and 1.5.
We refer to the permanental process with kernel u T 0 ,γ,β as FBMQ γ,β . We use this notation because when β = 0, u T 0 ,γ,β is the covariance of fractional Brownian motion of index γ, (i.e. FBM). We add the Q, for quadratic, to denote the square of this process, as one does in the designation of the squared Bessel processes, (BESQ).
Proof of Theorem 1.8 It follows from [11, Lemma 5.2] that Consequently, as in (4.29) (1 − cos λ(x − y)) Re(ρ + ψ γ,β (λ)) |ρ + ψ γ,β (λ)| 2 dλ (4.33) We write It follows from [11, (5.39)] that the first integral to the right of the inequality sign is equal to πC γ,β |x−y| γ . The second integral to the right of the inequality sign is bounded by 2 ρ where θ = |x − y|. The first of these integrals is O(θ 2 ) as θ → 0. The third is and O(θ 1+2γ ) log 1/θ if γ < 1/2, all as θ → 0. Thus we see that the first integral in (4.33) is bounded by 2C γ,β |x − y| γ . We now consider the last integral in (4.33) It follows from [11, (5.40)] that the first integral to the right of the equal sign in (4.36) is equal to πβ sign (x − y)C γ,β |x − y| γ . We now show that the second integral is little o of this. Using (4.37) we see that the last integral in (4.36) is bounded by 2ρ times It is easy to see that the first integral to the right of the equal sign in (4.38) is O(θ) as θ → 0 and the second is O(θ 1+2γ ) as θ → 0. Therefore, the absolute value of the second integral in (4.33) is bounded by 2|β|C γ,β |x − y| γ .
Using the bounds for the last two integrals in (4.33), we see that the sigma function satisfies the same bound as in (4.31). Since $\varphi$ is regularly varying at zero with positive index, the upper bounds in Theorem 1.8 follow from Theorems 1.2, 1.3 and 1.5.
Proof of Theorem 1.10 The $\alpha$-permanental process $X_{\alpha,f}$ has kernel $u_f(s,t)$. We see from (4.20) in Corollary 4.2 that for $s,t\in[0,\delta/\lambda]$, the sigma function of $X_{\alpha,f}$ satisfies the stated bound for all $\delta$ sufficiently small. Using this and the fact that $f\in C^2$, we obtain the required estimate for $t>s$. Therefore, Theorem 1.10 follows from Theorems 1.2 and 1.3.

Rate of growth of permanental processes at infinity
We begin by considering another important class of processes for which we can decrease the upper bounds that can be obtained by (3.14). First we need some preliminary results.

(5.2)
(It is interesting to note that since E(X(t)) = αu(t, t), if we were to consider the rate of growth of X(t)/E(X(t)), it would depend on α. The results we give for the rate of growth of X(t)/u(t, t) do not depend on α.) The next observation is elementary.
For any ǫ > 0 we can find a δ > 0 so that is bounded uniformly in δ ≤ 1. Therefore, for any ǫ > 0 we can find a δ so that the right hand side of (5.7) is ≤ ǫ(log n) 1/2 for all n sufficiently large. It follows from this that the probability that sup t∈[nδ,(n+1)δ] |X 1/2 (t) − X 1/2 (nδ)| > ǫ(log n) 1/2 (5.10) infinitely often, is zero, for all ǫ > 0. Note that sup t∈[nδ,(n+1)δ] It follows that Therefore, using (5.11) we see that Writing log n = log nδ + log 1/δ we see from (5.4) that the first term to the right of the inequality in (5.13) is less than or equal to 1 almost surely. By (5.10), the second term to the right of the inequality in (5.13) is bounded by ǫ/(u * (T 0 , T 0 )) 1/2 almost surely. Since this is true for all ǫ > 0 we get (1.18).
Since $u(t,t)$ is regularly varying at infinity, it is asymptotic to a monotonic function at infinity. Therefore the corresponding bound holds; since it holds for all $\theta>1$, we get (1.19).
Proof of upper bound in Theorem 1.11 Let is an α-permanental process with sigma function Note that Consequently For any ǫ > 0, choose s 0 so that Therefore, lim t→∞ f (t) ≤ ǫ for all ǫ > 0.
We now give some background material that may be needed to understand the kernels considered here, in which $C_0\ge 0$ is a constant and $f(t)$ is a positive concave function that is $o(t)$ at infinity. To see this, note that $f'_r(t)$, the right-hand derivative of $f$, is decreasing in $t$.
we see that $f$ is positive. In addition, since $\lim_{t\to\infty}f'_r(t)=0$, $f$ is $o(t)$ at infinity.

Proof of upper bounds in Theorem 1.12 To prove (1.44) it suffices to work with the $\alpha$-permanental process $\{Z_{\alpha,f}(t), t\ge 1\}$. This process has kernel $u_f(s,t)=s\wedge t+f(t)$, $s,t\ge 1$. Note that $u_f(t,t)=t+f(t)$ is increasing and, by (5.32), is regularly varying at infinity with positive index. We see from (4.19) in Lemma 4.3 that when $f$ is concave the sigma function of $Z_{\alpha,f}$ satisfies $\sigma^2(s,t)=O(u_f(t,t))$ as $t\to\infty$. Therefore the upper bound in (1.44) follows from Theorem 1.5 and (5.32).

Partial rebirthing of transient Borel right processes
Let $S$ be a locally compact set with a countable base. Let $X=(\Omega, F_t, X_t, \theta_t, P^x)$ be a transient Borel right process with state space $S$, and continuous strictly positive potential densities $u(x,y)$ with respect to some $\sigma$-finite measure $m$ on $S$. Let $\zeta=\inf\{t\,|\,X_t=\Delta\}$, where $\Delta$ is the cemetery state for $X$, and assume that $\zeta<\infty$ a.s. Let $\mu$ be a finite measure on $S$. We call the function
$$f(y)=\int_S u(x,y)\,d\mu(x) \qquad (6.1)$$
a left potential for $X$. Since $u(x,y)$ is continuous in $y$ uniformly in $x$, and $\mu$ is a finite measure, we see that $f(y)$ is continuous. See [6, Section 2].

The next theorem, which is interesting on its own, is also used in the proof of Theorem 1.9. Note that it does not require that $u$ is symmetric. In this theorem we add a point $*$ to the state space $S$ of $X$ and modify $X$ so that instead of going to $\Delta$ it goes to $*$. We then allow the process to return to $S$ from $*$ with a probability $p<1$, or to go to $\Delta$ with probability $1-p$. Let $\widehat X$ denote the modified process on the enlarged space. We see that when $X$ "dies", $\widehat X$ has a chance to be reborn, after which it continues to evolve the way $X$ did.

Theorem 6.1 Let $X=(\Omega, F_t, X_t, \theta_t, P^x)$ be a transient Borel right process with state space $S$, as above. Then for any left potential $f$ for $X$, there exists a transient Borel right process $\widehat X=(\widehat\Omega, \widehat F_t, \widehat X_t, \widehat\theta_t, \widehat P^x)$ with state space $\widehat S=S\cup\{*\}$, where $*$ is an isolated point, such that $\widehat X$ has potential densities
$$\widehat u(x,y)=u(x,y)+f(y), \quad x,y\in S, \qquad (6.2)$$
$$\widehat u(*,y)=f(y), \quad\text{and}\quad \widehat u(x,*)=\widehat u(*,*)=1,$$
with respect to the measure $\widehat m$ on $\widehat S$ which is equal to $m$ on $S$ and assigns a unit mass to $*$.
Proof We construct X as described prior to the statement of this theorem. Let ρ be the total mass of µ. If X starts in S it proceeds just like X until time ζ, at which time it goes to * . It stays there for an independent exponential time with parameter 1 + ρ, ρ > 0, after which it returns to S with initial law µ/(1 + ρ).
(This is what we mean by partial rebirthing.) Once in S, X continues as we just described for X starting in S. Since the measure µ/(1 + ρ) has total mass ρ/(1 + ρ), after each visit to * , X only has probability ρ/(1 + ρ) to be reborn. With probability 1/(1 + ρ) the process enters a cemetery state ∆.
We now calculate the potential densities for $\widehat X$. Let $g$ be a function on $\widehat S$ with $g(*)=0$. Then for any $x\in S$,
$$\widehat Ug(x)=\widehat E^x\int_0^\infty g(\widehat X_t)\,dt.$$
This is equal to
$$\int u(x,y)g(y)\,dm(y)+\frac{1}{1+\rho}\sum_{n=1}^\infty\left(\frac{\rho}{1+\rho}\right)^{n-1}\int d\mu(z)\int u(z,y)g(y)\,dm(y)$$
$$=\int u(x,y)g(y)\,dm(y)+\int f(y)g(y)\,dm(y),$$
which gives the first line of (6.2). The first half of the second line of (6.2) follows from a similar computation, where now we no longer have the first term in the second line of (6.3). Finally, since at each visit to $*$ the process waits there an independent exponential time with parameter $1+\rho$, and then eventually returns to $*$ with probability $\rho/(1+\rho)$, we have $\widehat u(x,*)=\widehat u(*,*)=1$. The same computation holds if we start at $*$.
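A discrete-time, finite-state sketch of this computation, under assumptions chosen purely for illustration (a hypothetical substochastic kernel $P$ on two states and a hypothetical rebirth measure $\mu$): the potential matrix of the rebirthed chain reproduces (6.2), once expected visits to $*$ are converted to occupation time via the mean holding time $1/(1+\rho)$.

```python
def invert(a):
    # Gauss-Jordan inversion with partial pivoting (small matrices only).
    n = len(a)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[p] = aug[p], aug[col]
        piv = aug[col][col]
        aug[col] = [x / piv for x in aug[col]]
        for r in range(n):
            if r != col:
                fac = aug[r][col]
                aug[r] = [x - fac * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

# Killed (substochastic) chain on S = {0, 1}; row deficits are death probabilities.
P = [[0.3, 0.2],
     [0.1, 0.4]]
U = invert([[1.0 - P[0][0], -P[0][1]],
            [-P[1][0], 1.0 - P[1][1]]])   # potential U = (I - P)^{-1}

mu = [0.2, 0.3]                            # finite rebirth measure
rho = sum(mu)                              # its total mass
f = [sum(mu[z] * U[z][y] for z in range(2)) for y in range(2)]  # left potential

# Rebirthed chain on S u {*}: death leads to *, and from * the chain re-enters S
# with law mu/(1+rho) or is killed with probability 1/(1+rho).
Ph = [P[0] + [1.0 - P[0][0] - P[0][1]],
      P[1] + [1.0 - P[1][0] - P[1][1]],
      [mu[0] / (1.0 + rho), mu[1] / (1.0 + rho), 0.0]]
N = invert([[(1.0 if i == j else 0.0) - Ph[i][j] for j in range(3)]
            for i in range(3)])
for row in N:                 # expected visits to * -> occupation time at *,
    row[2] /= 1.0 + rho       # using the mean holding time 1/(1+rho)
```

Here `N` plays the role of the potential density in (6.2): its $S\times S$ block equals $U$ plus the left potential $f$, its $*$-row equals $f$, and the occupation time of $*$ equals 1 from every starting point.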
The next lemma is used in the proof of Theorem 1.9.
Lemma 6.1 Assume that for each n ∈ N, u (n) (s, t), s, t ∈ S, is the kernel of an α-permanental process. If u (n) (s, t) → u(s, t) for all s, t ∈ S, then u(s, t) is the kernel of an α-permanental process.
Proof By the hypothesis, for all $k$ and $x_1,\dots,x_k\in S$, there exists an $\alpha$-permanental vector $(X^{(n)}_\alpha(x_1),\dots,X^{(n)}_\alpha(x_k))$ with kernel $\{u^{(n)}(x_i,x_j)\}_{i,j=1}^k$. Therefore, by definition, for all $s_1,\dots,s_k\ge 0$, its Laplace transform converges as $n\to\infty$. It follows from the extended continuity theorem [9, Theorem 5.22] that there exists a random vector $(X_\alpha(x_1),\dots,X_\alpha(x_k))$ with kernel $\{u(x_i,x_j)\}_{i,j=1}^k$. Since this is true for all $k$ and all $x_1,\dots,x_k\in S$, it follows by the Kolmogorov extension theorem that $\{u(s,t),\, s,t\in S\}$ is the kernel of an $\alpha$-permanental process.
Proof of Theorem 1.9 We apply Lemma 6.1 twice to prove the theorem. Consider a general excessive function $f$. It follows from [2, II, (2.19)] that there exists a sequence of functions $g_n\ge 0$, with both $g_n$ and
$$Ug_n(x)=\int_S u(x,y)g_n(y)\,dy \qquad (6.7)$$
bounded, such that $f(x)$ is the increasing limit of $Ug_n(x)$. If the $g_n$ are in $L^1$ then, since $u$ is symmetric, $Ug_n$ is a left potential as in (6.1). Therefore, by Theorem 6.1, $\{u(s,t)+Ug_n(t),\, s,t\in S\}$ are kernels of $\alpha$-permanental processes. Consequently, by Lemma 6.1, $\{u(s,t)+f(t),\, s,t\in S\}$ is the kernel of an $\alpha$-permanental process.

If $g_n$ is not in $L^1$ we proceed as follows. Let $C_m$ be an increasing sequence of compact sets whose union is $S$. Then $g_n 1_{C_m}\in L^1$, so that by Theorem 6.1, $\{u(s,t)+U(g_n 1_{C_m})(t),\, s,t\in S\}$ is the kernel of an $\alpha$-permanental process. Taking the limit as $m\to\infty$, it follows from Lemma 6.1 that $\{u(s,t)+Ug_n(t),\, s,t\in S\}$ is the kernel of an $\alpha$-permanental process. Since $Ug_n\to f$, we can use Lemma 6.1 again to see that $\{u(s,t)+f(t),\, s,t\in S\}$ is the kernel of an $\alpha$-permanental process.

Remark 6.1 Theorem 1.9 shows that there exists an $\alpha$-permanental process $\{Z_\alpha(t), t\in S\}$ with the kernel given in (1.34). The same proof also shows that there exists an $\alpha$-permanental process $\{\widehat Z_\alpha(t), t\in S\cup\{*\}\}$ with the kernel given in (6.2), for any function $f$ which is excessive for $X$.

Lower bounds
We use results from [17] to obtain lower bounds for the rate of growth of permanental processes or for their behavior at 0. There are several different situations that can arise, depending on the kernels of the permanental processes. We give several criteria that can be used on kernels that behave differently.

Lemma 7.1 Let $X_\alpha=\{X_\alpha(t), t\in R_+\}$ be an $\alpha$-permanental process with kernel $u(s,t)$ such that $u(t,t)>0$ for all $t\in R_+$. Set
$$\widetilde u(s,t)=\frac{u(s,t)}{(u(s,s)u(t,t))^{1/2}}, \qquad s,t\in R_+. \qquad (7.1)$$
Let $\{t_j\}_{j=1}^\infty$ be a sequence in $R_+$. Set
$$\phi^2(i,j)=2-\left(\widetilde u(t_i,t_j)+\widetilde u(t_j,t_i)\right) \quad\text{and}\quad (\phi^*_n)^2=\inf_{\substack{1\le i,j\le n\\ i\ne j}}\phi^2(i,j).$$
To complete the proof we use the next two lemmas.
Proof of lower bound in Theorem 1.11 This is an immediate application of Lemma 7.1. Consider $\{\widetilde X_{\alpha,f}(nj), j\in N\}$. It is easy to see that
$$\sup_{\substack{1\le j,k\le n\\ j\ne k}} u_f(nj,nk)=\sup_{\substack{1\le j,k\le n\\ j\ne k}}\left(f(nk)+e^{-\lambda n|k-j|}\right). \qquad (7.22)$$

Remark 8.1 We have pointed out on page 3 that when $\{u(s,t);\, s,t\in T\}$ is the potential density of a transient Markov process, $\{d_{C,\sigma}(s,t);\, s,t\in T\}$, defined in (1.6) and (1.9), is a metric on $T$. In general, if we only assume that $\{u(s,t);\, s,t\in T\}$ is a kernel of $\alpha$-permanental processes, we don't know whether $d_{C,\sigma}$ is a metric. Actually, Theorems 8.1 and 8.2 still hold if $\{d_{C,\sigma}(s,t);\, s,t\in T\}$ is not a metric. We continue to define
$$B_{d_{C,\sigma}}(t,u)=\{s;\, d_{C,\sigma}(s,t)\le u\} \qquad (8.11)$$
and everything goes through. (This is the approach we took in [16], which we wrote before we knew that when $\{u(s,t);\, s,t\in T\}$ is the potential density of a transient Markov process, $\{d_{C,\sigma}(s,t);\, s,t\in T\}$ is a metric on $T$.)