Fluctuation theory for spectrally positive additive L\'evy fields

A spectrally positive additive L\'evy field (spaLf) is a multidimensional field obtained as the sum $\mathbf{X}_{\rm t}={\rm X}^{(1)}_{t_1}+{\rm X}^{(2)}_{t_2}+\dots+{\rm X}^{(d)}_{t_d}$, ${\rm t}=(t_1,\dots,t_d)\in\mathbb{R}_+^d$, where ${\rm X}^{(j)}={}^t (X^{1,j},\dots,X^{d,j})$, $j=1,\dots,d$, are $d$ independent $\mathbb{R}^d$-valued L\'evy processes issued from 0, such that $X^{i,j}$ is non-decreasing for $i\neq j$ and $X^{j,j}$ is spectrally positive. It can also be expressed as $\mathbf{X}_{\rm t}=\mathbb{X}_{\rm t}{\bf 1}$, where ${\bf 1}={}^t(1,1,\dots,1)$ and $\mathbb{X}_{\rm t}=(X^{i,j}_{t_j})_{1\leq i,j\leq d}$. The main interest of spaLf's lies in the Lamperti representation of multitype continuous state branching processes. In this work, we study the law of the first passage times $\mathbf{T}_{\rm r}$ of such fields at levels $-{\rm r}$, where ${\rm r}\in\mathbb{R}_+^d$. We prove that the field $\{(\mathbf{T}_{\rm r},\mathbb{X}_{\mathbf{T}_{\rm r}}),{\rm r}\in\mathbb{R}_+^d\}$ has stationary and independent increments, and we describe its law in terms of that of the spaLf $\mathbf{X}$. In particular, the Laplace exponent of $(\mathbf{T}_{\rm r},\mathbb{X}_{\mathbf{T}_{\rm r}})$ solves a functional equation driven by the Laplace exponent of $\mathbf{X}$. This equation extends to higher dimensions a classical fluctuation identity satisfied by the Laplace exponents of the ladder processes. We then give an expression of the distribution of $\{(\mathbf{T}_{\rm r},\mathbb{X}_{\mathbf{T}_{\rm r}}),{\rm r}\in\mathbb{R}_+^d\}$ in terms of the distribution of $\{\mathbb{X}_{\rm t},{\rm t}\in\mathbb{R}_+^d\}$ by means of a Kemperman-type formula, well known for spectrally positive L\'evy processes.


Introduction
A spectrally positive additive Lévy field (spaLf) is defined by X_t = X^{(1)}_{t_1} + X^{(2)}_{t_2} + · · · + X^{(d)}_{t_d}, t = (t_1, . . . , t_d) ∈ R^d_+, where X^{(j)} = ^t(X^{1,j}, . . . , X^{d,j}), j = 1, . . . , d, are d independent R^d-valued Lévy processes such that X^{i,j} is non-decreasing for i ≠ j and X^{j,j} is spectrally positive (here ^t u means the transpose of the vector u ∈ R^d). SpaLf's can be considered as (non-trivial) extensions to higher dimensions of spectrally positive Lévy processes, and the purpose of this article is to develop fluctuation theory for such random fields. We refer to Chapter VII of [3] for a complete account of fluctuation theory for spectrally one-sided Lévy processes; see also [9] and [13]. (Chapter VII of [3] deals with the case of spectrally negative Lévy processes, but the results are easily transferred to the spectrally positive case.)
The particular pathwise features of spaLf's allow us to define their first passage times T_r = (T^{(1)}_r, . . . , T^{(d)}_r) at multivariate levels −r ∈ (−∞, 0]^d as the smallest index t = (t_1, . . . , t_d), in the usual partial order of R^d, satisfying X_t = −r. The distribution of the variables (T_r, X_{T_r}), r ∈ [0, ∞)^d, can then be related to the distribution of the field {X_t, t ∈ [0, ∞)^d}, where X_t = (X^{i,j}_{t_j})_{1≤i,j≤d}. In doing so we obtain some fluctuation-type identities in the general framework of multivariate stochastic fields. These results provide an intrinsic motivation for the present study, which can be considered in line with several works on additive Lévy processes by Khoshnevisan and Xiao; see for instance [11]. The original motivation comes from an extension of the Lukasiewicz-Harris coding of Bienaymé-Galton-Watson trees through downward skip-free random walks. In [7], the authors proved that multitype Bienaymé-Galton-Watson trees can be coded by the multivariate random fields (Σ_{j=1}^d S^{i,j}_{n_j}, i = 1, . . . , d), n_j = 0, 1, . . . , j = 1, . . . , d, where ^t(S^{1,j}, . . . , S^{d,j}), j = 1, . . . , d, are d independent Z^d-valued random walks such that S^{i,j} is non-decreasing for i ≠ j and S^{j,j} is downward skip-free. These random fields are the discrete-time counterparts of spaLf's, which suggests the possibility of coding continuous multitype branching trees in an analogous way. Achieving such a result seems quite complicated, as the notion of a continuous multitype tree is not clearly defined for general mechanisms. However, reducing the analysis to processes rather than trees, one may still consider the Lamperti representation, which provides a pathwise relationship between branching processes and their mechanism. This representation can be extended to continuous-time multitype branching processes by using spaLf's. This was done in [6] for the discrete-valued case and in [5] and [10] for the continuous one.
More specifically, let Z = (Z^{(1)}, . . . , Z^{(d)}) be a continuous-time multitype branching process issued from r ∈ [0, ∞)^d. Then Z can be represented as the unique pathwise solution of a Lamperti-type equation driven by Lévy processes X^{(j)}, j = 1, . . . , d, as described above. Now recall that 0 is an absorbing state for Z. It then follows from this representation that the path of Z up to its first passage time at 0 is entirely determined by the path of the spaLf up to its first passage time T_r at level −r. This fact, which is plain in the case d = 1, will be proved in the general case in the upcoming paper [8], where extinction of continuous-time multitype branching processes is characterized through path properties of spaLf's.
The next section consists of an important preliminary lemma for deterministic paths, whose aim is to prove the existence of first passage times of spaLf's and to derive their first basic properties. Then in Section 3 we will turn our attention to the law of these first passage times. In particular, we will prove that, in analogy with the one-dimensional case, their Laplace exponent is the inverse of the Laplace exponent of the spaLf. The situation for d ≥ 2 differs significantly from the one-dimensional case, as we first need to give necessary and sufficient conditions for the multivariate hitting times T_r to be finite on each coordinate, with positive probability, for all r ∈ [0, ∞)^d. (When d = 1, this is equivalent to saying that the spectrally positive Lévy process is not a subordinator.) Another fundamental difference concerns the matrix-valued field X_{T_r}, which is simply equal to −r on the set {T_r < ∞} when d = 1. In Section 4 we will focus on the law of the field (T_r, X_{T_r}) and prove that its Laplace exponent solves a functional equation driven by the Laplace exponent of the spaLf X. This equation, see (4.1) in Theorem 4.1 below, can be compared to the classical Wiener-Hopf factorization involving the ladder processes of spectrally positive Lévy processes. Then in Theorem 4.3 the distribution of (T_r, X_{T_r}) will be fully characterized in terms of the distribution of the original stochastic field X, through an extension of Kemperman's formula, see Corollary VII.3 in [3]. More specifically, our result relates the measure P(T_r ∈ dt, X_{T_r} ∈ dx) dr to the law of X through an explicit change of variables, where we set 1 = ^t(1, 1, . . . , 1). In order to prove it, we will use a similar identity recently obtained in [7] and [6] in the discrete time and space settings, together with a discrete approximation.

A preliminary lemma in the deterministic setting
We use the notation R_+ = [0, ∞), R̄_+ = [0, ∞] and [d] = {1, . . . , d}, where d ≥ 1 is an integer. The zero vector of R^d will be denoted by 0. For s = (s_1, . . . , s_d) and t = (t_1, . . . , t_d), inequalities such as s ≤ t are understood coordinatewise. Recall that a real-valued function x : R_+ → R is said to be càdlàg if it is right continuous on R_+ and has left limits on (0, ∞). Such a function is said to be downward skip-free if for all s ≥ 0, x(s) − x(s−) ≥ 0, where we set x(0−) = x(0). We also say that x has no negative jumps. We will use the notation x_t or x(t) indifferently.
We emphasize that according to our definition, some of the coordinates of the smallest solution of the system (r, x) may be infinite; this possibility is implicit in (2.1). The next lemma is a continuous time and space counterpart of Lemma 1 in [6]. The proof of the present result follows a similar scheme, but it requires more care here. It is given in the Appendix at the end of this paper.
1. There exists a solution s = (s_1, . . . , s_d) ∈ R̄^d_+ of the system (r, x) such that any other solution t of (r, x) satisfies t ≥ s. The solution s will be called the smallest solution of the system (r, x).
2. Let s and s′ be the smallest solutions of the systems (r, x) and (r′, x), respectively. If r′ ≤ r, then s′ ≤ s. Moreover, if (r_n)_{n≥0} is non-decreasing with lim_{n→∞} r_n = r, then the sequence (s_n)_{n≥0} of smallest solutions of (r_n, x) satisfies lim_{n→∞} s_n = s.
3. Let s be the smallest solution of (r, x).

4.
The smallest solution s of (r, x) satisfies s i = inf t :
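In a discrete setting, the smallest solution of part 1. can be computed by a monotone sweep: each coordinate is pushed forward until its row equation is satisfied, the others being frozen, and the sweeps are repeated until the candidate stabilizes. A minimal sketch (our own illustration, not taken from the paper: the integer-valued toy paths and the name `smallest_solution` are ours):

```python
def smallest_solution(x, r, horizon):
    """Compute the smallest s = (s_1, ..., s_d) with
    sum_j x[i][j][s_j] = -r[i] for every i (discrete analogue of (2.1)).
    x[i][j] is the path of coordinate (i, j) sampled at integer times;
    off-diagonal paths are non-decreasing, diagonal ones are downward
    skip-free with integer steps >= -1.  Returns None if no solution
    is found within `horizon`."""
    d = len(r)
    s = [0] * d
    while True:
        s_new = list(s)
        for i in range(d):
            # row sum at candidate time m, off-diagonal coordinates frozen at s
            def row(m):
                return sum(x[i][j][m if j == i else s[j]] for j in range(d))
            m = s[i]
            while m <= horizon and row(m) > -r[i]:
                m += 1
            if m > horizon:
                return None
            s_new[i] = m
        if s_new == s:
            return s  # fixed point: s solves the system and is minimal
        s = s_new

# toy paths (d = 2): diagonal entries drift down, off-diagonal non-decreasing
N = 10
x = [[[-n for n in range(N + 1)], [n // 2 for n in range(N + 1)]],
     [[0 for n in range(N + 1)], [-n for n in range(N + 1)]]]
print(smallest_solution(x, [2, 3], N))  # -> [3, 3]
```

Since the off-diagonal paths are non-decreasing, each sweep can only push the candidate forward, which is the monotonicity behind parts 1. and 2. of the lemma.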

Fluctuation theory for additive Lévy fields
Vectors of R^d will be denoted by x = (x_1, . . . , x_d) and e_i = (0, . . . , 0, 1, 0, . . . , 0) will be the i-th unit vector of R^d_+. We recall the notation ^t x for the transpose of any vector x ∈ R^d and the notations 1 = ^t(1, 1, . . . , 1), 0 = ^t(0, 0, . . . , 0). We will write ⟨x, y⟩, x, y ∈ R^d, for the usual scalar product on R^d and |x| for the Euclidean norm of x.
A matrix m = (m_{i,j}) of M_d(R) is said to be irreducible if for all i ≠ j there exist indices i_1 = i, i_2, . . . , i_n = j, for some n ≥ 1, such that m_{i_k,i_{k+1}} ≠ 0 for all k = 1, . . . , n − 1. For two matrices A and B of M_d(R), with columns a^{(1)}, . . . , a^{(d)} and b^{(1)}, . . . , b^{(d)}, respectively, we define a special product. A matrix A = (a_{i,j}) is called essentially nonnegative (or a Metzler matrix) if a_{i,j} is nonnegative whenever i ≠ j. For instance, for any element x = {(x^{i,j}_{t_j})_{i,j∈[d]}, t ∈ R^d_+} of the set E_d introduced in the previous section, the matrix x_t = (x^{i,j}_{t_j})_{i,j∈[d]} is essentially nonnegative for all t = (t_1, . . . , t_d) ≥ 0.

SpaLf's and their first hitting times
In this work, we shall consider d independent Lévy processes X^{(1)}, . . . , X^{(d)} in R^d such that, with the notation X^{(j)} = ^t(X^{1,j}, . . . , X^{d,j}) for all j ∈ [d], the process X^{j,j} is a real spectrally positive Lévy process, that is, it has no negative jumps, and for all i ≠ j, the Lévy process X^{i,j} is a subordinator. We emphasize that the processes X^{1,j}, . . . , X^{d,j} are not necessarily independent. Moreover, we do not exclude the possibility for a process X^{i,j} to be identically equal to 0, and note that for each i ∈ [d], X^{i,i} can be a subordinator. It is known, see Chap. VII in [3], that the Lévy process X^{(j)} admits all negative exponential moments. We denote by ϕ_j its Laplace exponent, that is, E[exp(−⟨λ, X^{(j)}_t⟩)] = exp(t ϕ_j(λ)), λ ∈ R^d_+. Then, from the Lévy-Khintchine formula and the above assumptions on X^{(j)}, ϕ_j takes a form in which the drift matrix a = (a_{i,j}) is essentially nonnegative, q_j ≥ 0, and π_j is a measure on R^d_+ such that π_j({0}) = 0 and the usual integrability condition holds. Note that for all j ∈ [d], ϕ_j is log-convex, i.e. the function log ϕ_j is convex on (0, ∞)^d. In particular, ϕ_j is a convex function. Moreover, for all i ≠ j and λ_1, . . . , λ_{i−1}, λ_{i+1}, . . . , λ_d, the function λ_i → ϕ_j(λ) is non-increasing.
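The displayed Lévy-Khintchine form referred to above did not survive extraction. A standard form consistent with the stated assumptions (a = (a_{i,j}) the essentially nonnegative drift matrix with columns a^{(j)}, q_j ≥ 0, π_j supported on R^d_+) would read, up to the paper's exact choice of compensator:

```latex
\varphi_j(\lambda)=\frac{q_j}{2}\,\lambda_j^2-\langle a^{(j)},\lambda\rangle
+\int_{\mathbb{R}_+^d}\left(e^{-\langle\lambda,x\rangle}-1
+\langle\lambda,x\rangle\,\mathbf{1}_{\{|x|\leq 1\}}\right)\pi_j(\mathrm{d}x),
\qquad \lambda\in\mathbb{R}_+^d .
```

The sign convention matches the example below: a pure drift a^{(j)} contributes −⟨a^{(j)}, λ⟩, and one checks directly that ϕ_j is convex with ϕ_j(0) = 0.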
Let us now define the multivariate stochastic field X_t = X^{(1)}_{t_1} + X^{(2)}_{t_2} + · · · + X^{(d)}_{t_d}, for t = (t_1, . . . , t_d) ∈ R^d_+.
Then X := {X_t, t ∈ R^d_+} is a particular case of an additive Lévy field in the sense of [11]. Its law is characterized by the Laplace exponent ϕ := (ϕ_1, . . . , ϕ_d), that is, E[exp(−⟨λ, X_t⟩)] = exp(Σ_{j=1}^d t_j ϕ_j(λ)), λ ∈ R^d_+. Such an additive Lévy field will be called a spectrally positive additive Lévy field (spaLf). This terminology is justified by the results of this section, which extend fluctuation theory for spectrally positive Lévy processes. Let us also introduce the field of essentially nonnegative matrices X_t = (X^{i,j}_{t_j})_{i,j∈[d]}, t ∈ R^d_+. Note that the spaLf X can be defined as X_t = X_t · 1, where 1 = ^t(1, 1, . . . , 1). Moreover, we emphasize that the spaLf X carries the same information as the field of essentially nonnegative matrices {X_t, t ∈ R^d_+}. For this reason, the terminology 'spaLf' will refer indifferently to X or to X.
Example. Let us give an example of a 2-dimensional spaLf. Assume that, for j ∈ [2], the X^{j,j}'s are independent Brownian motions B^{(j)} with drifts a_j ∈ R, that is X^{j,j}_t = B^{(j)}_t + a_j t, and that for i ≠ j, X^{i,j} is a pure drift, that is X^{i,j}_t = a_{ij} t, a_{ij} ≥ 0. Then the spaLf is written as X_t = (B^{(1)}_{t_1} + a_1 t_1 + a_{12} t_2, a_{21} t_1 + B^{(2)}_{t_2} + a_2 t_2), t = (t_1, t_2) ∈ R²_+, and the Laplace exponents ϕ_j, as well as the associated field of essentially nonnegative matrices, can be computed explicitly. Now let us define the first hitting times of negative levels of the spaLf X. Let r = (r_1, . . . , r_d) ∈ R^d_+. Since X ∈ E_d a.s., according to Lemma 2.3 there is almost surely a smallest solution to the system X_t = −r. We will denote by T_r = (T^{(1)}_r, . . . , T^{(d)}_r) this solution. (3.3)
Then T_r will be referred to as the (multivariate) first hitting time of level −r by the spaLf {X_t, t ∈ R^d_+}. Note that, according to Lemma 2.3, some of the coordinates of T_r can be infinite.
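For the two-dimensional Brownian example above, the lost displays can be recovered from the definition of ϕ_j (a reconstruction assuming B^{(j)} has Gaussian coefficient q_j, in the notation of (3.1)):

```latex
\varphi_1(\lambda_1,\lambda_2)=\frac{q_1}{2}\,\lambda_1^2-a_1\lambda_1-a_{21}\lambda_2\,,
\qquad
\varphi_2(\lambda_1,\lambda_2)=\frac{q_2}{2}\,\lambda_2^2-a_2\lambda_2-a_{12}\lambda_1\,,
\qquad
\mathbb{X}_t=\begin{pmatrix}
B^{(1)}_{t_1}+a_1t_1 & a_{12}\,t_2\\[2pt]
a_{21}\,t_1 & B^{(2)}_{t_2}+a_2t_2
\end{pmatrix}.
```

Each ϕ_j follows by computing E[exp(−⟨λ, X^{(j)}_t⟩)] directly from the Gaussian and drift parts.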
Proposition 3.1. Let X be a spaLf and, for r ∈ R^d_+, let T_r be its first hitting time of level −r as defined above. Then the field {T_{r′+r} − T_{r′}, r ∈ R^d_+} has the same law as the field {T_r, r ∈ R^d_+}, and it is independent of the field {T_s, s ≤ r′}. In particular, for all r, r′ ∈ R^d_+, T_{r+r′} = T_r + T̃_{r′}, (3.4) where T̃_{r′} is an independent copy of T_{r′}.
Proof. The first assertion is a consequence of quasi-left continuity for Lévy processes.
Indeed, let us denote by (F^{(j)}_t)_{t≥0} the natural filtration generated by X^{(j)}. Then for all t_j ≥ 0, the corresponding set belongs to the sigma-field G^{(j)}, which clearly implies (3.4).
In order to prove 2., it suffices to see that, conditionally on the relevant event, T_r decomposes along r^{(1)} + · · · + r^{(p)} = r, where the T^{(i)}'s are independent copies of T. Now let us prove the second part of this assertion. Let r ∈ (0, ∞)^d be such that P(T_r ∈ R^d_+) > 0 and let λ ∈ R^d_+; then (3.5) gives a multiplicative equation in r. Since f is right continuous in r, this equation implies that f(λ, r) = e^{−⟨r, φ(λ)⟩}, for some φ(λ) ∈ R^d. Furthermore, take r = re_i for some r > 0 and i ∈ [d], so that E[e^{−⟨λ, T_r⟩}] = e^{−rφ_i(λ)}. Then, from right continuity, T_r > 0 almost surely, so that f(λ, r) < 1 for all λ ∈ (0, ∞)^d, and thus φ_i(λ) ∈ (0, ∞). On the other hand, it is plain from (3.6) that the φ_j's are concave functions for all j ∈ [d] and that φ is differentiable.
This is due to the fact that the X^{i,j} are subordinators for i ≠ j; therefore either X^{i,j} ≡ 0 a.s. or X^{i,j}_∞ = ∞ a.s.
Let us emphasize the following direct consequence of Proposition 3.1: in particular, P(T_r ∈ R^d_+) = 1 for all r ∈ (0, ∞)^d if and only if φ(0) = 0. Note also that Proposition 3.1 does not provide a full description of the law of the d-dimensional stochastic field {T_r, r ∈ R^d_+}. This is the case only when d = 1. In particular, for d ≥ 2, if r and r′ are not ordered, then we do not know the joint law of (T_r, T_{r′}). Moreover, looking at part 2. of Proposition 3.1, one may be tempted to think that, when d ≥ 2, the field {T_r, r ∈ R^d_+} is a spaLf, but this is actually not the case. Indeed, from the construction of this field, the processes {T_{re_i}, r ≥ 0}, i ∈ [d], are clearly not independent. However, it is easy to derive from Proposition 3.1 that each of these processes is a multivariate subordinator whose Laplace exponent is φ_i. The following result, proved in [4] for d = 1, provides an expression of its Lévy measure. Since it is a consequence of further results (e.g. Theorem 4.3), it will be proved at the end of this paper.
For each i ∈ [d], the process {T_{re_i}, r ≥ 0} is a multivariate subordinator whose Laplace exponent is φ_i, given in (3.6). Assume moreover that for all j ∈ [d] and t_j > 0, the j-th column X^{(j)}_{t_j} of the matrix X_t admits a continuous density. Then the Lévy measure of the multivariate subordinator {T_{re_i}, r ≥ 0} is given explicitly in terms of this density and of the matrix X_t in which the row and column of index i have been removed.

Inverting the Laplace transform of spaLf's
We will now define a d-dimensional Lévy process whose law is obtained from the law of X^{(j)} through the Esscher transform associated to its exponential martingale, where (F^{(j)}_t)_{t≥0} denotes the natural filtration generated by X^{(j)}. For t ≥ 0 and A ∈ F^{(j)}_t, the law of this new Lévy process is defined by an absolute continuity relationship on F^{(j)}_t. Let us now consider d independent Lévy processes X^{µ^{(j)},(j)}, j ∈ [d], with respective laws P^{µ^{(j)}}, and let ϕ^{µ^{(j)}}_j denote the Laplace exponent of X^{µ^{(j)},(j)}. Moreover, a new spaLf X^µ is obtained from these processes as in (3.8), and the law of the spaLf X^µ is given by (3.9), where we set φ̂(µ) = (ϕ_1(µ^{(1)}), . . . , ϕ_d(µ^{(d)})). We will refer to (3.9) as the Esscher transform of the additive field X.
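The displays defining the Esscher transform were lost here; under the normalization E[exp(−⟨λ, X^{(j)}_t⟩)] = exp(t ϕ_j(λ)) used in this paper, the classical formulas read (a sketch of the standard transform, not a quotation of the lost display):

```latex
\left.\frac{\mathrm{d}\,\mathbb{P}^{\mu^{(j)}}}{\mathrm{d}\,\mathbb{P}}\right|_{\mathcal{F}^{(j)}_t}
=e^{-\langle\mu^{(j)},\,X^{(j)}_t\rangle-t\,\varphi_j(\mu^{(j)})},
\qquad
\varphi^{\mu^{(j)}}_j(\lambda)=\varphi_j\big(\lambda+\mu^{(j)}\big)-\varphi_j\big(\mu^{(j)}\big).
```

The density has expectation 1 precisely because of the chosen normalization, and the second identity follows by computing E^{µ^{(j)}}[e^{−⟨λ, X^{(j)}_t⟩}] directly.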
The Laplace exponent of X^µ is then obtained columnwise. Let us denote by J_ϕ(λ), λ ∈ (0, +∞)^d, the transpose of the negative of the Jacobian matrix of ϕ, that is, J_ϕ(λ)_{i,j} = −∂ϕ_j(λ)/∂λ_i. (3.10) Recall that, since all processes X^{i,j}, i, j ∈ [d], are spectrally positive Lévy processes, their expectation is always defined, and E[X^{i,j}_1] = J_ϕ(0)_{i,j} ∈ (−∞, +∞]. (3.11) Then let us consider the following hypothesis: (H) the set D := {λ ∈ (0, ∞)^d : ϕ_j(λ) > 0 for all j ∈ [d]} is non-empty. This hypothesis implies in particular that none of the processes X^{j,j}, j ∈ [d], is a subordinator, but it is actually stronger, as we will see later on.

2. Suppose that (H) holds. Then φ(λ) ∈ D for all λ ∈ (0, ∞)^d. Moreover, the mapping φ : (0, ∞)^d → D is a diffeomorphism whose inverse is the mapping ϕ : D → (0, ∞)^d.

Proof. Assume that (H) holds, let µ ∈ D, and consider the spaLf X^µ whose law is defined in (3.9). In the present case, µ also denotes the matrix each of whose columns is equal to µ. Then, as already observed, µ ∈ (0, ∞)^d, so that all the random variables X^{µ,i,j}_1 are integrable, and the mean matrix of X^µ is (E[X^{µ,i,j}_1])_{i,j∈[d]} = J_ϕ(µ), the transpose of the negative of the Jacobian matrix of ϕ defined in (3.10). Note that J_ϕ(µ) is an essentially nonnegative matrix, so that from Lemma A.2 in [2] there is a real eigenvalue ρ_µ such that Re(ρ) < ρ_µ for all the other eigenvalues ρ. Moreover, since ϕ_j is a differentiable convex function and ϕ_j(0) = 0, it follows from Theorem 3 of [1] that ^tJ_ϕ(µ), and therefore J_ϕ(µ), is a stable matrix in the sense of [1]. In particular, ρ_µ < 0.
From the law of large numbers for Lévy processes, we obtain the almost sure direction of drift of X^µ. Therefore, from part 3. of Lemma 2.3, {X^µ_t, t ∈ R^d_+} reaches each level αv_µ, with α < 0, almost surely. Then, from the definition (3.9) of the law of X^µ, the field {X_t, t ∈ R^d_+} reaches each level αv_µ, α < 0, with positive probability, and since v_µ ∈ (0, ∞)^d, the same conclusion extends to all negative levels. Now let us assume that J_ϕ(µ) is not irreducible, that is, there exist a permutation matrix P_σ and three matrices A_1, A_2 and B such that A_1 is of size 1 ≤ p ≤ d − 1 and the conjugated matrix is block triangular. Therefore we can decompose the system accordingly for all r ∈ R^d_+. Let T^{µ,I}_r be the smallest solution of the system (r_I, X^{µ,I}), where we set r_I = (r_i)_{i∈I} and X^{µ,I} = (X^{µ,i,j})_{i,j∈I}. Then, conditioning on the event {T^{µ,I}_r ∈ R^p_+}, we reduce the problem to the smallest solution of the system (r′, X^{µ,J}) with X^{µ,J} = (X^{µ,i,j})_{i,j∈J}. Thus, if A_1 and A_2 are irreducible, then we derive the conclusion from the previous case. On the other hand, if A_1 and/or A_2 are not irreducible, then we can repeat this argument.
Conversely, let us assume that the event {T_r ∈ R^d_+} has positive probability for all r ∈ R^d_+.
Recall from part 3 of Proposition 3.1 the definition of the function φ. Let us show that for all λ ∈ (0, ∞)^d, ϕ(φ(λ)) = λ, which implies in particular that φ(λ) ∈ D. It follows from the independence and stationarity of the increments of the field, where C_r is the union of all the sets E_1 × · · · × E_d with at least one i ∈ [d] such that E_i = ] − r_i, +∞[ and, for the other indices j ∈ [d], E_j = R. From this we derive the corresponding identity. Let r′, r′′ ∈ (0, ∞)^d be such that r′ + r′′ = r; then, from Proposition 3.1, T_r can be decomposed as T_r = T_{r′} + T̃_{r′′}, where T̃_{r′′} is an independent copy of T_{r′′}. If the coordinates of r are integers, then applying this identity recursively, we obtain (3.12). Then we can find t whose coordinates are sufficiently small so that (3.13) holds for all j; we derive that the left member of (3.12) tends to 1, while the right member tends to e^{−⟨λ,t⟩} e^{⟨ϕ(φ(λ)),t⟩}, which shows that ϕ(φ(λ)) = λ. This is true for all λ ∈ (0, ∞)^d, and hence D is not empty. This completes the proof of both assertions 1. and 2.
From part 1. of Theorem 3.3, assuming (H) for a spaLf X ensures that X hits all negative levels in finite time with positive probability. When d = 1, this simply amounts to assuming that the spectrally positive Lévy process under consideration is not a subordinator.
Example. Let us go back to our 2-dimensional example. Assume that q_j > 0, j ∈ [2], where q_j is defined in (3.1). After some calculations, we obtain an explicit form of the set D defined in hypothesis (H). This set is not empty, and so assumption (H) holds. In particular, thanks to Theorem 3.3, the spaLf X reaches every level −r ∈ R²_− with positive probability, and according to the second part of this theorem, the mapping ϕ admits an inverse φ = (φ_1, φ_2) on the set D, which can be computed explicitly. Moreover, φ is the Laplace exponent of the field of first hitting times of negative levels by X, defined for all r = (r_1, r_2) ∈ R²_+ by the system B^{(1)}_{t_1} + a_1 t_1 + a_{12} t_2 = −r_1, a_{21} t_1 + B^{(2)}_{t_2} + a_2 t_2 = −r_2.
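The diffeomorphism property of Theorem 3.3 is easy to check numerically in this example. Below is a sketch with parameter values of our choosing, taking q_1 = q_2 = 1, so that ϕ_j(λ) = λ_j²/2 − a_jλ_j − a_{ij}λ_i follows from the definition of ϕ_j; the function names are ours:

```python
import numpy as np

# illustrative parameters: negative drifts, non-negative interactions
a1, a2, a12, a21 = -1.0, -1.0, 0.5, 0.5

def phi_map(lam):
    """Laplace exponent (phi_1, phi_2) of the 2-d Brownian example, q_j = 1."""
    l1, l2 = lam
    return np.array([0.5 * l1**2 - a1 * l1 - a21 * l2,
                     0.5 * l2**2 - a2 * l2 - a12 * l1])

def jacobian(lam):
    l1, l2 = lam
    return np.array([[l1 - a1, -a21],
                     [-a12, l2 - a2]])

def invert(target, lam0=(1.0, 1.0), tol=1e-12):
    """Newton iteration solving phi_map(lam) = target, i.e. computing
    the value of the inverse map phi^{-1} = (phi_1, phi_2)^{-1} at target."""
    lam = np.array(lam0, dtype=float)
    for _ in range(100):
        f = phi_map(lam) - np.asarray(target, dtype=float)
        if np.max(np.abs(f)) < tol:
            break
        lam -= np.linalg.solve(jacobian(lam), f)
    return lam

lam_star = invert([2.0, 1.0])
print(np.allclose(phi_map(lam_star), [2.0, 1.0]))  # True: round trip holds
```

The computed inverse lands in (0, ∞)², in agreement with the statement that φ maps (0, ∞)^d into D.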

Asymptotic behaviour of spaLf's
In order to carry on with the general study of the fluctuations of the spaLf X, we shall now give a characterization of the condition φ(0) = 0 in terms of the Jacobian matrix J_ϕ(0). As a first remark, note that if J_ϕ(0)_{j,j} > 0 for some j ∈ [d], then lim_{t→+∞} X^{j,j}_t = +∞ a.s., and hence the field {X_t, t ∈ R^d_+} cannot reach all the levels −r ∈ R^d_− with probability one. Therefore, by Proposition 3.1, φ(0) > 0 whenever there is j such that J_ϕ(0)_{j,j} > 0.
Recall that whenever the essentially nonnegative matrices J_ϕ(λ), defined in (3.10) and (3.11) for λ ∈ [0, ∞)^d, have finite entries and are irreducible, then according to Perron-Frobenius theory there is a real eigenvalue ρ_λ, of multiplicity 1, such that the real part of any other eigenvalue is less than ρ_λ; see Appendix A of [2]. We set ρ := ρ_0. Consider, for j ∈ [d], the function f_j : a ∈ R → ϕ_j(µ + au). Let us first note that since ϕ_j is convex, so is f_j. Furthermore, for all j ∈ [d], we have f_j(0) = ϕ_j(µ) = 0 = ϕ_j(0) = f_j(−||µ||). On the one hand, if there exists j ∈ [d] such that µ_j = 0, then for all a ∈ R, µ_j + au_j = 0, so that f_j(a) = ϕ_j(µ + au) ≤ 0. Since 0 and −||µ|| < 0 are zeros of the real convex function f_j, it follows that f_j is constant equal to 0. In other words, for all t ≥ 0,
EJP 25 (2020), paper 161.
Therefore, {X_t, t ∈ R^d_+} reaches a.s. all the levels αu, α < 0, and from Proposition 3.1 it reaches all the levels −r ∈ R^d_− a.s. We conclude from (3.7) that φ(0) = 0. Assume that ρ = 0 and let u = (u_1, . . . , u_d) be a right eigenvector corresponding to ρ; then, from the law of large numbers, one obtains a real Lévy process whose behaviour at infinity can be controlled. It implies that for every direction v ∈ R^d_+, the Lévy process ⟨λ, X_{tv}⟩ tends to ∞ in probability (and hence almost surely) as t → ∞. In particular, for v = u, there exists i ∈ [d] such that Y^i_t tends to ∞ almost surely as t → ∞, which is a contradiction. In conclusion, φ(0) = 0.
Assuming (H), we will say that the additive Lévy field (X_t, t ∈ R^d_+) drifts to −∞, oscillates or drifts to +∞ according as ρ < 0, ρ = 0 or ρ > 0. Example. In our example, we already have the explicit form of ϕ, the set D and the inverse φ. Let us now find the solutions of the equation ϕ(λ) = 0, λ ∈ R²_+. Assume that J_ϕ(0) is irreducible, that is, a_{ij} > 0 for all i ≠ j. Then the solutions of the equation ϕ(λ) = 0, λ ∈ R²_+, are 0 = (0, 0) and points of a second kind. It is easy to check that there is only one solution of the second kind; it belongs to (0, +∞)² or it is equal to 0. According to the expression of φ, φ(0) is this solution. We can show that φ(0) = 0 if and only if a_1 < 0, a_2 < 0 and a_1 a_2 ≥ a_{12} a_{21}. Furthermore, we can compute the Perron-Frobenius eigenvalue ρ of the Jacobian J_ϕ(0). It has the form ρ = (a_1 + a_2 + √((a_1 − a_2)² + 4 a_{12} a_{21}))/2.
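The closed form of ρ can be checked against a direct eigenvalue computation (a numerical sketch with illustrative parameter values of our choosing; in this example J_ϕ(0) is the mean matrix, with diagonal a_1, a_2 and off-diagonal a_{12}, a_{21}):

```python
import numpy as np

# illustrative parameters satisfying a_1 < 0, a_2 < 0, a_1 a_2 >= a_12 a_21
a1, a2, a12, a21 = -1.0, -2.0, 0.5, 0.3
J = np.array([[a1, a12],
              [a21, a2]])

# closed form of the Perron-Frobenius eigenvalue of the 2x2 Metzler matrix J
rho_formula = (a1 + a2 + np.sqrt((a1 - a2) ** 2 + 4 * a12 * a21)) / 2
rho_eig = max(np.linalg.eigvals(J).real)  # largest real part among eigenvalues

print(np.isclose(rho_formula, rho_eig))  # True
```

With these parameters ρ < 0, so the field drifts to −∞ in the terminology above.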
Then it is easy to see that ρ ≤ 0 if and only if a_1 < 0, a_2 < 0 and a_1 a_2 ≥ a_{12} a_{21}. In conclusion, we find that φ(0) = 0 ⇔ ρ ≤ 0. Note that if J_ϕ(0) is reducible, then at least one of the a_{ij}, i ≠ j, is equal to zero. Then ϕ has at most four zeros, which are explicit, whenever they belong to R²_+. Remark 3.5. By carefully reading the proof of Theorem 3.4, it appears that we have proved slightly more than what is in the statement.
Indeed, in part 1. we have proved that if there exists a solution to the equation ϕ(λ) = 0 in (0, +∞)^d, then it is unique and equal to φ(0). This is stated when J_ϕ(0) is irreducible, but we can see from the proof that it is also true when J_ϕ(0) is reducible. Let us also notice that in the reducible case, there may exist solutions λ ∈ R^d_+ \ {0} with λ_j = 0 for some j ∈ [d], as the above example shows. Moreover, it can be derived from the arguments in the proof of part 2. that when φ(0) > 0, for each direction v ∈ R^d_+, almost surely at least one coordinate of the field X goes to +∞.

On the distribution of the field (T r , X Tr )
Let us recall the definition of the matrix-valued field X = {X_t, t ∈ R^d_+} given at the beginning of Section 3. As already noticed, this field carries the same information as the spaLf X. However, whereas the vector X_{T_r} is deterministic on the set {T_r ∈ R^d_+} (it is actually equal to −r), the matrix X_{T_r} is random whenever d ≥ 2. From another point of view, the fact that the field r → (T_r, X_{T_r}) has independent and stationary increments (see the next theorem) induces an analogy with fluctuation theory in dimension 1. More specifically, this bivariate field can be considered as the analogue of the scale process describing the fluctuations of a one-dimensional Lévy process at its infimum. The aim of this section is to characterize the law of the field r → (T_r, X_{T_r}), first through its Laplace exponent and then through a Kemperman-type identity relating its law to that of the field X.

Characterization through the Laplace transform
Recall that we denote by µ^{(j)} the j-th column of the matrix µ = (µ^{i,j})_{i,j∈[d]}. Then, given a spaLf X, we define the set M_ϕ of admissible pairs (λ, µ), on which the Laplace exponent of (T_r, X_{T_r}) is explicitly determined. Proof. Let us first note that the random field {M_t, t ∈ R^d_+} is a multiparameter martingale in the sense of [12]. Fix r = (r_1, . . . , r_d) ∈ R^d_+ and define the sequence of multivariate random times T_{n,r} = (T^{(1)}_{n,r}, . . . , T^{(d)}_{n,r}), n ≥ 1. Then T_r and T_{n,r}, n ≥ 1, are stopping times of the filtration (F_t)_{t∈R^d_+} in the sense of [12].
Moreover, for each i ∈ [d], the sequence (T^{(i)}_{n,r})_{n≥1} converges to T^{(i)}_r, and each T_{n,r} is a stopping time (see for instance the proof of Lemma (2.3) in [12]). Then, by Fatou's lemma and the right continuity of {M_t, t ∈ R^d_+}, we obtain, as n tends to ∞, E[M_{T^{(u)}_r}] ≤ 1. By applying Fatou's lemma again, we obtain, as each coordinate of u tends to ∞, that E[M_{T_r} 1_{{T_r∈R^d_+}}] ≤ 1. It implies that for all (λ, µ) ∈ M_ϕ, E[e^{−⟨λ,T_r⟩−⟨µ,X_{T_r}⟩} 1_{{T_r∈R^d_+}}] ≤ 1. Then we prove, in the same way as for (3.5) in Proposition 3.1, a corresponding identity for all r, r′ ∈ R^d_+, where X̃ is an independent copy of X and T̃ is its first hitting time process. Recall that under assumption (H), P(T_r ∈ R^d_+) > 0 for all r ∈ R^d_+. The existence of the mapping Φ follows by using (4.3), in the same way as for the existence of the mapping φ in part 3. of Proposition 3.1. (Note that in particular Φ(λ, 0) = φ(λ), λ ∈ R^d_+.) Then it is readily seen that (T_r, X_{T_r}) = (r, X_r) + (T̃_{r+X_r}, X̃_{T̃_{r+X_r}}) a.s. on {T_r ∈ R^d_+}, (4.4) where X̃_t = X_{r+t} − X_r and T̃_k = inf{t ≥ 0 : X̃_t = −k}. Since X is a spaLf, for all t ∈ R^d_+, X̃_t has the same law as X_t and is independent of {X_s : s ≤ r}. Thus, conditionally on {T_r ∈ R^d_+}, T̃_{r+X_r} and X̃_{T̃_{r+X_r}} are independent of X_r. Let (λ, µ) ∈ M_ϕ; then, using (4.4) and conditioning on the columns x^{1,j}, . . . , x^{d,j}, we obtain an equality which can also be written in terms of Φ̃(λ, µ), the matrix all of whose columns are equal to Φ(λ, µ). Thanks to the independence of the X^{(j)}'s, the latter equality reduces to (4.1). As a consequence, the Laplace exponent Φ of (T_r, X_{T_r}) satisfies (4.1). Now recall the definition of the Esscher transform X^{µ^{(j)},(j)} of each X^{(j)} given above. From these Esscher transforms we defined, see (3.8), the spaLf X^µ. Then, under assumption (H), from part 1. of Theorem 3.3 and from the absolute continuity relationship (3.9) between X and X^µ, the set D^µ is not empty.
Moreover, thanks to Theorem 3.3, the Laplace exponent ϕ^µ admits an inverse φ^µ. Thus the Laplace exponent Φ of the couple (T_r, X_{T_r}) exists and is given, for all (λ, µ) ∈ M_ϕ such that λ_j > ϕ_j(µ^{(j)}), j ∈ [d], by Φ(λ, µ) = φ^µ(λ_1 − ϕ_1(µ^{(1)}), . . . , λ_d − ϕ_d(µ^{(d)})). Finally, this relation is extended to the whole set M_ϕ by continuity.
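In dimension one, where X_{T_r} = −r on {T_r < ∞}, the content of (4.1) can be checked by hand, using the identities E[e^{−λX_t}] = e^{tϕ(λ)}, E[e^{−λT_r}] = e^{−rφ(λ)} and ϕ(φ(λ)) = λ recalled in Section 3 (a sketch; the d-dimensional display of (4.1) is not reproduced here):

```latex
\mathbb{E}\big[e^{-\lambda T_r-\mu X_{T_r}}\mathbf{1}_{\{T_r<\infty\}}\big]
=e^{\mu r}\,\mathbb{E}\big[e^{-\lambda T_r}\mathbf{1}_{\{T_r<\infty\}}\big]
=e^{-r(\phi(\lambda)-\mu)},
\qquad\text{so that}\qquad
\Phi(\lambda,\mu)=\phi(\lambda)-\mu
\quad\text{and}\quad
\varphi\big(\mu+\Phi(\lambda,\mu)\big)=\varphi(\phi(\lambda))=\lambda .
```

This is the classical fluctuation identity for the ladder process that, as announced in the abstract, the functional equation extends to higher dimensions.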

An explicit form of the distribution
Let us define the set of matrices x ∈ M_d(R) such that x is essentially nonnegative and x · 1 ≤ 0, endowed with some matrix norm ∥ · ∥ and equipped with its Borel σ-field. From Theorem 4.1, the measure P(T_r ∈ dt, X_{T_r} ∈ dx) dr is well defined on the corresponding product space. The following result shows that this measure can be expressed only in terms of the law of the spaLf.
The measure P(T_r ∈ dt, X_{T_r} ∈ dx) dr is the image of the measure (det(−x)/(t_1 · · · t_d)) P(X_t ∈ dx) dt through the mapping (t, x) → (t, x, −x · 1).
When d = 1, the above identity reads P(T_r ∈ dt) dr = (r/t) P(X_t ∈ −dr) dt and is known as Kemperman's identity for spectrally positive Lévy processes. It can be found in [3], see Proposition VII.2. We shall prove Theorem 4.3 through a discrete approximation. As a first step, we need to recall the discrete time and space counterparts of spaLf's. These are matrix-valued fields of the form {S_n = (S^{i,j}_{n_j})_{i,j∈[d]}, n ∈ Z^d_+}, where the ^t(S^{1,j}, . . . , S^{d,j}), j ∈ [d], are independent random walks. Moreover, all coordinates S^{i,j} start from 0 and take their values in k^{−1}Z, where k ≥ 1 is some integer which will be fixed until mentioned otherwise. For i ≠ j they are non-decreasing, and for i = j they are downward skip-free, that is, S^{i,i}_n − S^{i,i}_{n−1} ≥ −k^{−1} for all n ≥ 1. This setting is introduced in [7] (for k = 1 and up to transposition of the matrix S). As in the continuous case, we define the field S := S · 1 and its first hitting time process; see Lemma 2.2 in [7]. The field S (or equivalently S) will be called a downward skip-free random field (dsfrf for short). An essential result for the proof of Theorem 4.3 is the following extension of the ballot theorem: P(T^S_r = n, S_n = x) = (k^d det(−x)/(n_1 · · · n_d)) P(S_n = x), (4.9) for all n ∈ N^d and all essentially nonnegative matrices x of M_d(k^{−1}Z) such that x · 1 = −r.
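The extension (4.9) of the ballot theorem can be verified exactly in the case d = 1, k = 1, where it reduces to P(T_r = n) = (r/n) P(S_n = −r), by exhaustive enumeration of paths (a sketch with a downward skip-free step law of our choosing):

```python
from itertools import product, accumulate
from math import isclose, prod

# a downward skip-free step law (all steps >= -1); probabilities are our choice
steps = {-1: 0.5, 0: 0.2, 1: 0.3}

def check_ballot(r, n):
    """Return (P(T_r = n), (r/n) P(S_n = -r)) by enumerating all step
    sequences of length n; T_r is the first time the walk S hits -r
    (skip-free downward, so hitting -r and crossing it coincide)."""
    p_first_hit = 0.0   # P(T_r = n)
    p_endpoint = 0.0    # P(S_n = -r)
    for path in product(steps, repeat=n):
        p = prod(steps[u] for u in path)
        s = list(accumulate(path))
        if s[-1] == -r:
            p_endpoint += p
            if all(v > -r for v in s[:-1]):  # -r not reached before time n
                p_first_hit += p
    return p_first_hit, (r / n) * p_endpoint

for r, n in [(1, 1), (1, 3), (2, 4), (2, 6)]:
    lhs, rhs = check_ballot(r, n)
    print(isclose(lhs, rhs))  # True each time
```

For instance, for r = 1 and n = 3 both sides equal 0.095, coming from the two admissible paths (0, 0, −1) and (1, −1, −1).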
The next step is to consider lattice valued spaLf's. Let us first define these processes.
Let X^{(1)}, . . . , X^{(d)} be a family of d independent d-dimensional Lévy processes such that, for i ≠ j, X^{i,j} is a non-decreasing k^{−1}Z-valued Lévy process, and, for each j ∈ [d], X^{j,j} is a k^{−1}Z-valued Lévy process such that for all t > 0, X^{j,j}_t − X^{j,j}_{t−} ≥ −k^{−1}. Then there exist a dsfrf S as defined above and d independent Poisson processes N^{(j)}, j ∈ [d], also independent of S, realizing each X^{(j)} as a time change of S^{(j)} by N^{(j)}, see (4.10). The random fields {X_t = (X^{i,j}_{t_j})_{i,j∈[d]}, t ∈ R^d_+} and X = X · 1 will be referred to as lattice valued spaLf's. The first hitting time process of X is defined as in the continuous case, and we can easily check that it is related to the first hitting time process of S through the identity (4.11). The following proposition is a direct consequence of (4.9). Although it can also be found in [6] for k = 1, we give a more direct proof here.
Proposition 4.4. Let {X_t, t ∈ R^d_+} be a lattice valued spaLf. Then, for fixed r ∈ k^{−1}Z^d_+, the joint law of (T_r, X_{T_r}) is given explicitly. Proof. Let r and x = (x^{i,j})_{i,j∈[d]} be as in the statement. Then the straightforward identity S_{T^S_r} = X_{T_r}, together with expressions (4.10) and (4.11), allows us to write the stated formula, which proves our result.
From now on, we will add k as a superscript to all objects referring to the discrete valued spaLf defined above. For instance, the latter will be denoted by X^{(k)} = (X^{i,j,k})_{i,j∈[d]} or X^{(k)}, where X^{(j),k} = ^t(X^{1,j,k}, . . . , X^{d,j,k}). It is clear that lattice valued spaLf's satisfy properties analogous to those of the spaLf's introduced in Section 3. In particular, the discrete-time field r → (T^{(k)}_r, X^{(k)}_{T^{(k)}_r}), r ∈ k^{−1}Z^d_+, has independent and stationary increments and can be treated in a very similar way to its continuous space counterpart involved in Theorem 4.1. That is why we will content ourselves with stating the next theorem, as well as some preliminary results, without giving any proof.
Recall the definition of the Laplace exponent $\varphi^{(k)}$ of $\mathbb{X}^{(k)}$. Then, as in Theorem 3.3, we can prove that the hypothesis $(H^{(k)})$, asserting that the corresponding set is non empty, is equivalent to the fact that ${\rm T}^{(k)}_{\rm r}\in\mathbb{R}_+^d$ holds with positive probability, for all ${\rm r}\in k^{-1}\mathbb{Z}_+^d$.
As in Theorem 3.3, the proof of this equivalence is based on the Esscher transform $X^{(k),\mu}$, for $\mu\in M_d(\mathbb{R}_+)$, whose Laplace exponent $\varphi^{(k),\mu}$ is given by (4.12). Let us also define the set $M^{(k)}_{\varphi}$. The following theorem (Theorem 4.5) is the analog of Theorem 4.1 for lattice valued spaLf's: the Laplace exponent of $({\rm T}^{(k)}_{\rm r},\mathbb{X}^{(k)}_{{\rm T}^{(k)}_{\rm r}})$ solves the functional equation (4.13), and it is explicitly determined by (4.14), where $\phi^{(k),\mu}$ is the inverse of the Laplace exponent $\varphi^{(k),\mu}$ of the Esscher transform $X^{(k),\mu}$ recalled in (4.12).
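For orientation only, here is the classical one-dimensional Esscher transform, of which the matrix-valued transform $X^{(k),\mu}$ above is an analogue; this is the standard identity under the sign convention $\mathbb{E}[e^{-\lambda X_t}]=e^{t\varphi(\lambda)}$, not the statement (4.12) itself.

```latex
% Classical one-dimensional Esscher transform (stated as a hedged
% analogue of the matrix-valued transform X^{(k),\mu} of the text):
\[
  \left.\frac{d\mathbb{P}^{\mu}}{d\mathbb{P}}\right|_{\mathcal{F}_t}
  = e^{-\mu X_t - t\varphi(\mu)},
  \qquad\text{so that}\qquad
  \varphi^{\mu}(\lambda) = \varphi(\lambda+\mu) - \varphi(\mu).
\]
```

Indeed, $\mathbb{E}^{\mu}[e^{-\lambda X_t}]=\mathbb{E}[e^{-(\lambda+\mu)X_t}]e^{-t\varphi(\mu)}=e^{t(\varphi(\lambda+\mu)-\varphi(\mu))}$, which is the one-dimensional shape of the tilted exponent.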
In order to end the proof of Theorem 4.3, we need to prove that any spaLf is the weak limit of a sequence of lattice valued spaLf's. The index k is now a variable that will be taken to infinity.
Lemma 4.6. Let $Y$ be a $d$-dimensional Lévy process all of whose coordinates are spectrally positive. Then there exists a sequence of $(k^{-1}\mathbb{Z})^d$-valued Lévy processes $Y^{(k)}$ which converges weakly, in Skorokhod's $J_1$ topology, toward $Y$. Moreover, the sequence $(Y^{(k)})$ can be chosen so that for each $k$, all coordinates of $Y^{(k)}$ take their values in the set $\{-k^{-1},0,k^{-1},2k^{-1},3k^{-1},\dots\}$. The proof of this lemma is deferred to the Appendix. We have now gathered all necessary ingredients for the proof of Theorem 4.3.
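The flavour of Lemma 4.6 can be illustrated in dimension one by a Monte Carlo sketch: a toy spectrally positive Lévy variable (negative drift plus positive compound Poisson jumps, an assumption made for illustration) is approximated on the grid $k^{-1}\mathbb{Z}$ while keeping all down-jumps of size exactly $k^{-1}$. This is not the construction used in the Appendix, only a numerical analogy.

```python
import random

random.seed(0)

def poisson(lam):
    # Sample a Poisson(lam) count from exponential interarrival times.
    n, t = 0, random.expovariate(lam)
    while t <= 1.0:
        n, t = n + 1, t + random.expovariate(lam)
    return n

def sample_Y1(a=1.0, rate=2.0):
    """Toy spectrally positive Lévy variable (illustration, not the paper's Y):
    Y_1 = -a + compound Poisson sum of Exp(1) jumps with intensity `rate`."""
    return -a + sum(random.expovariate(1.0) for _ in range(poisson(rate)))

def sample_Y1_lattice(k, a=1.0, rate=2.0):
    """Lattice version on k^{-1}Z: the drift -a*t is replaced by a Poisson
    stream of -k^{-1} down-steps with intensity a*k (so down-jumps have size
    exactly k^{-1}), and each positive jump is rounded down to the grid."""
    down = -poisson(a * k) / k
    up = sum(int(k * random.expovariate(1.0)) / k for _ in range(poisson(rate)))
    return down + up

N = 20000
m = sum(sample_Y1() for _ in range(N)) / N
mk = sum(sample_Y1_lattice(50) for _ in range(N)) / N
# both empirical means are close to E[Y_1] = -1 + 2 = 1,
# up to Monte Carlo error and an O(1/k) rounding bias
```

Rounding the positive jumps down to the grid introduces a bias of order $1/k$ per jump, which vanishes as $k\to\infty$, in line with the weak convergence asserted by the lemma.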
Proof of Theorem 4.3. Let $(\mathbb{X}^{(k)})_{k\geq1}$ be a sequence of lattice valued spaLf's such that each sequence of columns $(X^{(j),k})_{k\geq1}$, where $X^{(j),k}={}^t(X^{1,j,k},\dots,X^{d,j,k})$, converges weakly to $X^{(j)}$. The existence of such a sequence is ensured by Lemma 4.6. This convergence entails in particular the convergence (4.15) of the corresponding Laplace exponents. Since $(H)$ is satisfied, by continuity of the functions $\varphi_j$ and from (4.15), there is $k_0$ such that for all $k\geq k_0$, $(H^{(k)})$ is satisfied. Then let $k\geq k_0$ and let $M_{d,{\rm r}}(k^{-1}\mathbb{Z})$ be the set of essentially nonnegative matrices $x$ of $M_d(k^{-1}\mathbb{Z})$ such that $x\cdot{\bf 1}=-{\rm r}$. We derive from Theorem 4.5 that (4.16) holds for all $\alpha\in\mathbb{R}_+^d$ and $(\lambda,\mu)\in M^{(k)}_{\varphi}$, where ${\rm r}_k=k^{-1}(\lfloor kr_1\rfloor,\dots,\lfloor kr_d\rfloor)$ and $\lfloor x\rfloor$ denotes the lower integer part of $x$. Then, by taking $k$ to infinity in (4.16), we obtain from (4.6) that (4.17) holds for all $\alpha\in\mathbb{R}_+^d$ and $(\lambda,\mu)\in M_{\varphi}$ such that $\lambda_j>\varphi_j(\mu^{(j)})$ for all $j\in[d]$. On the other hand, let $\overline{M}_d(k^{-1}\mathbb{Z})$ be the set of essentially nonnegative matrices $x$ of $M_d(k^{-1}\mathbb{Z})$ such that $x\cdot{\bf 1}\leq0$. Then, as a direct consequence of Proposition 4.4, we obtain that for all $\alpha\in\mathbb{R}_+^d$ and $(\lambda,\mu)$,

Then it follows from the above calculation and from (4.17) that for all $\alpha\in\mathbb{R}_+^d$ and $(\lambda,\mu)\in M_{\varphi}$ such that $\lambda_j>\varphi_j(\mu^{(j)})$, $j\in[d]$,

Now, we derive from the weak convergence of $\mathbb{X}^{(k)}$ toward $\mathbb{X}$ that for all $\varepsilon>0$,

Then from Proposition 4.4,

which entails, from a trivial extension of (4.18), that

and therefore, by dominated convergence, expression (4.19) can be made arbitrarily small as $\varepsilon$ tends to 0. We have thus proved that identity (4.7) is valid for all $\alpha\in\mathbb{R}_+^d$ and $(\lambda,\mu)\in M_{\varphi}$ such that $\lambda_j>\varphi_j(\mu^{(j)})$, $j\in[d]$. Now let $(\lambda,\mu)\in M_{\varphi}$ be arbitrary and assume that $\lambda_i=\varphi_i(\mu^{(i)})$ for some $i\in[d]$. Then identity (4.7) remains valid if we replace $\lambda_i$ by $\lambda_i'=\lambda_i+\varepsilon_i$, for $\varepsilon_i>0$, and we obtain it for $(\lambda,\mu)$ by letting $\varepsilon_i$ go to 0 and applying monotone convergence.
Proof of Proposition 3.2. Assume first that $d>1$. Then taking $\mu=0$ in Theorem 4.3 gives:

Note that, from our assumptions, the density $p_t:M_d(\mathbb{R})\to\mathbb{R}$ of $\mathbb{X}_{\rm t}$ is continuous on the set of matrices whose columns belong to $F_1\times F_2\times\dots\times F_d$. Let $\overline{M}_d(\mathbb{R})$ be the set of essentially nonnegative matrices whose diagonal elements are non-positive. For $x=(x^{i,j})_{i,j\in[d]}$, the diagonal matrix $d$ is defined by $d^{i,i}=x^{i,i}$ and $d^{i,j}=0$ for $i\neq j$, and the matrix $\bar{x}=(\bar{x}^{i,j})_{i,j\in[d]}$ is such that $\bar{x}^{i,i}=-\sum_{j\neq i}x^{i,j}$ and $\bar{x}^{i,j}=x^{i,j}$ for $i\neq j$. Let $I_d$ be the identity matrix. Then

where $x^{\{i,i\}}$ is the matrix obtained from $x$ by deleting the row and the column $i$. From Exercise 1 in Chapter I of [3], the Lévy measure of the subordinator $({\rm T}_{re_i})_{r\geq0}$ is the vague limit of $\mathbb{P}({\rm T}_{\rm r}\in dt)/r$ as $r$ tends to 0, on sets of the form $\{|t|>a\}$, $a>0$. Hence the expression of the statement follows from the continuity of $p_t$. The expression for $d=1$ is obtained in the same way, by using the simpler form (4.8) of $\mathbb{P}({\rm T}_r\in dt)$ in this case.
We can prove by induction that $t\geq s^{(n)}$ for all $n\geq1$. Firstly, for (A.1) to be satisfied, we should have $t_i\geq\inf\{s:x^{i,i}(s-)=-r_i\}$ for all $i\in[d]_t$, hence $t\geq s^{(1)}$. Now assume that $t\geq s^{(n)}$. Then $[d]_t\subseteq[d]_{s^{(n)}}$ and, from (A.1), for each $i\in[d]_t$,

so that $t\geq s^{(n+1)}$, and the first assertion is proved.
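The monotone scheme $s^{(1)}\leq s^{(2)}\leq\dots$ converging to the smallest solution can be sketched in a discrete-time toy setting as follows; the path data and the function name are hypothetical, chosen only to illustrate the iteration.

```python
def smallest_solution(paths, r, horizon):
    """Monotone iteration toward the smallest multivariate first-passage
    index: paths[i][j][n] plays the role of x^{i,j} at time n, off-diagonal
    paths are non-decreasing and diagonal ones decrease by unit steps.
    Returns the smallest t with sum_j paths[i][j][t_j] = -r[i] for every i,
    or None if no solution appears before `horizon`."""
    d = len(r)
    s = [0] * d
    while True:
        s_new = []
        for i in range(d):
            # level the diagonal path must reach, given the current iterate
            target = -r[i] - sum(paths[i][j][s[j]] for j in range(d) if j != i)
            t = next((n for n in range(horizon) if paths[i][i][n] <= target), None)
            if t is None:
                return None  # the level is never reached within the horizon
            s_new.append(t)
        if s_new == s:
            return s  # fixed point: the smallest solution
        s = s_new

# deterministic toy paths for d = 2 (hypothetical data, for illustration)
x11 = [-n for n in range(11)]            # unit down-steps on the diagonal
x22 = [-n for n in range(11)]
x12 = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5]  # non-decreasing off-diagonal
x21 = [0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
paths = [[x11, x12], [x21, x22]]
print(smallest_solution(paths, [1, 1], 11))  # -> [2, 2]
```

Because the off-diagonal paths are non-decreasing, each iteration can only push the targets down and the iterates up, which is exactly the monotonicity used in the induction above.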
If $r'\leq r$, then one can easily prove by induction that, with obvious notation, $s'^{(n)}\leq s^{(n)}$ for all $n\geq1$, and the first part of assertion 2 follows. For the second part, set $s:=\lim x^{i,j}(u_j-)$.
Since $r'\geq r$, it follows from assertion 2 that the smallest solution $s'$ of the system $(r',x)$ is such that $s'\geq s$. But since $u$ is also a solution of $(r',x)$, assertion 1 implies $u\geq s'$, and the first assertion of 3 follows. The second assertion of 3 is a consequence of the first one: indeed, $u<s'$ implies that $u\geq s'$ is not satisfied. $x^{k,j}(s_j-)=-r_k$.

Proof of Lemma 4.6. Let us first assume that $Y$ has bounded variation. Then the characteristic exponent $\psi$ of $Y$ can be written as

where $a=(a_1,\dots,a_d)\in\mathbb{R}^d$ and the Lévy measure $\pi$ satisfies $\int_{(0,\infty)^d}(1\wedge|x|)\,\pi(dx)<\infty$.
Recall from Theorem 2.7 in [14], which can be extended to higher dimensions (see Section 5 of the same paper), that the weak convergence of the sequence of random variables $(Y^{(k)}_1)_{k\geq1}$ toward $Y_1$ implies the weak convergence of the sequence of processes $\{(Y^{(k)}_t)_{t\geq0},\ k\geq1\}$ toward $(Y_t)_{t\geq0}$ in Skorokhod's $J_1$ topology. Hence our result is proved in the case where $Y$ has bounded variation.
Let us now assume that $Y$ is any Lévy process as described in the statement and set $\Delta_s=Y_s-Y_{s-}$. Then it is well known that the sequence of processes $(Z^{(n)})_{n\geq1}$ converges weakly toward $Y$ as $n$ tends to $\infty$; see the proof of Theorem 1 of Chapter I in [3] and the above argument on weak convergence in Skorokhod's $J_1$ topology. Since, for each $n$, $Z^{(n)}$ is a Lévy process with bounded variation, none of whose coordinates has negative jumps, in application of what has just been proved, there is a sequence of $(k^{-1}\mathbb{Z})^d$-valued Lévy processes $Z^{(n,k)}$, $k\geq1$, which converges weakly in Skorokhod's $J_1$ topology toward $Z^{(n)}$. Moreover, for each $k$, all the coordinates of the process $Z^{(n,k)}$ take their values in the set $\{-k^{-1},0,k^{-1},2k^{-1},3k^{-1},\dots\}$. Then it suffices to set $Y^{(k)}:=Z^{(k,k)}$ in order to obtain the desired sequence.