First hitting time and place, monopoles and multipoles for pseudo-processes driven by the equation $\partial/\partial t = \pm\partial^N/\partial x^N$

Consider the high-order heat-type equation $\partial u/\partial t=\pm\partial^N u/\partial x^N$ for an integer $N>2$ and introduce the related Markov pseudo-process $(X(t))_{t\ge 0}$. In this paper, we study several functionals related to $(X(t))_{t\ge 0}$: the maximum $M(t)$ and minimum $m(t)$ up to time $t$; the hitting times $\tau_a^+$ and $\tau_a^-$ of the half lines $(a,+\infty)$ and $(-\infty,a)$ respectively. We provide explicit expressions for the distributions of the vectors $(X(t),M(t))$ and $(X(t),m(t))$, as well as those of the vectors $(\tau_a^+,X(\tau_a^+))$ and $(\tau_a^-,X(\tau_a^-))$.


Introduction
Let $N$ be an integer greater than 2 and consider the high-order heat-type equation
$$\frac{\partial u}{\partial t}=\kappa_N\,\frac{\partial^N u}{\partial x^N},\tag{1.1}$$
where $\kappa_N=(-1)^{1+N/2}$ if $N$ is even and $\kappa_N=\pm 1$ if $N$ is odd. Let $p(t;z)$ be the fundamental solution of Eq. (1.1) and put $p(t;x,y)=p(t;x-y)$.
The function $p$ is characterized by its Fourier transform:
$$\int_{-\infty}^{+\infty}e^{iu\xi}\,p(t;\xi)\,d\xi=e^{\kappa_N t(iu)^N}.\tag{1.2}$$
With Eq. (1.1) one associates a Markov pseudo-process $(X(t))_{t\ge 0}$ defined on the real line and governed by a signed measure $\mathbb{P}$, which is not a probability measure, according to the usual rules of ordinary stochastic processes: $\mathbb{P}_x\{X(t)\in dy\}=p(t;x,y)\,dy$ and, for $0=t_0<t_1<\cdots<t_n$ and $x_0=x$, $\mathbb{P}_x\{X(t_1)\in dx_1,\dots,X(t_n)\in dx_n\}=\prod_{i=1}^{n}p(t_i-t_{i-1};x_{i-1},x_i)\,dx_i$. Relation (1.2) reads, by means of the expectation associated with $\mathbb{P}$, $\mathbb{E}_x\big[e^{iuX(t)}\big]=e^{iux+\kappa_N t(iu)^N}$.
Such pseudo-processes have been considered by several authors, especially in the particular cases $N=3$ and $N=4$. The case $N=4$ is related to the biharmonic operator $\partial^4/\partial x^4$. Few results are known in the case $N>4$. Let us mention that for $N=2$ the pseudo-process considered here is a genuine stochastic process (i.e., driven by a genuine probability measure), namely the most well-known of all: Brownian motion.
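As a numerical aside (not part of the original analysis), the kernel can be computed directly from the Fourier characterization (1.2). The sketch below does this for $N=4$, where $\kappa_4=-1$ and the transform is $e^{-tu^4}$; it also exhibits the signed character of $p$:

```python
import numpy as np

def integrate(y, x):
    """Plain trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def kernel_p(t, xi_grid, u_max=6.0, n_u=4001):
    """Fundamental solution of du/dt = -d^4u/dx^4 (N = 4, kappa_4 = -1),
    obtained by inverting the Fourier transform (1.2):
        p(t; xi) = (1/pi) * int_0^infinity cos(u*xi) * exp(-t*u^4) du."""
    u = np.linspace(0.0, u_max, n_u)
    w = np.exp(-t * u**4)
    return np.array([integrate(np.cos(u * x) * w, u) / np.pi for x in xi_grid])

xi = np.linspace(-15.0, 15.0, 1501)
p = kernel_p(1.0, xi)
print(integrate(p, xi))   # total signed mass: close to 1
print(p.min())            # strictly negative: p is not a probability density
```

The negative values of the kernel are what makes $\mathbb{P}$ a signed measure rather than a probability.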
The following problems have been tackled:
• Analytical study of the sample paths of this pseudo-process: Hochberg [8] defined a stochastic integral (see also Motoo [14] in higher dimension) and proposed an Itô formula based on the correspondence $dx^4=dt$; he obtained a formula for the distribution of the maximum over $[0,t]$ in the case $N=4$, with an extension to the even-order case. Noteworthily, the sample paths do not seem to be continuous in the case $N=4$;
• Study of the sojourn time spent on the positive half-line up to time $t$, $T(t)=\mathrm{meas}\{s\in[0,t]:X(s)>0\}=\int_0^t \mathbb{1}_{\{X(s)>0\}}\,ds$: Krylov [11], Orsingher [20], Hochberg and Orsingher [9], Nikitin and Orsingher [16], and Lachal [12] explicitly obtained the distribution of $T(t)$ (with possible conditioning on the events $\{X(t)>0\}$, $\{X(t)=0\}$ or $\{X(t)<0\}$). The sojourn time is useful for defining local times related to the pseudo-process $X$, see Beghin and Orsingher [1];
• Study of the maximum and minimum functionals: Hochberg [8], Beghin et al. [2,3], and Lachal [12] explicitly derived the distributions of $M(t)$ and $m(t)$ (with possible conditioning on some values of $X(t)$);
• Study of the couple $(X(t),M(t))$: Beghin et al. [20] wrote out several formulas for the joint distribution of $X(t)$ and $M(t)$ in the cases $N=3$ and $N=4$;
• Study of the first time the pseudo-process $(X(t))_{t\ge 0}$ overshoots the level $a>0$, $\tau_a^+=\inf\{t\ge 0:X(t)>a\}$: Nishioka [17,18] and Nakajima and Sato [15] adopted a distributional approach (in the sense of Schwartz distributions) and explicitly obtained the joint distribution of $\tau_a^+$ and $X(\tau_a^+)$ (with possible drift) in the case $N=4$. The quantity $X(\tau_a^+)$ is the first hitting place of the half-line $[a,+\infty)$. Nishioka [19] then studied killing, reflecting and absorbing pseudo-processes;
• Study of the last time before becoming definitively negative up to time $t$, $O(t)=\sup\{s\in[0,t]:X(s)>0\}$: Lachal [12] derived the distribution of $O(t)$;
• Study of Eq. (1.1) in the case $N=4$ from other points of view: Funaki [6] and then Hochberg and Orsingher [10] exhibited relationships with compound processes, namely iterated Brownian motion, and Benachour et al. [4] provided other probabilistic interpretations. See also the references therein.
The aim of this paper is to study the first times the pseudo-process straddles a fixed level $a$ (equivalently, the first hitting times of the half-lines $(a,+\infty)$ and $(-\infty,a)$): $\tau_a^+=\inf\{t\ge 0:X(t)>a\}$ and $\tau_a^-=\inf\{t\ge 0:X(t)<a\}$, with the convention $\inf(\varnothing)=+\infty$. In the spirit of the method developed by Nishioka in the case $N=4$, we explicitly compute the joint "signed-distributions" (which we shall simply call "distributions" throughout the paper for short) of the vectors $(X(t),M(t))$ and $(X(t),m(t))$, from which we deduce those of the vectors $(\tau_a^+,X(\tau_a^+))$ and $(\tau_a^-,X(\tau_a^-))$. The method consists of several steps:
• Defining a step-process by sampling the pseudo-process $(X(t))_{t\ge 0}$ at the dyadic times $t_{n,k}=k/2^n$, $k\in\mathbb{N}$;
• Observing that the classical Spitzer identity holds for any signed measure, provided the total mass equals one, and then using this identity to derive the distribution of $(X(t_{n,k}),\max_{0\le j\le k}X(t_{n,j}))$ through its Laplace-Fourier transform, by means of that of $X(t_{n,k})^+$ where $x^+=\max(x,0)$;
• Expressing the time $\tau_a^+$ (for instance) related to the sampled process $(X(t_{n,k}))_{k\in\mathbb{N}}$ by means of $(X(t_{n,k}),\max_{0\le j\le k}X(t_{n,j}))$;
• Passing to the limit as $n\to+\infty$.
Meaningfully, we have obtained that the distributions of the hitting places $X(\tau_a^+)$ and $X(\tau_a^-)$ are linear combinations of the successive derivatives of the Dirac distribution $\delta_a$. In the case $N=4$, Nishioka [17] already found a linear combination of $\delta_a$ and $\delta'_a$ and called the corresponding parts "monopole" and "dipole" respectively, considering that an electric dipole with two opposite charges located at $a+\varepsilon$ and $a-\varepsilon$, at a distance tending to 0, may be viewed as a single pole with charge $\delta'_a$. In the general case, we shall speak of "multipoles".
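This limiting picture can be checked on any smooth test function: two opposite point charges at $a\pm\varepsilon$, scaled by $1/(2\varepsilon)$, act like the derivative charge $\delta'_a$ as $\varepsilon\to 0$, since $\langle\delta'_a,\varphi\rangle=-\varphi'(a)$. A minimal numerical sketch (function names are illustrative):

```python
import math

def two_charge_action(phi, a, eps):
    """Action on a smooth test function of two opposite charges at a-eps
    and a+eps, scaled by 1/(2*eps); as eps -> 0 it converges to
    <delta'_a, phi> = -phi'(a), i.e. the pair collapses to delta'_a."""
    return (phi(a - eps) - phi(a + eps)) / (2.0 * eps)

phi, a = math.sin, 0.7
limit = -math.cos(a)                     # -phi'(a)
for eps in (1e-1, 1e-2, 1e-3):
    print(eps, two_charge_action(phi, a, eps) - limit)
```

The error decreases like $\varepsilon^2$, as expected from a Taylor expansion of $\varphi$ around $a$.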
Nishioka [18] used precise estimates to carry out the rigorous analysis of the pseudo-process corresponding to the case $N=4$. The key fact underlying such estimates is that the integral of the density $p$ is absolutely convergent. Actually, this fact holds for any even integer $N$. When $N$ is odd, the integral of $p$ is not absolutely convergent and similar estimates cannot be obtained; this makes the study of $X$ much harder in that case. Nevertheless, we have found, formally at least, remarkable formulas which agree with those of Beghin et al. [2,3] in the case $N=3$; they obtained them by using a Feynman-Kac approach and solving differential equations. We also mention some similar differential equations for general $N$. So, we conjecture that our formulas hold for any odd integer $N\ge 3$. Perhaps a distributional definition of the pseudo-process $X$ (in the sense of Schwartz distributions, since the heat-kernel is locally integrable) might provide a proper justification confirming our results. We shall not tackle this question here.
The paper is organized as follows: in Section 2, we write down general notations and recall some known results. In Section 3, we construct the step-process deduced from $(X(t))_{t\ge 0}$ by sampling the latter at dyadic times. Section 4 is devoted to the distributions of the vectors $(X(t),M(t))$ and $(X(t),m(t))$, derived with the aid of the Spitzer identity. Section 5 deals with the distributions of the vectors $(\tau_a^+,X(\tau_a^+))$ and $(\tau_a^-,X(\tau_a^-))$, which can be expressed by means of those of $(X(t),M(t))$ and $(X(t),m(t))$. Each section ends with an illustration of the displayed results in the particular cases $N\in\{2,3,4\}$.
We finally mention that the most important results have been announced, without details, in a short Note [13].

Settings
The relation $\int_{-\infty}^{+\infty}p(t;\xi)\,d\xi=1$ holds for all integers $N$. Moreover, if $N$ is even, the integral is absolutely convergent (see [12]) and we put $\rho=\int_{-\infty}^{+\infty}|p(t;\xi)|\,d\xi$. Notice that $\rho$ does not depend on $t$ since $p(t;\xi)=t^{-1/N}p(1;\xi/t^{1/N})$. For odd integers $N$, the integral of $p$ is not absolutely convergent; in this case $\rho=+\infty$.
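Both facts can be illustrated numerically for $N=4$, assuming only the Fourier characterization $e^{-tu^4}$ of the kernel: the signed mass is 1, while $\rho=\int|p|>1$ because $p$ takes negative values. A hedged sketch:

```python
import numpy as np

def integrate(y, x):
    """Plain trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

u = np.linspace(0.0, 6.0, 4001)
w = np.exp(-u**4)                       # Fourier transform of p(1; .) for N = 4
xi = np.linspace(-15.0, 15.0, 1501)
p = np.array([integrate(np.cos(u * x) * w, u) / np.pi for x in xi])

mass = integrate(p, xi)                 # signed mass: = 1
rho = integrate(np.abs(p), xi)          # absolute mass: finite since N is even
print(mass, rho)                        # rho > 1 because p takes negative values
```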

$N$th roots of $\kappa_N$
We shall have to consider the $N$th roots of $\kappa_N$ ($\theta_l$ for $0\le l\le N-1$, say) and distinguish the indices $l$ such that $\Re\theta_l<0$ from those such that $\Re\theta_l>0$ (one never has $\Re\theta_l=0$). So, let us introduce the corresponding sets of indices $J$ and $K$. If $N=2p$, the numbers of elements of the sets $J$ and $K$ are $\#J=\#K=p$.
If $N=2p+1$, two cases must be considered according to the sign of $\kappa_N$. The numbers of elements of the sets $J$ and $K$ are $\#J=p$ and $\#K=p+1$ if $p$ is even, and $\#J=p+1$ and $\#K=p$ if $p$ is odd. Figure 1 illustrates the different cases.
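The splitting of the roots can be reproduced numerically. The sketch below uses the even-order convention $\kappa_N=(-1)^{1+N/2}$ and checks that no root is purely imaginary and that each set has $p=N/2$ elements:

```python
import cmath

def root_sets(N):
    """N-th roots theta_l of kappa_N (even-N convention kappa_N = (-1)^(1+N/2)),
    split according to the sign of their real part."""
    kappa = (-1) ** (1 + N // 2)
    phase = 0.0 if kappa == 1 else cmath.pi
    theta = [cmath.exp(1j * (phase + 2 * cmath.pi * l) / N) for l in range(N)]
    neg = [th for th in theta if th.real < 0]
    pos = [th for th in theta if th.real > 0]
    return theta, neg, pos

for N in (2, 4, 6, 8):
    theta, neg, pos = root_sets(N)
    assert all(abs(th.real) > 1e-12 for th in theta)  # never purely imaginary
    print(N, len(neg), len(pos))                      # both equal p = N/2
```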

Recalling some known results
We recall from [12] the expressions of the kernel $p(t;\xi)$ together with its Laplace transform (the so-called $\lambda$-potential of the pseudo-process $(X(t))_{t\ge 0}$), valid for $\lambda>0$. We also recall the identity established in the proof of Proposition 4 of [12], as well as the expressions of the distributions of $M(t)$ and $m(t)$ below.
• Concerning the densities, the following result will be used further: expanding into partial fractions yields, for any polynomial $P$ with $\deg P\le\#J$, formula (2.9).
• Applying (2.9) to $x=0$ and $P=1$ gives $\sum_{j\in J}A_j=\sum_{k\in K}B_k=1$. Actually, the $A_j$'s and $B_k$'s are solutions of a Vandermonde system (see [12]).
• Applying (2.9) to $x=\theta_k$, $k\in K$, and $P=1$ gives an identity which simplifies, by (2.8), into (2.10) (and similarly for the $B_k$'s).
• Applying (2.9) to $P=x^p$, $p\le\#J$, gives (2.11), by observing that $1/\theta_j=\overline{\theta_j}$.

Step-process

In this part, we sample the pseudo-process $X=(X(t))_{t\ge 0}$ at the dyadic times $t_{n,k}=k/2^n$, $k,n\in\mathbb{N}$, and we introduce the corresponding step-process $X_n=(X_n(t))_{t\ge 0}$ defined, for any $n\in\mathbb{N}$, by $X_n(t)=X(t_{n,k})$ for $t\in[t_{n,k},t_{n,k+1})$. The quantity $X_n$ is a function of the discrete observations of $X$ at the times $t_{n,k}$, $k\in\mathbb{N}$.
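As a numerical aside, the identity $\sum_{j\in J}A_j=1$ from Subsection 2.3 can be checked directly. Since expansion (2.9) is not reproduced here, the sketch below uses the standard partial-fraction expansion $1/\prod_{j\in J}(1-x/\theta_j)=\sum_{j\in J}A_j/(1-x/\theta_j)$ with residue coefficients $A_j=\prod_{l\ne j}1/(1-\theta_j/\theta_l)$, taking for the $\theta_j$'s the roots of $\kappa_4=-1$ with positive real part (an assumption on the exact convention of the paper):

```python
import cmath

N = 4                                            # kappa_4 = -1
roots = [cmath.exp(1j * (cmath.pi + 2 * cmath.pi * l) / N) for l in range(N)]
theta = [th for th in roots if th.real > 0]      # one of the two index sets

# residue coefficients of 1/prod_j(1 - x/theta_j) = sum_j A_j/(1 - x/theta_j)
A = []
for j, tj in enumerate(theta):
    prod = 1 + 0j
    for l, tl in enumerate(theta):
        if l != j:
            prod *= 1 - tj / tl
    A.append(1 / prod)

print(sum(A))                                    # = 1 (take x = 0 in the expansion)
x = 0.3 + 0.2j                                   # arbitrary test point
lhs = 1 + 0j
for tj in theta:
    lhs /= 1 - x / tj
rhs = sum(a / (1 - x / tj) for a, tj in zip(A, theta))
print(abs(lhs - rhs))                            # ~ 0
```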
For the convenience of the reader, we recall the definitions of tame functions, functions of discrete observations, and admissible functions, introduced by Nishioka [18] in the case $N=4$.

Definition 3.1 Fix $n\in\mathbb{N}$. A tame function is a function of a finite number of observations of the pseudo-process $X$ at the times $t_{n,j}$, $1\le j\le k$, that is, a quantity of the form $F_{n,k}=F(X(t_{n,1}),\dots,X(t_{n,k}))$ for a certain $k$ and a certain bounded Borel function $F:\mathbb{R}^k\longrightarrow\mathbb{C}$. The "expectation" of $F_{n,k}$ is defined accordingly, and we plainly have a corresponding bound on $|\mathbb{E}_x(F_{n,k})|$.

Definition 3.2 Fix $n\in\mathbb{N}$. A function of the discrete observations of $X$ at the times $t_{n,k}$, $k\ge 1$, is a convergent series of tame functions: $F_{X_n}=\sum_{k=1}^{\infty}F_{n,k}$, where $F_{n,k}$ is a tame function for all $k\ge 1$. Assuming the series $\sum_{k=1}^{\infty}|\mathbb{E}_x(F_{n,k})|$ convergent, the "expectation" of $F_{X_n}$ is defined as $\mathbb{E}_x(F_{X_n})=\sum_{k=1}^{\infty}\mathbb{E}_x(F_{n,k})$. This definition of the expectation is consistent in the sense that it does not depend on the representation of $F_{X_n}$ as a series (see [18]).

Definition 3.3
An admissible function is a functional $F_X$ of the pseudo-process $X$ which is the limit of a sequence $(F_{X_n})_{n\in\mathbb{N}}$ of functions of discrete observations of $X$ such that the sequence $(\mathbb{E}_x(F_{X_n}))_{n\in\mathbb{N}}$ is convergent. The "expectation" of $F_X$ is then defined as $\mathbb{E}_x(F_X)=\lim_{n\to+\infty}\mathbb{E}_x(F_{X_n})$. This definition eludes the difficulty due to the lack of $\sigma$-additivity of the signed measure $\mathbb{P}$.
On the other hand, any bounded Borel function of a finite number of observations of $X$ at any times (not necessarily dyadic) $t_1<\cdots<t_k$ is admissible, and it can be seen that, according to Definitions 3.1, 3.2 and 3.3, its expectation is the one expected in the usual sense.
For any pseudo-process $Z=(Z(t))_{t\ge 0}$, consider the functional defined, for $\lambda\in\mathbb{C}$ such that $\Re(\lambda)>0$, $\mu\in\mathbb{R}$, $\nu>0$, by (3.1), where $H_Z$, $K_Z$, $I_Z$ are functionals of $Z$ defined on $[0,+\infty)$, $K_Z$ being nonnegative and $I_Z$ bounded; we suppose that, for all $t\ge 0$, $H_Z(t)$, $K_Z(t)$, $I_Z(t)$ are functions of the continuous observations $Z(s)$, $0\le s\le t$ (that is, the observations of $Z$ up to time $t$). For $Z=X_n$, we obtain a series (3.2) whose general term is $e^{-\lambda t_{n,k}+i\mu H_{X_n}(t_{n,k})-\nu K_{X_n}(t_{n,k})}\,I_{X_n}(t_{n,k})$.
Since $H_{X_n}(t_{n,k})$, $K_{X_n}(t_{n,k})$, $I_{X_n}(t_{n,k})$ are functions of $X_n(t_{n,j})=X(t_{n,j})$, $0\le j\le k$, the quantity $e^{i\mu H_{X_n}(t_{n,k})-\nu K_{X_n}(t_{n,k})}I_{X_n}(t_{n,k})$ is a tame function and the series in (3.2) is a function of discrete observations of $X$. If the series $\sum_{k=0}^{\infty}\mathbb{E}_x\big[e^{-\lambda t_{n,k}+i\mu H_{X_n}(t_{n,k})-\nu K_{X_n}(t_{n,k})}I_{X_n}(t_{n,k})\big]$ converges, the expectation of $F_{X_n}(\lambda,\mu,\nu)$ is defined according to Definition 3.2. Finally, if $\lim_{n\to+\infty}F_{X_n}(\lambda,\mu,\nu)=F_X(\lambda,\mu,\nu)$ and if the limit of $\mathbb{E}_x[F_{X_n}(\lambda,\mu,\nu)]$ exists as $n$ goes to $\infty$, then $F_X(\lambda,\mu,\nu)$ is an admissible function and its expectation is defined according to Definition 3.3.

Distributions of $(X(t),M(t))$ and $(X(t),m(t))$

We assume that $N$ is even. In this section, we derive the Laplace-Fourier transforms of the vectors $(X(t),M(t))$ and $(X(t),m(t))$ by using the Spitzer identity (Subsection 4.1), from which we deduce the densities of these vectors by successively inverting, three times, the Laplace-Fourier transforms (Subsection 4.2). Next, we write out the formulas corresponding to the particular cases $N\in\{2,3,4\}$ (Subsection 4.3). Finally, we compute the distribution functions of the vectors $(X(t),m(t))$ and $(X(t),M(t))$ (Subsection 4.4) and write out the formulas associated with $N\in\{2,3,4\}$ (Subsection 4.5). Although $N$ is assumed to be even, all the formulas obtained in this part lead, when replacing $N$ by 3, to some well-known formulas of the literature.
The functional $F^+_{X_n}(\lambda,\mu,\nu)$ is a function of discrete observations of $X$. Our aim is to compute its expectation, that is, to compute the expectation of the above series and then to take the limit as $n$ goes to infinity. For this, we observe, using the Markov property, that if $\Re(\lambda)>2^n\ln\rho$ the series $\sum_k\mathbb{E}_x\big[e^{-\lambda t_{n,k}+i\mu X_{n,k}-\nu M_{n,k}}\big]$ is absolutely convergent, and then we can write the expectation of $F^+_{X_n}(\lambda,\mu,\nu)$:
$$\mathbb{E}_x[F^+_{X_n}(\lambda,\mu,\nu)]=\sum_{k=0}^{\infty}e^{-\lambda t_{n,k}}\varphi^+_{n,k}(\mu,\nu;x)\quad\text{for }\Re(\lambda)>2^n\ln\rho,\tag{4.2}$$
with $\varphi^+_{n,k}(\mu,\nu;x)=\mathbb{E}_x\big[e^{i\mu X_{n,k}-\nu M_{n,k}}\big]=e^{(i\mu-\nu)x}\,\mathbb{E}_0\big[e^{-(\nu-i\mu)M_{n,k}-i\mu(M_{n,k}-X_{n,k})}\big]$.
However, because of the domain of validity of (4.2), we cannot take directly the limit as n tends to infinity. Actually, we shall see that this difficulty can be circumvented by using sharp results on Dirichlet series.

• Step 2
Putting $z=e^{-\lambda/2^n}$ and noticing that $e^{-\lambda t_{n,k}}=z^k$, (4.2) can be rewritten as a power series in $z$. The generating function appearing in the last displayed equality can be evaluated thanks to an extension of the Spitzer identity, which we recall below.
Lemma 4.2 Let $(\xi_k)_{k\ge 1}$ be a sequence of "i.i.d. random variables" and set $X_0=0$, $X_k=\sum_{j=1}^k\xi_j$ for $k\ge 1$, and $M_k=\max_{0\le j\le k}X_j$ for $k\ge 0$. The following relationship then holds for $|z|<1$.

Observing that $1-z=\exp[\log(1-z)]=\exp\big[-\sum_{k=1}^{\infty}z^k/k\big]$, Lemma 4.2 yields (4.3) for $\xi_k=X_{n,k}-X_{n,k-1}$. We plainly have $|\psi^+(\mu,\nu;t)|\le 2\rho$, and then the series in (4.3) defines an analytic function on the half-plane $\{\lambda\in\mathbb{C}:\Re(\lambda)>0\}$. It is the analytic continuation of the function which was a priori defined on the half-plane $\{\lambda\in\mathbb{C}:\Re(\lambda)>2^n\ln\rho\}$. As a byproduct, we shall use the same notation $\mathbb{E}_x[F^+_{X_n}(\lambda,\mu,\nu)]$ for $\Re(\lambda)>0$. We emphasize that the rhs of (4.3) involves only one observation of the pseudo-process $X$ (while the lhs involves several discrete observations). This important feature of the Spitzer identity entails the convergence of the series in (4.2) under a lighter constraint on the domain of validity for $\lambda$.
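Spitzer's identity itself can be verified as an identity between formal power series. The sketch below checks, for the symmetric $\pm 1$ random walk (a genuine probability setting, so this illustrates only the classical identity $\sum_{k\ge 0}t^k\,\mathbb{E}[s^{M_k}]=\exp\big(\sum_{k\ge 1}(t^k/k)\,\mathbb{E}[s^{X_k^+}]\big)$, not its signed-measure extension), that the coefficients of both sides agree up to order 8:

```python
from itertools import product

def spitzer_check(s=0.6, K=8):
    """Check Spitzer's identity, as formal power series in t up to order K,
    for the symmetric +/-1 random walk:
        sum_{k>=0} t^k E[s^{M_k}] = exp( sum_{k>=1} (t^k/k) E[s^{X_k^+}] )."""
    EM = [1.0] + [0.0] * K    # E[s^{M_k}], exact by path enumeration
    EP = [1.0] + [0.0] * K    # E[s^{X_k^+}]
    for k in range(1, K + 1):
        m_sum = p_sum = 0.0
        for signs in product((1, -1), repeat=k):
            x = mx = 0
            for e in signs:
                x += e
                mx = max(mx, x)
            m_sum += s ** mx
            p_sum += s ** max(x, 0)
        EM[k] = m_sum / 2 ** k
        EP[k] = p_sum / 2 ** k
    # right-hand side: exp(g) with g(t) = sum_{k>=1} EP[k] t^k / k, truncated
    g = [0.0] + [EP[k] / k for k in range(1, K + 1)]
    rhs, term = [1.0] + [0.0] * K, [1.0] + [0.0] * K
    for m in range(1, K + 1):
        conv = [0.0] * (K + 1)
        for i in range(K + 1):
            for j in range(K + 1 - i):
                conv[i + j] += term[i] * g[j]
        term = [c / m for c in conv]                 # now term = g^m / m!
        rhs = [r + c for r, c in zip(rhs, term)]
    return EM, rhs

EM, rhs = spitzer_check()
print(max(abs(a - b) for a, b in zip(EM, rhs)))      # ~ 0
```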

• Step 3
In order to prove that the functional $F^+_X(\lambda,\mu,\nu)$ is admissible, we show that the series $\sum_k\mathbb{E}_x\big[e^{-\lambda t_{n,k}+i\mu X_{n,k}-\nu M_{n,k}}\big]$ is absolutely convergent for $\Re(\lambda)>0$. For this, we invoke a lemma of Bohr concerning Dirichlet series [5]. Let $\sum_k\alpha_k e^{-\beta_k\lambda}$ be a Dirichlet series of the complex variable $\lambda$, where $(\alpha_k)_{k\in\mathbb{N}}$ is a sequence of complex numbers and $(\beta_k)_{k\in\mathbb{N}}$ is an increasing sequence of positive numbers tending to infinity. Let us denote by $\sigma_c$ its abscissa of convergence, by $\sigma_a$ its abscissa of absolute convergence, and by $\sigma_b$ the abscissa of boundedness of the analytic continuation of its sum. If the condition $\limsup_{k\to\infty}\ln(k)/\beta_k=0$ is fulfilled, then $\sigma_c=\sigma_a=\sigma_b$.
In our situation, we show that the function of the variable $\lambda$ appearing in the rhs of (4.3) is bounded on each half-plane $\Re(\lambda)\ge\varepsilon$ for any $\varepsilon>0$. For any $\alpha\in\mathbb{C}$ such that $\Re(\alpha)\le 0$, a bound follows upon setting $\varrho=\int_0^{+\infty}\xi\,|p(1;-\xi)|\,d\xi$ ($\varrho<+\infty$) and using the elementary inequality $|e^{\zeta}-1|\le 2|\zeta|$, which holds for any $\zeta\in\mathbb{C}$ such that $\Re(\zeta)\le 0$; a similar bound holds for the other term. This proves that the rhs of (4.3) is bounded on each half-plane $\Re(\lambda)\ge\varepsilon$ for any $\varepsilon>0$, so the convergence of the series in (4.2) holds in the domain $\Re(\lambda)>0$. Now, we can pass to the limit as $n\to+\infty$ in (4.3) and we obtain (4.4). A similar formula holds for $F^-_X$.
From (4.4), we see that we need to evaluate integrals of a specific form. In the last step, we used the fact that the set $\{\theta_j,\,j\in J\}$ is invariant under conjugation.
In a similar manner, we can obtain that of (X(t), m(t)). The proof of Theorem 4.1 is now completed.

Remark 4.3
Either of the two formulas (4.1) can be deduced from the other by using a symmetry argument.
• For even integers $N$, the obvious symmetry property $X\overset{d}{=}-X$ holds; what it entails in this case confirms the simple relationship between the two expectations (4.1).
• If $N$ is odd, although this case is not covered by (4.1), it is interesting to note the asymmetry property $X^+\overset{d}{=}-X^-$ and $X^-\overset{d}{=}-X^+$, where $X^+$ and $X^-$ are the pseudo-processes respectively associated with $\kappa_N=+1$ and $\kappa_N=-1$. With similar notations, $(X^+(t),m^+(t))$ and $(X^-(t),-M^-(t))$ should then have identical distributions, which would explain the relationship between the two expectations (4.1) in this case.
Remark 4.4 By choosing $\nu=0$ in (4.1), we obtain the Fourier transform of the $\lambda$-potential of the kernel $p$.

Density functions
We are able to invert the Laplace-Fourier transforms (4.1) with respect to µ and ν.

Inverting with respect to ν
For $z\ge x$, applying expansion (2.11) to $x=(i\mu-\nu)/\sqrt[N]{\lambda}$ and rewriting the result, we can invert the foregoing Laplace transform with respect to $\nu$, and we get formula (4.9) corresponding to the case of the maximum functional. The formula corresponding to the case of the minimum functional is obtained in a similar way.

Inverting with respect to µ
Theorem 4.6 The Laplace transforms with respect to time $t$ of the joint densities of $X(t)$ and, respectively, $M(t)$ and $m(t)$ are given, for $z\ge x\vee y$ and for $z\le x\wedge y$ respectively, by (4.11), where the functions $\varphi_\lambda$ and $\psi_\lambda$ are defined by (2.6).
Proof. Let us write the following equality, as in the previous subsubsection (see (4.10)). We get, by (4.9) and (2.1), for $z\ge x$, the announced expression. This proves (4.11) in the case of the maximum functional, and the formula corresponding to the minimum functional can be proved in the same manner.
Remark 4.7 Formulas (4.11) contain in particular the Laplace transforms of X(t), M (t) and m(t) separately. As a verification, we integrate (4.11) with respect to y and z separately.
• By integrating with respect to $y$ on $[z,+\infty)$ for $z\le x$, using the relation $\sum_{j\in J}A_j=1$ (see Subsection 2.3), we retrieve the Laplace transform (2.5) of the distribution of $m(t)$.
• Suppose for instance that $x\le y$. Let us now integrate (4.11) with respect to $z$ on $(-\infty,x]$. Using (2.10) in the last step, we retrieve the $\lambda$-potential (2.3) of the pseudo-process $(X(t))_{t\ge 0}$.

Remark 4.8 Consider the process reflected at its maximum, $(M(t)-X(t))_{t\ge 0}$. The joint distribution of $(M(t),M(t)-X(t))$ can be written in terms of the joint distribution of $(X(t),M(t))$: for $x=0$ (set $\mathbb{P}=\mathbb{P}_0$ for short) and $z,\zeta\ge 0$, we get (4.12). By introducing an exponentially distributed time $T_\lambda$ with parameter $\lambda$, independent of $(X(t))_{t\ge 0}$, (4.12) reads: $-m(T_\lambda)$ and $M(T_\lambda)-X(T_\lambda)$ admit the same distribution, and $M(T_\lambda)$ and $M(T_\lambda)-X(T_\lambda)$ are independent.

Remark 4.9
The similarity between the two formulas (4.11) may be explained by invoking a "duality" argument. Indeed, the dual pseudo-process $(X^*(t))_{t\ge 0}$ of $(X(t))_{t\ge 0}$ is defined by $X^*(t)=-X(t)$ for all $t\ge 0$, and we have the following equality related to the inversion of the extremities (see [12]).

Remark 4.10 Let us expand the functions $\varphi_\lambda$ and $\psi_\lambda$ as $\lambda\to 0^+$; this gives (4.13) and (4.14). As a result, putting (4.13) and (4.14) into (4.11) and using (2.1), and then integrating the resulting asymptotics with respect to $z$, we derive the value of the so-called 0-potential of the absorbed pseudo-process (see [19] for the definition of several kinds of absorbed or killed pseudo-processes).

Inverting with respect to λ
Formulas (4.11) may be inverted with respect to λ and an expression by means of the successive derivatives of the kernel p may be obtained for the densities of (X(t), M (t)) and (X(t), m(t)). However, the computations and the results are cumbersome and we prefer to perform them in the case of the distribution functions. They are exhibited in Subsection 4.4.

Density functions: particular cases
In this subsection, we pay attention to the cases $N\in\{2,3,4\}$. Although our results are not justified when $N$ is odd, we nevertheless retrieve well-known results of the literature related to the case $N=3$. In order to lighten the notations, we set, for $\Re(\lambda)>0$, the quantities displayed below.

Example 4.12 Case $N=3$: referring to Example 2.2, we have

Distribution functions
In this part, we integrate (4.11) in order to get the distribution function of the vector $(X(t),M(t))$: $\mathbb{P}_x\{X(t)\le y,\,M(t)\le z\}$. Obviously, if $x>z$, this quantity vanishes. Suppose now $x\le z$. We must consider the cases $y\le z$ and $z\le y$. In the latter, we have $\mathbb{P}_x\{X(t)\le y,\,M(t)\le z\}=\mathbb{P}_x\{M(t)\le z\}$ and this quantity is given by (2.7). So, we assume that $z\ge x\vee y$. Actually, the quantity $\mathbb{P}_x\{X(t)\le y,\,M(t)\ge z\}$ is easier to derive.
Proposition 4.14 We have, for $z\ge x\vee y$ and $\Re(\lambda)>0$, the first formula below, and, for $z\le x\wedge y$, the second. As a result, combining the above formulas with (4.15), the distribution function of the couple $(X(t),M(t))$ emerges, and that of $(X(t),m(t))$ is obtained in a similar way.

Theorem 4.15
The distribution functions of $(X(t),M(t))$ and $(X(t),m(t))$ are respectively determined through their Laplace transforms with respect to $t$ by the two formulas below, valid for $y\le x\le z$ and for $z\le x\le y$ respectively.

Inverting the Laplace transform
Theorem 4.16 The distribution function of $(X(t),M(t))$ admits the representation (4.17), where $I_{k0}$ is given by (5.14) and the $\alpha_{jm}$'s are coefficients given by (4.18).
Proof. We intend to invert the Laplace transform (4.16). For this, we interpret both exponentials $e^{\theta_j\sqrt[N]{\lambda}(x-z)}$ and $e^{\theta_k\sqrt[N]{\lambda}(z-y)}$ as Laplace transforms in two different manners: one is the Laplace transform of a combination of the successive derivatives of the kernel $p$, the other one is the Laplace transform of a function which is closely related to the density of some stable distribution. More explicitly, we proceed as follows.
• On the one hand, we start from the $\lambda$-potential (2.3), which we shall call $\Phi$. Differentiating this potential $(\#J-1)$ times with respect to $\xi$ leads to a Vandermonde system of $\#J$ equations in which the exponentials $e^{\theta_j\sqrt[N]{\lambda}\,\xi}$ are the unknowns. We then introduce the solutions $\alpha_{jm}$ of the $\#J$ elementary Vandermonde systems (indexed by $m$ varying from 0 to $\#J-1$). The explicit expression of $\alpha_{jm}$ involves the coefficients $c_{jq}$, $0\le q\le\#J-1$, which are the elementary symmetric functions of the $\theta_l$'s, $l\in J\setminus\{j\}$, with $c_{j0}=1$.
• On the other hand, using the Bromwich formula, the function $\xi\longmapsto e^{\theta_k\sqrt[N]{\lambda}\,\xi}$ can be written as a Laplace transform: referring to Subsubsection 5.2.2, we have a representation valid for $k\in K$ and $\xi\ge 0$, where $I_{k0}$ is given by (5.14).
Consequently, the sum appearing in Proposition 4.14 may be written as a Laplace transform, which gives the representation (4.17) for the distribution function of $(X(t),M(t))$.
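The elementary Vandermonde systems appearing in the proof above can be solved numerically. The sketch below is an illustration only: the normalization (right-hand sides equal to the canonical basis vectors $e_m$) is an assumption, since the displayed system is not reproduced here; the nodes are the $N$th roots of $\kappa_6=+1$ with positive real part:

```python
import numpy as np

N = 6                                       # kappa_6 = (-1)^(1+3) = +1
roots = np.exp(2j * np.pi * np.arange(N) / N)
theta = roots[roots.real > 0]               # 3 nodes with positive real part
J = len(theta)

V = np.vander(theta, increasing=True).T     # V[q, j] = theta_j ** q, 0 <= q < #J
# column m of alpha solves sum_j theta_j^q * alpha[j, m] = delta_{qm}
alpha = np.linalg.solve(V, np.eye(J))
residual = np.abs(V @ alpha - np.eye(J)).max()
print(residual)                             # ~ 0
```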

Remark 4.17
A similar expression may be derived by exchanging the roles of the indices $j$ and $k$ in the above discussion and slightly changing the coefficients $a_{km}$ into other coefficients $b_{jn}$. However, this alternative result involves the same number of integrals as that displayed in Theorem 4.16.

Distribution functions: particular cases
Here, we write out (4.16) and (4.17) in the particular cases.

Example 4.18 Case $N=2$: for $y\le x\le z$, formula (4.17) simplifies considerably. The reciprocal relations, which are valid for $\xi\ge 0$, imply that $\alpha_{10}=1$, and then $a_{00}=2B_0$. On the other hand, we have for $\xi\ge 0$, by (5.14), an explicit integral representation. Using the substitution $\sigma=s^2/(u+s)$ together with a well-known integral related to the modified Bessel function $K_{1/2}$, and finally checking the resulting identity through its Laplace transform, we retrieve the famous reflection principle for Brownian motion:
$$\mathbb{P}_x\{X(t)\le y,\,M(t)\ge z\}=\mathbb{P}_x\{X(t)\ge 2z-y\}.$$

Example 4.19 Case $N=3$: we have two cases to distinguish.
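The reflection principle retrieved above for Brownian motion (case $N=2$, variance $2t$ under the present normalization of the heat equation) can be checked by a seeded simulation; the step counts and tolerances below are illustrative:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Brownian motion attached to du/dt = d^2u/dx^2 (N = 2): variance 2t.
# Reflection principle: P_0{X(t) <= y, M(t) >= z} = P_0{X(t) >= 2z - y}, y <= z.
t, y, z = 1.0, 0.3, 0.8
n_paths, n_steps = 20000, 500
steps = rng.normal(0.0, math.sqrt(2.0 * t / n_steps), size=(n_paths, n_steps))
paths = np.cumsum(steps, axis=1)
X = paths[:, -1]                       # terminal value
M = np.maximum(paths.max(axis=1), 0.0) # running maximum (discretized)

lhs = np.mean((X <= y) & (M >= z))
rhs = np.mean(X >= 2 * z - y)
exact = 0.5 * math.erfc((2 * z - y) / (2 * math.sqrt(t)))  # P{X(t) >= 2z - y}
print(lhs, rhs, exact)
```

The discretized maximum slightly underestimates $M(t)$, so `lhs` carries a small downward bias in addition to the Monte Carlo error.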

Boundary value problem
In this part, we show that the function $x\longmapsto F_\lambda(x,y,z)$ solves a boundary value problem related to the differential operator $\mathcal{D}_x=\kappa_N\,\dfrac{d^N}{dx^N}$. Fix $y<z$ and set $F(x)=F_\lambda(x,y,z)$ for $x\in(-\infty,z]$.

Proposition 4.21
The function $F$ satisfies the differential equation (4.20) together with the conditions (4.21) and (4.22).

Proof. The differential equation (4.20) is readily obtained by differentiating (4.16) with respect to $x$. The boundary condition (4.21) follows from (2.11) applied to $x=\theta_k$. Condition (4.22) is quite easy to check.

Distributions of $(\tau_a^+,X(\tau_a^+))$ and $(\tau_a^-,X(\tau_a^-))$

The integer $N$ is again assumed to be even. Recall that we set $\tau_a^+=\inf\{t\ge 0:X(t)>a\}$ and $\tau_a^-=\inf\{t\ge 0:X(t)<a\}$. The aim of this section is to derive the distributions of the vectors $(\tau_a^+,X(\tau_a^+))$ and $(\tau_a^-,X(\tau_a^-))$. For this, we proceed in three steps: we first compute the Laplace-Fourier transform of, e.g., $(\tau_a^+,X(\tau_a^+))$ (Subsection 5.1); we next invert the Fourier transform (with respect to $\mu$, Subsubsection 5.2.1) and we finally invert the Laplace transform (with respect to $\lambda$, Subsubsection 5.2.2). We have especially obtained a remarkable formula for the densities of $X(\tau_a^+)$ and $X(\tau_a^-)$ by means of multipoles (Subsection 5.4).

• Step 1
For the step-process $(X_n(t))_{t\ge 0}$, the corresponding first hitting time $\tau^+_{a,n}$ is the instant $t_{n,k}$ with $k$ such that $X(t_{n,j})\le a$ for all $j\in\{0,\dots,k-1\}$ and $X(t_{n,k})>a$ or, equivalently, such that $M_{n,k-1}\le a$ and $M_{n,k}>a$, where $M_{n,k}=\max_{0\le j\le k}X_{n,j}$ and $X_{n,k}=X(t_{n,k})$ for $k\ge 0$, and $M_{n,-1}=-\infty$. We have, for $x\le a$,
$$e^{-\lambda\tau^+_{a,n}+i\mu X_n(\tau^+_{a,n})}=\sum_{k=0}^{\infty}\big(e^{-\lambda t_{n,k}+i\mu X_{n,k}}-e^{-\lambda t_{n,k+1}+i\mu X_{n,k+1}}\big)\,\mathbb{1}_{\{M_{n,k}>a\}}.$$
The functional e −λτ + a,n +iµXn(τ + a,n ) is a function of discrete observations of X.
By means of a $\frac{1}{2i\pi}$ contour-integral representation, we get a series whose general term involves $e^{-\lambda t_{n,k}}\,t_{n,k}\,\psi_1(i\mu;t_{n,k})$. By imitating the method used by Nishioka (Appendix in [18]) for deriving subtle estimates, it may be seen that this last expression is bounded over the half-plane $\Re(\lambda)\ge\varepsilon$ for any $\varepsilon>0$. Hence, as in the proof of the validity of (4.2) for $\Re(\lambda)>0$, we see that (5.3) is also valid for $\Re(\lambda)>0$. It follows that the functional $e^{-\lambda\tau_a^++i\mu X(\tau_a^+)}$ is admissible.

• Step 5
Now, we can let $n$ tend to $+\infty$ in (5.3). For $\Re(\lambda)>0$, the limits are immediate and we finally obtain the relationship (5.1) corresponding to $\tau_a^+$. The proof of the relationship corresponding to $\tau_a^-$ is quite similar.

Remark 5.3
Choosing $\mu=0$ in (5.4) supplies the Laplace transforms of $\tau_a^+$ and $\tau_a^-$ for $x\le a$.

Remark 5.4 An alternative method for deriving the distribution of $(\tau_a^+,X(\tau_a^+))$ consists in computing the joint distribution of $\big(X(t),\mathbb{1}_{(-\infty,a)}(M(t))\big)$ instead of that of $(X(t),M(t))$, and then inverting a certain Fourier transform. This way was employed by Nishioka [18] in the case $N=4$ and may be applied to the general case mutatis mutandis.
Remark 5.5 The following relationship, issued from fluctuation theory, holds for Lévy processes if $x\le a$: (5.7). Let us check that (5.7) also holds, at least formally, for the pseudo-process $X$. We have, by (2.5), formula (5.8); for $x=a$, this yields, by (2.11), formula (5.9). As a result, by plugging (5.8) and (5.9) into (5.7), we retrieve (5.4).
Example 5.6 Case $N=2$: the formulas are elementary.

Example 5.7 Case $N=3$:
• In the case $\kappa_3=+1$, we have explicit expressions for $x\le a$ and for $x\ge a$, and (5.4) can be written out accordingly.
• In the case $\kappa_3=-1$, we similarly obtain the corresponding expressions.

Example 5.8 Case $N=4$: we have explicit expressions for $x\le a$ and for $x\ge a$, and (5.4) becomes formula (8.3) of [18], which we thus retrieve.
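For $N=2$ the Laplace transform of the hitting time reduces to the classical Brownian formula $\mathbb{E}_x[e^{-\lambda\tau_a^+}]=e^{-(a-x)\sqrt{\lambda}}$ under the present normalization (variance $2t$). A quadrature sketch, assuming the standard first-passage density for this scaling:

```python
import numpy as np

def lt_first_passage(lam, b, t_max=60.0, n=600000):
    """Check numerically that E_x[exp(-lam * tau_a^+)] = exp(-(a-x)*sqrt(lam))
    for N = 2 (Brownian motion with variance 2t), using the classical
    first-passage density f(t) = b/sqrt(4*pi*t^3) * exp(-b^2/(4t)), b = a-x."""
    t = np.linspace(1e-9, t_max, n)
    f = b / np.sqrt(4.0 * np.pi * t**3) * np.exp(-b**2 / (4.0 * t))
    y = np.exp(-lam * t) * f
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

lam, b = 1.3, 0.9
print(lt_first_passage(lam, b), np.exp(-b * np.sqrt(lam)))   # nearly equal
```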

Density functions
We invert the Laplace-Fourier transform (5.4). For this, we proceed in two stages: we first invert the Fourier transform with respect to µ and next invert the Laplace transform with respect to λ.

Inverting with respect to µ
Let us expand the product $\prod_{l\in J\setminus\{j\}}(1-\theta_l x)$ in powers of $x$. Up to signs, the coefficients $c_{jq}$, $0\le q\le\#J-1$, are the elementary symmetric functions of the $\theta_l$'s, $l\in J\setminus\{j\}$; in particular $c_{j0}=1$. In a similar way, we introduce coefficients $d_{kq}$, $0\le q\le\#K-1$, with $d_{k0}=1$, for the index set $K$. By applying expansion (5.10) to $x=i\mu/\sqrt[N]{\lambda}$, we see that (5.4) can be rewritten accordingly. Now, observe that $(-i\mu)^q e^{i\mu a}$ is nothing but the Fourier transform of the $q$th derivative of the Dirac distribution $\delta_a$, viewed as a tempered Schwartz distribution. Hence, we have obtained the following intermediate result for the distribution of $(\tau_a^+,X(\tau_a^+))$, and also for that of $(\tau_a^-,X(\tau_a^-))$.
Proposition 5.9 We have, for $\Re(\lambda)>0$, formula (5.12). The appearance of the successive derivatives of $\delta_a$ suggests viewing the distribution of $(\tau_a^+,X(\tau_a^+))$ as a tempered Schwartz distribution, that is, a Schwartz distribution acting on the space $\mathcal{S}$ of the $C^\infty$-functions which decrease exponentially together with their derivatives.

Inversion with respect to λ
In order to extract the densities of $(\tau_a^+,X(\tau_a^+))$ and $(\tau_a^-,X(\tau_a^-))$ from (5.12), we search for functions $I_{lq}$, $0\le q\le\max(\#J-1,\#K-1)$, such that, for $\Re(\theta_l\xi)<0$, relation (5.13) holds. The rhs of (5.13) seems close to the Laplace transform of the probability density function of a completely asymmetric stable random variable, at least for $q=0$. Nevertheless, because of the presence of the complex factor $\theta_l$ within the rhs of (5.13), we did not find any precise relationship between the function $I_{lq}$ and stable processes. So, we derive below an integral representation for $I_{lq}$.
Invoking the Bromwich formula, the function $I_{lq}$ can be written as a contour integral; the substitution $\lambda\longmapsto\lambda^N$ then yields a real-integral representation. In particular, for $q=0$ we obtain, by an integration by parts, a simplified expression.

Remark 5.10 A relation holds between all the functions $I_{lq}$'s. Hence, (5.12) can be rewritten as an explicit Laplace transform with respect to $\lambda$. We are now able to state the main result of this part.
So, the sum appearing within the second integral in (5.16) can be written out explicitly. In particular, $J_q(t;\xi)$ is real and, for $q=0$, since $c_{j0}=1$ and $\sum_{j\in J}A_j=1$, we obtain a quantity which is nothing but $\mathbb{P}_x\{\tau_a^+\in dt\}/dt$.

Distribution of the hitting places
We now derive the distributions of the hitting places $X(\tau_a^+)$ and $X(\tau_a^-)$. To do this for $X(\tau_a^+)$, for example, we integrate (5.15) with respect to $t$, as in (5.17). We need two lemmas for carrying out the integral appearing in (5.17).

Proof. We proceed by induction on $n$. The foregoing integral involves an elementary integral of the form $\sum_j a_j\log b_j$, which proves Lemma 5.13 in the case $n=1$.
Assume now the result of the lemma valid for an integer $n\ge 1$. Let $m$ be an integer with $m\ge n+2$ and let $a_1,\dots,a_m$ and $b_1,\dots,b_m$ be complex numbers such that $\Re(b_j)\ge 0$ and $\Im\big(\sum_{j=1}^m a_j b_j^l\big)=0$ for $0\le l\le n$. By an integration by parts and by applying L'Hôpital's rule $n$ times, using this condition, we complete the induction.

Lemma 5.14 We have, for $0\le p\le q\le\#J-1$, a closed-form identity. Proof. Consider a suitable polynomial; we then obtain, due to (2.11), if $p\le\#J-1$, an identity which entails the result by identifying the coefficients of both polynomials. Now, we state the following remarkable result.

Theorem 5.15
The "distributional densities" of $X(\tau_a^+)$ and $X(\tau_a^-)$ are given by (5.18). It is worth pointing out that the distributions of $X(\tau_a^+)$ and $X(\tau_a^-)$ are linear combinations of the successive derivatives of the Dirac distribution $\delta_a$. This noteworthy fact has already been observed by Nishioka [17,18] in the case $N=4$, where the author spoke of "monopoles" and "dipoles" respectively related to $\delta_a$ and $\delta'_a$ (see also [19] for more details on the relationships between monopoles/dipoles and the different kinds of absorbed/killed pseudo-processes). More generally, (5.18) suggests speaking of "multipoles" related to the $\delta_a^{(q)}$'s.
In the case of Brownian motion ($N=2$), the trajectories are continuous, so $X(\tau_a^\pm)=a$ and we classically write $\mathbb{P}_x\{X(\tau_a^\pm)\in dz\}=\delta_a(dz)$, where $\delta_a$ is viewed as the Dirac probability measure. For $N\ge 4$, it emerges from (5.18) that the distributional densities of $X(\tau_a^\pm)$ are concentrated at the point $a$ through a sequence of successive derivatives of $\delta_a$, where $\delta_a$ is now viewed as a Schwartz distribution. Hence, one could read in (5.18) a curious and unclear kind of continuity. In Subsection 5.6, we study the distribution of $X(\tau_a^\pm-)$, which turns out to coincide with that of $X(\tau_a^\pm)$; this confirms the idea of continuity.
Proof. Let us evaluate the integral appearing in (5.17). We have, thanks to Lemma 5.14, Therefore the conditions of Lemma 5.13 are fulfilled and we get The second sum within the brackets is equal, by Lemma 5.14, to $(-1)^q$. The first one vanishes: indeed, using the symmetry $\sigma : j \in J \longrightarrow \sigma(j) \in J$ such that $\theta_{\sigma(j)} = \bar\theta_j$,
$$\Re\sum_{j\in J} c_{jq}\, A_j\, \theta_j^q \arg(\theta_j) = \frac12\bigg[\sum_{j\in J} c_{jq}\, A_j\, \theta_j^q \arg(\theta_j) + \sum_{j\in J} \bar c_{jq}\, \bar A_j\, \bar\theta_j^{\,q} \arg(\theta_j)\bigg].$$
The terms of the second sum are the opposites of those of the first one since $\bar c_{\sigma(j)q}\, \bar A_{\sigma(j)}\, \bar\theta_{\sigma(j)}^{\,q} = c_{jq}\, A_j\, \theta_j^q$ and $\arg(\theta_{\sigma(j)}) = -\arg(\theta_j)$, which proves the assertion. As a result, we get (5.18).

Fourier transforms of the hitting places
By using (5.18) and (5.11), it is easy to derive the Fourier transforms of the hitting places $X(\tau_a^+)$ and $X(\tau_a^-)$.

Proposition 5.16
The Fourier transforms of $X(\tau_a^+)$ and $X(\tau_a^-)$ are given by (5.19). In this part, we propose to retrieve (5.19) by letting $\lambda$ tend to $0^+$ in (5.4). We rewrite (5.4), for instance for $x \le a$, as Using the elementary expansions as $\lambda \to 0^+$, On the other hand, applying (2.11) to $x = 0$ gives Consequently, the limit of $E_x\big[e^{-\lambda\tau_a^+ + i\mu X(\tau_a^+)}\big]$ as $\lambda \to 0^+$ ensues. The constant arising when combining (5.20) and (5.21) is In view of (5.19), we have proved the equality
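The computation behind Proposition 5.16 ultimately reduces to the elementary fact that the Fourier transform of $\delta_a^{(q)}$ is $(-i\mu)^q e^{i\mu a}$, i.e. $\langle \delta_a^{(q)}, e^{i\mu z}\rangle = (-1)^q \frac{d^q}{dz^q} e^{i\mu z}\big|_{z=a}$. A quick symbolic verification of this building block (illustration only; it does not use the specific coefficients of (5.18)):

```python
import sympy as sp

mu, a, z = sp.symbols('mu a z', real=True)

def delta_deriv_ft(q):
    """<delta_a^{(q)}, exp(I*mu*z)> = (-1)^q * (d/dz)^q exp(I*mu*z) evaluated at z = a."""
    return (-1) ** q * sp.diff(sp.exp(sp.I * mu * z), z, q).subs(z, a)

# Each transform equals (-I*mu)**q * exp(I*mu*a):
for q in range(5):
    assert sp.simplify(delta_deriv_ft(q) - (-sp.I * mu) ** q * sp.exp(sp.I * mu * a)) == 0
```

Summing these elementary transforms against the coefficients of (5.18) yields a polynomial in $\mu$ times $e^{i\mu a}$, which is the shape one expects for (5.19).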

Strong Markov property for τ ± a
We state, somewhat informally, a strong Markov property related to the hitting times $\tau_a^\pm$.
Taking expectations, we get for $x \le a$:
$$E_x\Big[F\big((X_n(t))_{0\le t<\tau_{a,n}^+}\big)\, G\big((X_n(t+\tau_{a,n}^+))_{t\ge 0}\big)\Big] = \sum_{k=1}^{\infty} E_x\Big[F_{n,k-1}\, \mathbf{1}_{\{M_{n,k-1}<a\le M_{n,k}\}}\, E_{X_{n,k}}(G_{n,0})\Big] = E_x\Big[F\big((X_n(t))_{0\le t<\tau_{a,n}^+}\big)\, E_{X(\tau_{a,n}^+)}\big[G((X_n(t))_{t\ge 0})\big]\Big] \quad (5.24)$$
and (5.22) ensues by taking the limit of (5.24) as $n$ tends to $+\infty$ in the sense of Definition 3.3.
The argument in favor of discontinuity put forward in [12] should fail since, in view of (5.13), a term is missing when the strong Markov property is applied.

Just before the hitting time
In order to lighten the notation, we simply write $\tau_a = \tau_a^\pm$ and we introduce the jump $\Delta_a X = X(\tau_a) - X(\tau_a-)$.

Proposition 5.19
The Laplace-Fourier transform of the vector $(\tau_a, X(\tau_a-), \Delta_a X)$ is related to those of the vectors $(\tau_a, X(\tau_a-))$ and $(\tau_a, X(\tau_a))$ as follows: for $\Re(\lambda) > 0$ and $\mu, \nu \in \mathbb{R}$,
$$E_x\big[e^{-\lambda\tau_a + i\mu X(\tau_a-) + i\nu\Delta_a X}\big] = E_x\big[e^{-\lambda\tau_a + i\mu X(\tau_a-)}\big] = E_x\big[e^{-\lambda\tau_a + i\mu X(\tau_a)}\big]. \quad (5.25)$$
In particular, the transform does not depend on $\nu$. Proof. The proof of Proposition 5.19 is similar to that of Lemma 5.1, so we only outline the main steps. We consider only the case where $\tau_a = \tau_a^+$ and $x \le a$; the other one is quite similar.
For computing the term within brackets, we need the following quantities: • Step 3 We now take the limit of (5.27) as $n$ tends to infinity.

Lemma 5.20 We have
$$E_0\big[X(t)^p\big] = \begin{cases} 1 & \text{for } p = 0,\\ 0 & \text{for } 1 \le p \le N-1,\\ \kappa_N\, N!\, t & \text{for } p = N. \end{cases}$$
Proof. By differentiating $k$ times the identity $E_0\big[e^{iuX(t)}\big] = e^{\kappa_N (iu)^N t}$ with respect to $u$ and then substituting $u = 0$, we obtain the announced moments.
Fix a complex number $\alpha \ne 0$. It is easily seen by induction that there exists a family of polynomials $(P_k)_{k\in\mathbb{N}}$ such that, for all $k \in \mathbb{N}$, $\frac{\partial^k}{\partial u^k} e^{\alpha u^N} = P_k(u)\, e^{\alpha u^N}$. (5.29) In particular, we have $P_0(u) = 1$ and $P_1(u) = N\alpha u^{N-1}$. Using the Leibniz rule, we obtain This establishes the induction and gives, for $u = 0$, Choosing $\alpha = \kappa_N i^N t$ and $u = 0$ in (5.29), we immediately complete the proof of Lemma 5.20.
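Lemma 5.20 can be checked symbolically for a specific order, say $N = 4$ (for which $\kappa_4 = (-1)^{1+4/2} = -1$): differentiating $e^{\kappa_N (iu)^N t}$ $p$ times at $u = 0$ yields the moments $1, 0, \dots, 0, \kappa_N N!\, t$. A sketch:

```python
import sympy as sp

u, t = sp.symbols('u t')
N = 4
kappa = (-1) ** (1 + N // 2)              # kappa_N for even N; here kappa_4 = -1
cf = sp.exp(kappa * (sp.I * u) ** N * t)  # E_0[exp(i*u*X(t))]

def moment(p):
    """E_0[X(t)^p], read off from i^p * E_0[X(t)^p] = (d/du)^p cf at u = 0."""
    return sp.simplify(sp.diff(cf, u, p).subs(u, 0) / sp.I ** p)

print([moment(p) for p in range(N + 1)])  # [1, 0, 0, 0, -24*t]
```

The last entry is indeed $\kappa_4\, 4!\, t = -24\,t$, in agreement with the lemma.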

Boundary value problem
We end this work by exhibiting a boundary value problem satisfied by the Laplace-Fourier transform $U(x) = E_x\big[e^{-\lambda\tau_a^+ + i\mu X(\tau_a^+)}\big]$, $x \in (-\infty, a)$.

Proposition 5.25
The function $U$ satisfies the differential equation We also refer the reader to [19] for a very detailed account of PDEs with various boundary conditions and their connections with the different kinds of absorbed/killed pseudo-processes.
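Although the displayed equation is not reproduced in this excerpt, the generator used throughout is $\kappa_N\,\partial^N/\partial x^N$, so one expects an ODE of the form $\kappa_N U^{(N)} = \lambda U$ on $(-\infty, a)$ supplemented by boundary conditions at $a$ (this explicit form is an assumption here, not a quotation of Proposition 5.25). Its exponential candidates $e^{\theta x}$ correspond to the roots of $\kappa_N \theta^N = \lambda$, which split according to the sign of $\Re(\theta)$; only the roots with positive real part stay bounded as $x \to -\infty$. A numerical sketch of this splitting for $N = 4$:

```python
import numpy as np

def characteristic_roots(N, lam, kappa):
    """Roots of kappa * theta**N = lam, the exponents of candidate solutions e^{theta*x}."""
    coeffs = np.zeros(N + 1, dtype=complex)
    coeffs[0] = kappa   # leading coefficient of kappa*theta^N - lam
    coeffs[-1] = -lam
    return np.roots(coeffs)

# N = 4, kappa_4 = -1, lambda = 1: theta^4 = -1, so the roots are the
# four primitive 8th roots of unity; two have positive real part and
# two have negative real part.
roots = characteristic_roots(4, 1.0, -1.0)
pos = [r for r in roots if r.real > 0]
neg = [r for r in roots if r.real < 0]
print(len(pos), len(neg))  # 2 2
```

This even split of the roots mirrors the partition of the $\theta_j$'s into the index sets (such as $J$) used throughout the paper.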