Distribution of Shifted Discrete Random Walk and Vandermonde matrices

In this work we set up the generating function of the ultimate time survival probability $\varphi(u+1)$, where $$\varphi(u)=\mathbb{P}\left(\sup_{n\geqslant 1}\sum_{i=1}^{n}\left(X_i-\kappa\right)<u\right)$$ and $u\in\mathbb{N}_0,\,\kappa\in\mathbb{N}$, and the random walk $\left\{\sum_{i=1}^{n}X_i,\,n\in\mathbb{N}\right\}$ consists of independent and identically distributed random variables $X_i$, which are non-negative and integer valued. We also give expressions of $\varphi(u)$ via the roots of certain polynomials. Based on the proven theoretical statements, we give several examples of $\varphi(u)$ and its generating function expressions when the random variables $X_i$ admit Bernoulli, geometric and some other distributions.


Introduction and preliminaries
The study of sums of independent and identically distributed random variables (r.vs.) $\sum_{i=1}^{n}X_i$ is hardly avoidable in probability theory and related fields. The sequence of sums $\left\{\sum_{i=1}^{n}X_i,\,n\in\mathbb{N}\right\}$ is called the random walk. Let us define the stochastic process
$$W(n)=u+\kappa n-\sum_{i=1}^{n}X_i,\;n\in\mathbb{N},\qquad(1)$$
where $u\in\mathbb{N}_0:=\mathbb{N}\cup\{0\}$, $\kappa\in\mathbb{N}$, and the random variables $X_i,\,i\in\mathbb{N}$ are independent, identically distributed, non-negative and integer valued. The defined process (1) is called the generalized premium discrete time risk model; we abbreviate this name by GPDTRM. Processes of this type appear in insurance mathematics, where they describe the insurer's wealth at time moments $n\in\mathbb{N}$: here $u$ is the initial surplus (also called capital or reserve), $\kappa$ denotes the premium rate (earnings per unit of time), i.e. $(n+1)\kappa-n\kappa=\kappa$, and the random walk $\left\{\sum_{i=1}^{n}X_i,\,n\in\mathbb{N}\right\}$ represents expenses caused by claims of random size. It is then natural to ask whether the initial surplus and the gained premiums are sufficient to cover the incurred random expenses. More precisely, one aims to know whether $W(n)>0$ for all $n\in\{1,2,\ldots,T\}$, where $T$ is some fixed natural number or $T\to\infty$. The positivity of $W(n)$ is, of course, associated to likelihood. For the GPDTRM given in (1), the finite time and ultimate time survival probabilities are
$$\varphi(u,T)=\mathbb{P}\left(W(n)>0\text{ for all }n\in\{1,2,\ldots,T\}\right),\qquad \varphi(u)=\mathbb{P}\left(W(n)>0\text{ for all }n\in\mathbb{N}\right).\qquad(2)$$
Both $\varphi(u,T)$ and $\varphi(u)$ are nothing but distribution functions of the provided integer valued sequence of sums of random variables; these functions are left-continuous, non-decreasing step functions if we allow $u\in\mathbb{R}$. Also, $\varphi(\infty)=1$ if $\mathbb{E}X<\kappa$, see the next Section 2. The calculation of $\varphi(u,T)$ is simple. If $X_i,\,i\in\mathbb{N}$ are independent copies of a random variable (r.v.) $X$ and $x_i:=\mathbb{P}(X=i),\,i\in\mathbb{N}_0$, then
$$\varphi(u,1)=\mathbb{P}(X\leqslant u+\kappa-1),\qquad \varphi(u,T)=\sum_{i=0}^{u+\kappa-1}\varphi(u+\kappa-i,T-1)\,x_i,\;T\geqslant 2,$$
see, for instance, [11, Theorem 1].
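The finite time recurrence above translates directly into a few lines of exact-arithmetic code. The following is a minimal illustrative sketch (not the paper's code; the Bernoulli claim distribution and its parameter are chosen only for the example):

```python
from fractions import Fraction
from functools import lru_cache

def survival_finite(u, T, kappa, x):
    """phi(u, T): probability that W(n) = u + kappa*n - S_n stays positive
    for n = 1, ..., T, with the claim pmf given as a dict x = {i: P(X = i)}."""
    @lru_cache(maxsize=None)
    def phi(u, T):
        if T == 1:
            # phi(u, 1) = P(X <= u + kappa - 1)
            return sum((p for i, p in x.items() if i <= u + kappa - 1), Fraction(0))
        # condition on the first claim X_1 = i; survival requires i <= u + kappa - 1
        return sum((p * phi(u + kappa - i, T - 1)
                    for i, p in x.items() if i <= u + kappa - 1), Fraction(0))
    return phi(u, T)

# Bernoulli claims with kappa = 1: X in {0, 1}, P(X = 1) = p
p = Fraction(1, 2)
x = {0: 1 - p, 1: p}
vals = [survival_finite(0, T, 1, x) for T in (1, 2, 3)]
# With u = 0 survival hinges on the first claim being 0 (after that the
# surplus is at least 1 and the claim never exceeds the premium kappa = 1),
# so each value equals 1 - p = 1/2 here.
print(vals)
```

The memoization via `lru_cache` keeps the recursion polynomial in `u + kappa*T` rather than exponential in `T`.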
Let us turn to the ultimate time survival probability $\varphi(u)$. The law of total probability and rearrangements in (2) imply
$$\varphi(u+\kappa)=\frac{1}{x_0}\left(\varphi(u)-\sum_{i=1}^{u+\kappa-1}x_i\,\varphi(u+\kappa-i)\right),\;u\in\mathbb{N}_0,\qquad(3)$$
see [11, page 3]. By setting $u=0$ in (3), we get
$$\varphi(\kappa)=\frac{1}{x_0}\left(\varphi(0)-\sum_{i=1}^{\kappa-1}x_i\,\varphi(\kappa-i)\right),\qquad(4)$$
which means that, aiming to calculate $\varphi(\kappa)$ when $x_0>0$, we must know the initial values $\varphi(0),\varphi(1),\ldots,\varphi(\kappa-1)$. Equally, the requirement to know $\varphi(0),\varphi(1),\ldots,\varphi(\kappa-1)$ remains when calculating $\varphi(u)$ for $u=\kappa,\kappa+1,\ldots$ by the recurrence (3). The needed quantity of these initial values depends on the distribution of $X$, as some of $x_0,x_1,\ldots,x_{\kappa-1}$ may vanish, cf. (4) when $\mathbb{P}(X>j)=1$ for some $j\geqslant 0$. The paper [11] deals with finding the mentioned initial values, and it is shown there that they can be found by calculating limits of certain recurrent sequences. For instance, if $\kappa=2$ and $x_0>0$, then it follows by (4) that $\varphi(0)$ and $\varphi(1)$ are expressible as limits of certain recurrent sequences (5), see [9, pages 2 and 3]. Calculating the limits in (5), and aiming to prove that the provided $2\times 2$ determinant never vanishes, the paper [9] established their connection to the solutions of $s^2=G_X(s)$, where $s\in\mathbb{C}$, $|s|\leqslant 1$, and $G_X(s)$ is the probability generating function of the r.v. $X$. On top of that, it was realized in [9] that the values of $\varphi(0)$ and $\varphi(1)$ in (5) can be derived from the classical stationarity property of the distribution of the maximum of a reflected random walk, see [6, Chapter VI, Section 9]. Using the mentioned stationarity property, the generating function of $\varphi(u+1),\,u\in\mathbb{N}_0$ for $\kappa=2$ was found in [9, Theorem 5]; however, the finiteness of the second moment of the r.v. $X$, i.e. $\mathbb{E}X^2<\infty$, was required there. In this article, we continue the work [9] and find the generating function of $\varphi(u+1),\,u\in\mathbb{N}_0$ for arbitrary $\kappa\in\mathbb{N}$. Moreover, we show that the requirement $\mathbb{E}X^2<\infty$ is redundant and provide exact expressions of $\varphi(u),\,u\in\mathbb{N}_0$ via solutions of systems of linear equations which are based on the roots of $s^\kappa=G_X(s)$ and Vandermonde-like matrices.
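The rearranged recurrence (3)–(4) can be sketched in code as follows, assuming $x_0>0$ and that the initial values $\varphi(0),\ldots,\varphi(\kappa-1)$ are already known. The $\kappa=1$ Bernoulli example used below is illustrative: there $\varphi(0)=1-p$ (survival from zero surplus requires the first claim to be $0$, after which the surplus never decreases) and $\varphi(u)=1$ for $u\geqslant 1$.

```python
from fractions import Fraction

def extend_phi(phi_init, kappa, x, n_extra):
    """Given phi(0), ..., phi(kappa-1), extend phi via the rearranged
    law-of-total-probability recurrence
        x_0 * phi(u + kappa) = phi(u) - sum_{i=1}^{u+kappa-1} x_i * phi(u+kappa-i),
    assuming x_0 = P(X = 0) > 0; the pmf is given as a dict x = {i: P(X = i)}."""
    phi = list(phi_init)
    x0 = x[0]
    for u in range(n_extra):
        s = sum(x.get(i, Fraction(0)) * phi[u + kappa - i]
                for i in range(1, u + kappa))
        phi.append((phi[u] - s) / x0)
    return phi

# kappa = 1, Bernoulli claims with p = 1/2: phi(0) = 1 - p = 1/2 seeds the recurrence
x = {0: Fraction(1, 2), 1: Fraction(1, 2)}
phi = extend_phi([Fraction(1, 2)], 1, x, 3)
print(phi)  # [1/2, 1, 1, 1]
```

Exact rationals are used deliberately: the recurrence divides by $x_0$ at every step, so floating-point round-off accumulates quickly.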
For a short overview of the literature, we mention that references [1], [23], [8], [7], [21], [22], [5] are known as classical ones on the wide subject of renewal risk models, while [19], [4] may be mentioned as recent ones. This work is also closely related to branching and Galton-Watson processes and to queueing theory, see [15], [16], [14] and related papers. See also [3] or [2, Figure 1] on the occurrence of random walks in number theory. Last but not least, it is worth mentioning that Vandermonde matrices occur broadly, from pure mathematics to many applied sciences, see [18] and related works.

Let
$$M:=\sup_{n\geqslant 1}\left(\sum_{i=1}^{n}\left(X_i-\kappa\right)\right)^{+},$$
where $x^{+}=\max\{0,x\},\,x\in\mathbb{R}$ is the positive part function, and the r.vs. $X_i$ and $\kappa\in\mathbb{N}$ are the same as in the model (1). Let us denote the local probabilities of the r.v. $M$ by $\pi_m:=\mathbb{P}(M=m),\,m\in\mathbb{N}_0$. Then, the ultimate time survival probability definition (2) implies that
$$\varphi(u+1)=\mathbb{P}(M\leqslant u)=\sum_{m=0}^{u}\pi_m,\;u\in\mathbb{N}_0.$$
In general, the r.v. $M$ can be extended, i.e. $\mathbb{P}(M=\infty)>0$; however, the condition $\mathbb{E}X<\kappa$ ensures $\mathbb{P}(M<\infty)=1$. This is true due to Lemma 1. The condition $\mathbb{E}X<\kappa$ is called the net profit condition, and it is crucial because survival is impossible, i.e. $\varphi(u)=0$ for all $u\in\mathbb{N}_0$, if $\mathbb{E}X\geqslant\kappa$, except for a few trivial cases such as $\mathbb{P}(X=\kappa)=1$, see [11, Theorem 9]. Intuitively, it is clear that long term survival in the model (1) is impossible if the threatening claim amount $X$ is, on average, greater than or equal to the collected premium $\kappa$ per unit of time.
For $s\in\mathbb{C}$, let us denote the generating function of $\varphi(1),\varphi(2),\ldots$ and the probability generating functions of the r.vs. $X$ and $M$ by
$$\Xi(s):=\sum_{u=0}^{\infty}\varphi(u+1)s^{u},\qquad G_X(s):=\mathbb{E}s^{X}=\sum_{i=0}^{\infty}x_i s^{i},\qquad G_M(s):=\mathbb{E}s^{M}=\sum_{m=0}^{\infty}\pi_m s^{m}.$$
Then, $\Xi(s)$ and $G_M(s)$, for $|s|<1$, satisfy the relation
$$(1-s)\,\Xi(s)=G_M(s).\qquad(7)$$
In many examples, the radius of convergence of $G_X(s)$ or $G_M(s)$ is larger than one. See [9, Lemma 8] for more properties of probability generating functions in $|s|\leqslant 1$.
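Relation (7) can be checked coefficientwise: the partial sums of $\pi_0,\pi_1,\ldots$ give the coefficients of $\Xi(s)$, and multiplying the truncated series by $1-s$ must recover the $\pi_m$ themselves. A small exact-arithmetic sketch with a toy (truncated geometric) law for $M$:

```python
from fractions import Fraction

def xi_coeffs(pi, N):
    """First N coefficients of Xi(s): phi(u+1) = pi_0 + ... + pi_u."""
    phi, acc = [], Fraction(0)
    for m in range(N):
        acc += pi[m]
        phi.append(acc)
    return phi

def one_minus_s_times(coeffs):
    """Coefficients of (1 - s) * sum_u coeffs[u] s^u, truncated to len(coeffs)."""
    return [coeffs[0]] + [coeffs[u] - coeffs[u - 1] for u in range(1, len(coeffs))]

# toy law for M: pi_m = (1/2)^(m+1), truncated at 10 terms (tail ignored)
pi = [Fraction(1, 2) ** (m + 1) for m in range(10)]
lhs = one_minus_s_times(xi_coeffs(pi, 10))
# (1 - s) Xi(s) = G_M(s): the coefficients recover pi term by term
assert lhs == pi
```

The identity is formal, so it holds coefficient by coefficient regardless of the truncation point.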

Main results
In this section, based on the previously introduced notation and the relation $(1-s)\,\Xi(s)=G_M(s)$ in (7), we formulate the main results of the work.
Theorem 1. Consider the GPDTRM defined in (1) and suppose that the net profit condition $\mathbb{E}X<\kappa$ holds. Then, the local probabilities of the random variables $M$ and $X$ satisfy the following two equalities:
$$G_M(s)\left(s^{\kappa}-G_X(s)\right)=\sum_{i=0}^{\kappa-1}\pi_i\sum_{j=0}^{\kappa-1-i}x_j\left(s^{\kappa}-s^{i+j}\right),\;|s|\leqslant 1,\qquad(8)$$
$$\kappa-\mathbb{E}X=\sum_{i=0}^{\kappa-1}\pi_i\sum_{j=0}^{\kappa-1-i}x_j\left(\kappa-i-j\right).\qquad(9)$$
We prove Theorem 1 in Section 5. Equality (8) implies the following relation among the local probabilities $\pi_0,\pi_1,\ldots$

Corollary 2. Let $\pi_i=\mathbb{P}(M=i),\,i\in\mathbb{N}_0$, and let $F_X(u)=\sum_{i=0}^{u}x_i,\,u\in\mathbb{N}_0$ be the distribution function of the r.v. $X$. Then, for $\kappa\in\mathbb{N}$, the corresponding equalities hold. We explain the implication of Corollary 2 in Section 5.
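As a quick sanity check of equality (8) (in the form reconstructed from the proof in Section 5), consider the simplest case $\kappa=1$ with Bernoulli claims: the step $X-1$ is never positive, so $M=0$ almost surely, $G_M(s)\equiv 1$, $\pi_0=1$, and (8) reduces to $s-G_X(s)=\pi_0 x_0(s-1)$. The following sketch verifies this identity at several rational points (the parameter value is illustrative):

```python
from fractions import Fraction

# kappa = 1, Bernoulli claims: X in {0, 1} with P(X = 1) = p < 1 = kappa.
# Then M = 0 a.s., so G_M(s) = 1, pi_0 = 1, and equality (8) reads
#   s - G_X(s) = pi_0 * x_0 * (s - 1).
p = Fraction(2, 5)
x0, x1 = 1 - p, p
G_X = lambda s: x0 + x1 * s
pi0 = Fraction(1)  # P(M = 0) = 1

for s in [Fraction(k, 7) for k in range(-7, 8)]:
    assert s - G_X(s) == pi0 * x0 * (s - 1)
```

Both sides are polynomials of degree one, so agreement at two points would already suffice; checking a grid of points simply makes the test independent of that observation.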
(iv) Suppose the root $\alpha=1$ of $s^{\kappa}=G_X(s)$ in $|s|\leqslant 1$ is of multiplicity $l\in\{2,3,\ldots,\kappa-1\}$, $\kappa\geqslant 3$. Then, according to equality (8) in Theorem 1 and (ii), the corresponding derivatives (13) vanish as well and, in order to avoid identical lines in the matrix $A$, we can set up the modified system (12) by replacing its lines (except the last one) by the corresponding derivatives (13). If $x_0>0$, such a modified main matrix $A$ remains nonsingular, see Lemma 8.
We further denote by $|A|$ the determinant of the matrix $A$ and by $M_{i,j}$, $i,j\in\{1,2,\ldots,\kappa\}$, $\kappa\in\mathbb{N}$, its minors, where the matrix $A$ is the main matrix in (12) or its modification in which the coefficients are replaced by derivatives as described in (iv).
The equality (11) and the considerations listed in (i)-(iv) allow us to formulate the following statement.
and the matrix $A$ is created as provided in (i)-(iv). Moreover, the initial values for the recurrence (3), including $\varphi(\kappa)$, are given explicitly. We prove Theorem 3 in Section 5.
We prove Theorem 4 in Section 5.

Lemmas
In this section we formulate and prove several auxiliary statements needed to derive the main results stated in Section 3.

Lemma 6. The random variable
$$M=\sup_{n\geqslant 1}\left(\sum_{i=1}^{n}\left(X_i-\kappa\right)\right)^{+},$$
where $x^{+}=\max\{0,x\}$ is the positive part of $x\in\mathbb{R}$, admits the following distributional property:
$$M\stackrel{d}{=}\left(M+X-\kappa\right)^{+},$$
where $X$ is independent of $M$.

Proof. The proof is straightforward according to the definition of $M$ and basic properties of the maximum. See also [10, Lemma 5.2], [9, Lemma 25] and [6, page 198].
Lemma 7. Let $\alpha_1,\ldots,\alpha_{\kappa-1}\neq 1$ be the roots of multiplicity one of $s^{\kappa}=G_X(s)$ in the region $|s|\leqslant 1$, and suppose that the local probability $x_0$ is positive. Then, the determinant $|A|$ of the main matrix in (12) is
$$|A|=x_0\prod_{j=1}^{\kappa-1}\left(\alpha_j-1\right)\prod_{1\leqslant i<j\leqslant \kappa-1}\left(\alpha_j-\alpha_i\right)\neq 0.$$
Proof. We first factor $x_0$ out of the last column. Then, multiplying the last column by $F_X(\kappa-1),F_X(\kappa-2),\ldots,F_X(1)$, respectively, and subtracting it from the first, the second, etc. columns, we obtain a simpler determinant.
Proceeding similarly with the penultimate column of the last determinant, and so on, and applying basic determinant properties, we obtain that $|A|$ equals $x_0\prod_{j=1}^{\kappa-1}(\alpha_j-1)$ multiplied by a determinant which is nothing but the well-known Vandermonde determinant, see for example [12, Section 6.1]. Thus,
$$|A|=x_0\prod_{j=1}^{\kappa-1}\left(\alpha_j-1\right)\prod_{1\leqslant i<j\leqslant\kappa-1}\left(\alpha_j-\alpha_i\right)\neq 0,$$
because the roots $\alpha_1,\alpha_2,\ldots,\alpha_{\kappa-1}$ are distinct and lie in the region $|s|\leqslant 1$, $s\neq 1$. Note that the empty products are equal to one by definition.
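The Vandermonde factorization used in the proof, $\det V=\prod_{i<j}(\alpha_j-\alpha_i)$, can be verified numerically. The sketch below uses real rational nodes for exact arithmetic (whereas the roots $\alpha_j$ in the lemma are generally complex) and compares an elimination-based determinant against the product formula:

```python
from fractions import Fraction
from itertools import combinations

def det(mat):
    """Determinant by exact Gaussian elimination over Fractions."""
    m = [row[:] for row in mat]
    n, sign, d = len(m), 1, Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            sign = -sign
        d *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    return sign * d

# Vandermonde matrix with rows (1, a, a^2, a^3) at four distinct nodes
nodes = [Fraction(v) for v in (-2, -1, 1, 3)]
V = [[a ** j for j in range(len(nodes))] for a in nodes]

prod = Fraction(1)
for (i, a), (j, b) in combinations(list(enumerate(nodes)), 2):
    prod *= (b - a)  # product formula: prod_{i<j} (a_j - a_i)

assert det(V) == prod  # both equal 240 for these nodes
```

The nonvanishing of $|A|$ in the lemma rests on exactly this observation: a Vandermonde determinant over distinct nodes is never zero.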
Proof. In short, the statement follows because differentiation is a linear operation. More precisely, if $\alpha_1$ is of multiplicity two, say, then there exists $\delta\in\mathbb{R}\setminus\{0\}$, sufficiently close to zero, such that the matrix with the replaced second line is non-singular, see the expression of the determinant in Lemma 7. Then, subtracting the second line from the first in (15), dividing the first line by $\delta$ afterwards and letting $\delta\to 0$, we get the desired line replacement by the derivative. The proof is analogous for higher derivatives and/or additional multiple roots.

Proofs of the main results
In this section we prove the statements formulated in Section 3. Let's start with the proof of Theorem 1.
Proof of Theorem 1. By Lemma 6 and the rule of total expectation,
$$G_M(s)=\mathbb{E}s^{\left(M+X-\kappa\right)^{+}}=s^{-\kappa}G_M(s)G_X(s)+\sum_{i=0}^{\kappa-1}\pi_i\sum_{j=0}^{\kappa-1-i}x_j\left(1-s^{i+j-\kappa}\right),$$
and multiplication of both sides by $s^{\kappa}$ gives equality (8). To prove the second equality (9) in Theorem 1, we take the derivative with respect to $s$ of both sides of the derived equality (8); the right-hand side becomes the sum of the terms $\pi_i\,x_j\left(\kappa s^{\kappa-1}-(i+j)s^{i+j-1}\right)=:S_3$, and letting $s\to 1$, the left-hand side tends to $G_M(1)\left(\kappa-G_X'(1)\right)=\kappa-\mathbb{E}X$, since the term containing $s^{\kappa}-G_X(s)$ vanishes at $s=1$.
Proof of Corollary 2. The $n$-th derivative of both sides of equality (8) at $s\to 0$ gives (11), and division by $1-s$ (see (ii) in Section 3) implies the stated equalities.

Proof of Theorem 4. Let us recall the matrix
Its determinant is given by Lemma 7. We now calculate the minors $M_{\kappa,1},M_{\kappa,2},\ldots,M_{\kappa,\kappa}$ of $A$. Following the calculation of $|A|$ in the proof of Lemma 7, we get the first minor. Note that $M_{\kappa,1}$ is defined for $\kappa\geqslant 1$ and $M_{1,1}=1$ by agreement. The next minor is obtained analogously. Similarly as before, $M_{\kappa,2}$ is defined for $\kappa\geqslant 2$ only, and $M_{2,2}=x_0+F_X(1)\,\alpha$, where $\alpha\in[-1,0)$ is the unique root of $s^2=G_X(s)$, see explanation (i) in Section 3 and [9, Section 4 and Corollary 15 therein]. Proceeding in the same way, we compute the remaining minors up to the last one. The statement on the expressions of $\pi_0,\pi_1,\ldots,\pi_{\kappa-1}$ follows by dividing the obtained minors, taken with proper signs, by the determinant $|A|$.

Particular examples
In this section we give several examples illustrating the theoretical statements obtained in Section 3. The required numerical computations are performed with Wolfram Mathematica [13].

Example 9. Suppose the random claim amount $X$ is Bernoulli distributed, i.e. $1-\mathbb{P}(X=0)=p=\mathbb{P}(X=1)$, $0<p<1$. We find the ultimate time survival probability generating function $\Xi(s)$ and calculate $\varphi(u),\,u\in\mathbb{N}_0$.
We start with an observation on the net profit condition. Then, according to Theorem 1 and description (i) in Section 3, when $1/3<p<1$, and by Corollary 5 with $\kappa=2$ and $x_0=p>0$, the corresponding expressions follow. For $\kappa=2$, $u=0$ and $1/3<p<1$, the recurrence (3) or Theorem 4 yields the desired values, and one may check the case $p=101/300$ directly.

Let $\kappa=2$ and assume that the net profit condition $\mathbb{E}X<2$ is satisfied. We provide the ultimate time survival probability $\varphi(u)$ formulas for all $u\in\mathbb{N}_0$.
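For Bernoulli claims with $\kappa=2$ (using the convention $\mathbb{P}(X=0)=1-p$ from Example 9), the auxiliary equation $s^2=G_X(s)$ can be factored by hand: $s^2-ps-(1-p)=(s-1)(s+1-p)$, so the roots are $s=1$ and $s=p-1\in[-1,0)$, the latter matching the root $\alpha$ appearing in the proof of Theorem 4. A quick exact check (the value of $p$ is taken from the example):

```python
from fractions import Fraction

# Bernoulli claims: G_X(s) = (1 - p) + p*s, so s^2 = G_X(s) becomes
#   s^2 - p*s - (1 - p) = 0, which factors as (s - 1)(s + 1 - p) = 0.
def roots_kappa2_bernoulli(p):
    return [Fraction(1), p - 1]

p = Fraction(101, 300)
r = roots_kappa2_bernoulli(p)
for s in r:
    assert s * s == (1 - p) + p * s  # both roots satisfy s^2 = G_X(s)
assert -1 <= r[1] < 0                # the non-trivial root lies in [-1, 0)
```

The exact factorization avoids the numerical root finding that a general claim distribution would require.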
Let us recall that the recurrence (3) and Corollary 5, for $x_0=0$ and $x_1>0$, imply a formula which echoes and widens the statement of Theorem 3 in [11], providing another method of calculating $\varphi(u)$, $u\geqslant 2$.
First, we observe that the net profit condition is satisfied: $\mathbb{E}X=199/101<3$. We now follow the statement of Theorem 1 and the comments surrounding it. Then, for $p=101/300$, the corresponding equation $s^{3}=G_X(s)$ is solved. The provided values of $\varphi(0),\varphi(1),\varphi(2)$ and $\varphi(3)$ coincide with the ones given in [11, page 14], where they are obtained approximately from certain recurrent sequences.
One may observe that the obtained result is expected, because $u+3n-\sum_{i=1}^{n}X_i>0$ for all $n\in\mathbb{N}$ except when $u=0$ and $X_i$ attains the value $3$.