Ergodic theorems for queuing systems with dependent inter-arrival times

We show that, under appropriate conditions, the waiting time in a queuing system converges to a stationary distribution as time tends to infinity, even in the case where inter-arrival times are dependent random variables. A convergence rate is given and a law of large numbers is established for functionals of the waiting time. These results provide tools for the statistical analysis of such systems, transcending the standard case where inter-arrival times are assumed independent.


Introduction
Let $\mathbb{R}_+ := \{x \in \mathbb{R} : x \geq 0\}$. We consider a strongly stationary sequence $(T_k, S_k)_{k \in \mathbb{N}}$ with values in $\mathbb{R}_+^2$. A single-server queuing model will be considered where customers are numbered by $k \in \mathbb{N}$. The time between the arrivals of customers $k+1$ and $k$ is described by the random variable $T_{k+1}$, for each $k \in \mathbb{N}$. The service time for customer $k$ is given by the random variable $S_k$, for $k \in \mathbb{N}$.

* Both authors thank the support of the "Lendület" grant LP 2015-6 of the Hungarian Academy of Sciences.
The waiting time $W_k$ of customer $k$ satisfies the Lindley recursion
$$W_{k+1} = \max(W_k + S_k - T_{k+1},\, 0), \quad k \in \mathbb{N},$$
where, for simplicity, we assume $W_0 := 0$ (we start with an empty queue).
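The Lindley recursion is straightforward to simulate. The following sketch is purely illustrative: the i.i.d. exponential service and inter-arrival laws are our own choices, not part of the model assumptions above.

```python
import random

def lindley_waiting_times(S, T):
    """W_0 = 0 and W_{k+1} = max(W_k + S_k - T_{k+1}, 0).

    S: service times S_0, ..., S_{n-1}
    T: inter-arrival times T_1, ..., T_n
    Returns the list [W_0, ..., W_n].
    """
    W = [0.0]
    for s, t in zip(S, T):
        W.append(max(W[-1] + s - t, 0.0))
    return W

# Illustration with hypothetical i.i.d. exponential times (mean service
# time 0.8 < mean inter-arrival time 1.0, so the system is stable).
random.seed(0)
n = 10
S = [random.expovariate(1 / 0.8) for _ in range(n)]
T = [random.expovariate(1.0) for _ in range(n)]
print(lindley_waiting_times(S, T))
```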
The ergodic theory of general state space Markov chains (see e.g. [8]) allows one to treat the case where $(T_k, S_k)_{k \in \mathbb{N}}$ are i.i.d. The independence assumption, however, seems too strong for applications, and we wish to provide theoretical foundations for the statistical analysis of queuing systems also in cases where one of the two sequences is merely stationary.
The mathematics for such a setting is an order of magnitude more difficult as $(W_k)_{k \in \mathbb{N}}$ fails to be a Markov process. We will rely on the recent advances made in [7] in the theory of Markov chains in random environments. The case where both sequences are only stationary cannot be treated within this framework and requires further study. More complex (e.g. multiserver) queuing systems could be analysed along similar lines, but we do not pursue such ramifications here.
As far as we know, ergodicity results in the general, stationary setting can only be found in [2] (in Russian); see also Example 14.1 on page 189 of [1]. Actually, $\mathrm{Law}(W_n)$ is known to converge to a limiting distribution under rather mild conditions. However, no convergence rate or law of large numbers is provided.
Also, the approach of [2] works only for unbounded $T_0$, while we are able to treat the bounded case as well.
Throughout this paper we will be working on a probability space $(\Omega, \mathcal{F}, P)$.
For a Polish space $\mathcal{X}$, we denote by $\mathcal{B}(\mathcal{X})$ its Borel sigma-algebra. We denote by $E[X]$ the expectation of a real-valued random variable $X$. For an $\mathcal{X}$-valued random variable $X$ we will denote by $\mathrm{Law}(X)$ its law on $\mathcal{B}(\mathcal{X})$. The set of probability measures on $\mathcal{B}(\mathcal{X})$ is denoted by $\mathcal{P}_1(\mathcal{X})$. The total variation metric on $\mathcal{P}_1(\mathcal{X})$ is defined by
$$d_{TV}(\mu_1, \mu_2) := |\mu_1 - \mu_2|(\mathcal{X}),$$
where $|\mu_1 - \mu_2|$ denotes the total variation of the signed measure $\mu_1 - \mu_2$. We do not indicate the dependence of the metric $d_{TV}$ on $\mathcal{X}$ since the latter will always be clear from the context.

We now present our standing assumptions. In a stable system, service times should be shorter on average than inter-arrival times. In our approach we also need the service time sequence to be independent of the inter-arrival times. So we formulate the following hypothesis.
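As an aside, for laws with countable support the total variation metric defined above reduces to a sum over the atoms; a minimal sketch (illustrative only, with probability mass functions given as dictionaries):

```python
def tv_distance(p, q):
    """Total variation |mu1 - mu2|(X) of two probability mass functions
    given as dicts mapping atoms to probabilities.  Convention of the
    text: total mass of the variation measure, so values lie in [0, 2]."""
    support = set(p) | set(q)
    return sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

print(tv_distance({0: 0.5, 1: 0.5}, {0: 0.5, 2: 0.5}))  # 1.0
```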
Remark 1.3. The notion above is inspired by the Gärtner-Ellis theorem, see [4], and it holds in a large class of models, well beyond the i.i.d. case. For instance, let $X_k = f(Z_k)$ for a measurable $f : \mathbb{R}^d \to \mathbb{R}$ satisfying a suitable growth condition, and let $(Z_k)_{k \in \mathbb{N}}$ be an $\mathbb{R}^d$-valued, sufficiently regular Markov chain started from its invariant distribution. Then (2) holds true for all $\lambda > 0$; see Theorem 3.1 of [6] for a precise formulation.
We mention another example: let
$$X_k := \sum_{j=-\infty}^{\infty} a_j \varepsilon_{k-j}, \quad k \in \mathbb{N},$$
where $\varepsilon_j$, $j \in \mathbb{Z}$, are independent and identically distributed $\mathbb{R}$-valued random variables with finite exponential moments of all orders and $\sum_{j=-\infty}^{\infty} |a_j| < \infty$. Then (2) is satisfied for this process by Theorem 2.1 of [3].
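A truncated version of such a two-sided moving average is easy to simulate; the sketch below uses Gaussian innovations and the summable coefficients $a_j = 2^{-|j|}$, both of which are our illustrative choices rather than assumptions of the text:

```python
import random

def linear_process(n, a, lag, seed=0):
    """Simulate X_k = sum_{|j| <= lag} a(j) * eps_{k-j}, k = 0..n-1,
    a truncation of the two-sided linear process, driven by i.i.d.
    standard Gaussian innovations (finite exponential moments of all
    orders)."""
    rng = random.Random(seed)
    eps = [rng.gauss(0.0, 1.0) for _ in range(n + 2 * lag)]
    # eps[i] represents eps_{i - lag}, hence eps_{k-j} = eps[k - j + lag].
    return [sum(a(j) * eps[k - j + lag] for j in range(-lag, lag + 1))
            for k in range(n)]

# Hypothetical coefficients a_j = 2^{-|j|}, truncated at |j| <= 20.
X = linear_process(1000, lambda j: 0.5 ** abs(j), lag=20)
```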
We now present a result on the ergodic behaviour of queuing systems with dependent service times which was obtained in Section 3 of [7].

Theorem 1.4. Let $(T_k)_{k \in \mathbb{N}}$ be an i.i.d. sequence and let $(S_k)_{k \in \mathbb{N}}$ be uniformly bounded and ergodic, satisfying a Gärtner-Ellis type condition. Let us assume that $P(T_0 > t) > 0$ for all $t > 0$. Then there exists a probability $\mu_*$ on $\mathcal{B}(\mathbb{R}_+)$ such that
$$d_{TV}(\mathrm{Law}(W_n), \mu_*) \leq c_1 e^{-c_2 n}, \quad n \in \mathbb{N},$$
for some $c_1, c_2 > 0$. Furthermore, for an arbitrary measurable and bounded $\Phi : \mathbb{R}_+ \to \mathbb{R}$,
$$\frac{1}{n} \sum_{k=1}^{n} \Phi(W_k) \to \int_{\mathbb{R}_+} \Phi(x)\, \mu_*(dx), \quad n \to \infty,$$
in probability.
In the present article we concentrate on the (arguably) more interesting case where service times are independent but inter-arrival times may well be dependent.

Theorem 1.5. Let $(S_k)_{k \in \mathbb{N}}$ be an i.i.d. sequence and let $(T_k)_{k \in \mathbb{N}}$ be bounded and ergodic, satisfying a Gärtner-Ellis type condition. Let us assume that $E[e^{\lambda_0 S_0}] < \infty$ for some $\lambda_0 > 0$ and $\mathrm{Law}(S_0)$ has a density $x \mapsto f(x)$ (w.r.t. the Lebesgue measure) which is bounded away from $0$ on compact subsets of $\mathbb{R}_+$. Then the conclusions of Theorem 1.4 hold.
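Conclusions of this type can be probed by Monte Carlo simulation. The sketch below is purely illustrative: it drives the Lindley recursion with i.i.d. exponential service times and with bounded, dependent inter-arrival times generated by a two-state Markov chain (all distributional choices are ours, not the paper's), and computes the ergodic average of the bounded functional Phi(w) = 1{w > 1}.

```python
import random

def simulate_ergodic_average(n, seed=0):
    """Run the Lindley recursion with i.i.d. exponential service times
    (mean 0.6) and bounded dependent inter-arrival times T_k in
    {0.9, 1.4} driven by a sticky two-state Markov chain; return the
    ergodic average of Phi(W) = 1{W > 1} over n steps."""
    rng = random.Random(seed)
    state = 0                      # state of the environment Markov chain
    w, hits = 0.0, 0
    for _ in range(n):
        s = rng.expovariate(1 / 0.6)   # service time S_k, mean 0.6
        if rng.random() < 0.2:         # switch environment w.p. 0.2
            state = 1 - state
        t = (0.9, 1.4)[state]          # bounded inter-arrival time
        w = max(w + s - t, 0.0)        # Lindley step
        hits += (w > 1.0)
    return hits / n

# Mean inter-arrival time is about 1.15 > 0.6, so the system is stable;
# the average should stabilise as n grows.
print(simulate_ergodic_average(100_000))
```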
Remark 1.6. The mathematical setting of Theorem 1.5 is significantly more involved than that of Theorem 1.4. In Theorem 1.4, one may profit from the fact that, freezing the values of the process $(S_k)_{k \in \mathbb{N}}$, the waiting time becomes an inhomogeneous Markov chain with a particular state (the point $0$) which is a reachable atom. In the proof of Theorem 1.5 one needs to guarantee ergodicity using a deeper coupling construction.
We can somewhat relax the boundedness condition on $T_0$ in Theorem 1.5 above at the price of more stringent assumptions on $S_0$. Namely, we assume an exponential-like tail for $T_0$ and for $S_0$ a very light tail, like that of the Gumbel distribution at $-\infty$. It will become clear from the proof that requiring a thinner tail for $T_0$ (e.g. Gaussian) would necessitate even more stringent tail assumptions for $S_0$, hence we do not strive for further generality here.

Theorem 1.7. Under these tail assumptions, there exists a probability $\mu_*$ on $\mathcal{B}(\mathbb{R}_+)$ such that
$$d_{TV}(\mathrm{Law}(W_n), \mu_*) \leq c_1 e^{-c_2 n^{c_3}}, \quad n \in \mathbb{N},$$
holds for some $c_1, c_2, c_3 > 0$. Furthermore, for an arbitrary measurable and bounded $\Phi : \mathbb{R}_+ \to \mathbb{R}$, the law of large numbers (7) holds in probability.

In Section 2 we recall the notion of Markov chains in random environments and certain results of [7]. Sections 3 and 4 contain the proofs of our two new theorems.

Markov chains in random environments
We interpret the process under study as a Markov chain in a random environment described by an environment process. Consistently with Definition 2.1, for $y \in \mathcal{Y}$ and $\mu \in \mathcal{P}_1(\mathcal{X})$, $\mu Q(y)$ will refer to the action of the kernel $Q(y, \cdot, \cdot)$ on $\mu$. First, a Foster-Lyapunov type drift condition is formulated.
Assumption 2.2. Let $V : \mathcal{X} \to \mathbb{R}_+$ be a measurable function. We consider measurable functions $\gamma, K : \mathcal{Y} \to \mathbb{R}_+$ with $K(\cdot) \geq 1$. We assume that, for all $y \in \mathcal{Y}$ and $x \in \mathcal{X}$,
$$[Q(y)V](x) \leq \gamma(y) V(x) + K(y).$$
Here $\gamma(y) \geq 1$ may well occur, but in the next assumption we require that the system dynamics, on long-time average, is contracting.

Assumption 2.3. We assume that
$$E[\ln \gamma(Y_0)] < 0.$$

We stipulate the existence of suitable "small sets", a familiar notion in Markov chain theory; see [8].
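To make the drift condition concrete: for a Lindley-type chain with exponential service times, a choice of the form V(x) = e^{lam*x} satisfies an inequality of this type, since e^{lam*max(z,0)} <= e^{lam*z} + 1. The following sketch checks the resulting bound by Monte Carlo; the Exp(1) service law and the value lam = 0.5 are our illustrative assumptions, not taken from the text.

```python
import math
import random

rng = random.Random(0)
lam = 0.5                        # hypothetical Lyapunov parameter, lam < 1
V = lambda x: math.exp(lam * x)  # candidate Lyapunov function V(x) = e^{lam x}

def drift_bound_check(x, y, n=50_000):
    """Monte Carlo check of a drift inequality of the form
    E[V(max(x + S_0 - y, 0))] <= gamma(y) V(x) + K(y)
    for S_0 ~ Exp(1), with gamma(y) = e^{-lam*y} E[e^{lam*S_0}] and
    K(y) = 1, which follows from e^{lam*max(z,0)} <= e^{lam*z} + 1."""
    mgf = 1.0 / (1.0 - lam)      # E[e^{lam*S_0}] for S_0 ~ Exp(1)
    lhs = sum(V(max(x + rng.expovariate(1.0) - y, 0.0))
              for _ in range(n)) / n
    rhs = math.exp(-lam * y) * mgf * V(x) + 1.0
    return lhs <= rhs

print(drift_bound_check(0.0, 1.0))
```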
We also need to control the probability of $\gamma(Y_0)$ approaching $1$. For the purposes of the present paper it is enough to require a simplification of Assumption 2.2 of [7].
We now recall results of [7]: under the assumptions presented above, the law of $X_n$ converges to a limiting law as $n \to \infty$; moreover, bounded functionals of the process show ergodic behaviour provided that the environment process is ergodic.

Proof in the unbounded case
Throughout this section the assumptions of Theorem 1.7 are in force. We will use the results of the previous section in the setting $\mathcal{X} = \mathcal{Y} = \mathbb{R}_+$; $Y_k := T_k$, $k \in \mathbb{N}$. We can easily extend the process to the negative time axis in such a way that $(Y_k)_{k \in \mathbb{Z}}$ is stationary and ergodic. Define the parametrized kernel as follows:
$$Q(y, x, A) := P(\max(x + S_0 - y, 0) \in A), \quad y \in \mathcal{Y},\ x \in \mathcal{X},\ A \in \mathcal{B}(\mathcal{X}).$$
We now turn to the verification of Assumptions 2.2 and 2.3.
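Sampling from this parametrized kernel amounts to a single Lindley step with the environment value frozen; a minimal sketch, in which the Exp(1) service law is a stand-in assumption:

```python
import random

rng = random.Random(42)

def kernel_step(x, y):
    """One transition of the waiting-time chain from state x when the
    frozen environment value (inter-arrival time) is y: draw a fresh
    i.i.d. service time S_0 and apply the Lindley map.  The Exp(1) law
    of S_0 is an illustrative choice."""
    s = rng.expovariate(1.0)
    return max(x + s - y, 0.0)

# A trajectory of the chain along a frozen environment path y_1, y_2, ...
x = 0.0
for y in [1.0, 1.2, 0.8, 1.1]:
    x = kernel_step(x, y)
```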
Choosing the parameter in the above estimate so small that the resulting factor stays below $1$, we get that Assumption 2.5 holds for small enough parameter values. The claimed convergence rate also follows from (12) and (6).

Proof in the bounded case
Let the assumptions of Theorem 1.5 be in force. Notice that Lemma 3.1 applies verbatim in this case, too.