1 Introduction

An inhomogeneous gamma process (IGP) was defined by Berman [5] in the following manner. Consider a Poisson process with intensity function \(\lambda (t)\). Suppose that an event occurs at the origin, and that thereafter only every \(\kappa\)th event of the Poisson process is observed. Then, if \(T_{1},\ldots ,T_{n}\) are the times of the first n events observed after the origin, their joint density is the following

$$\begin{aligned} f_{n}(t_{1},\ldots ,t_{n}) =\Big \{ \prod _{i=1}^{n}\lambda (t_{i})[\varLambda (t_{i})-\varLambda (t_{i-1})]^{\kappa -1} \Big \}\exp [-\varLambda (t_{n})]/[\varGamma (\kappa )]^{n}, \end{aligned}$$
(1)

where

$$\begin{aligned} \varLambda (t)=\int _{0}^{t}\lambda (u)\mathrm{d}u \end{aligned}$$

and \(t_{0}=0.\) If \(\kappa\) is any positive number, not necessarily an integer, then (1) is still a joint density function. An interpretation of a real positive value of \(\kappa\) can be given based on the form of the conditional intensity function

$$\begin{aligned} \gamma (t)=\lambda (t)z( \varLambda (t)-\varLambda (T_{N(t^{-})})), \end{aligned}$$
(2)

where \(\{N(t),t\ge 0\}\) is the corresponding counting process and z is the hazard function of the gamma distribution \({{\mathcal {G}}}(\kappa ,1)\) with unit scale parameter and shape parameter \(\kappa .\) For \(\kappa <1,\) the hazard function z(t) of the \({{\mathcal {G}}}(\kappa ,1)\) distribution decreases to 1 as t tends to infinity; for \(\kappa >1,\) it increases to 1. Therefore, if the events are thought of as shocks, a value of \(\kappa > 1\) indicates that the system is in better condition just after a repair than just before a failure, and the larger \(\kappa\) is, the larger the improvement. A value of \(\kappa < 1\) indicates that the system is in worse condition just after a repair than just before a failure. When \(\kappa = 1,\) the IGP reduces to the nonhomogeneous Poisson process. From this interpretation, one can see that the IGP is an important process, especially in cases when the assumption of minimal repair, which characterizes inhomogeneous Poisson process models, is violated. Point and interval estimation of the parameter \(\kappa\) is thus an important practical task because it allows us to detect whether a system is in better or worse condition after a repair than just before a failure.
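The monotonicity of z described above is easy to verify numerically. The following Python sketch (the function names are ours, not from the paper) evaluates the hazard \(z(t)=f(t)/S(t)\) of the \({{\mathcal {G}}}(\kappa ,1)\) distribution, approximating the survival function by midpoint integration of the density:

```python
import math

def gamma_pdf(t, kappa):
    # density of the Gamma(kappa, 1) distribution
    return t ** (kappa - 1) * math.exp(-t) / math.gamma(kappa)

def gamma_hazard(t, kappa, upper=60.0, n=20000):
    # hazard z(t) = f(t) / S(t); the survival function S(t) is
    # approximated by midpoint integration of the density on [t, upper]
    h = (upper - t) / n
    surv = sum(gamma_pdf(t + (i + 0.5) * h, kappa) for i in range(n)) * h
    return gamma_pdf(t, kappa) / surv
```

For \(\kappa =0.5\) one obtains approximately \(z(0.5)\approx 1.53\) and \(z(5)\approx 1.09\) (decreasing towards 1), while for \(\kappa =2\) one obtains approximately \(z(0.5)\approx 0.33\) and \(z(5)\approx 0.83\) (increasing towards 1).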

A point process \(\{T_{i},i=1,2,\ldots ,n\}\) with the joint density (1) for all positive integers n and all real positive \(\kappa\) is called the IGP with rate function \(\lambda (t)\) and shape parameter \(\kappa .\)

An alternative method of deriving the IGP is the following. Suppose that the random variables

$$\begin{aligned} X_{i}:=\varLambda (T_{i})-\varLambda (T_{i-1}), \end{aligned}$$

\(i=1,\ldots ,n,\) are independently and identically distributed according to the gamma \({{\mathcal {G}}}(\kappa ,1)\) distribution. It then follows that (1) is the joint density of \(T_{1},\ldots ,T_{n}.\)

The IGP is a compromise between the renewal process (RP) and the nonhomogeneous Poisson process (NHPP), since its failure probability at a given time t depends both on the age t of the system and on the distance of t from the last failure time. It is therefore quite a realistic model in many practical situations. The IGP for which \(\lim _{t\rightarrow \infty }\varLambda (t)=\infty\) can be regarded as a special case of the trend renewal process (TRP), first introduced and investigated by Lindqvist [18] and by Lindqvist et al. [20] (see also [11, 19, 21, 22]). The class of TRP's is a rich family of processes that has been applied in reliability [21], finance [34, 35], medicine [25], hydrology [9], software engineering [10, 29], and the forecasting of volcanic eruptions [4]. The IGP, as a special case of the TRP, can therefore be a relevant model of recurrent events in the above-mentioned fields.

In this paper, we consider the maximum likelihood (ML) estimation of the parameters of the IGP with the log-linear intensity function

$$\begin{aligned} \lambda (t)=\varrho \exp (\beta t), \end{aligned}$$
(3)

where \(\varrho >0\) and \(\beta >0.\) For \(\beta >0\), the times \(W_{i}:=T_{i}-T_{i-1},\) \(i=1,\ldots ,n,\) where \(T_{0}=0,\) between events tend to get smaller, and the larger \(\beta\) is, the stronger this trend will be. To give an interpretation of the parameter \(\varrho\), let us notice that the intensity function (3) can be written in the following form

$$\begin{aligned} \lambda (t)=\exp (\beta t+\ln \varrho )=\exp \{\beta [t+(\ln \varrho )/\beta ]\}. \end{aligned}$$

When \(\varrho <1\) (and \(\beta >0\)), the summand \((\ln \varrho )/\beta\) is less than 0, so the intensity \(\lambda\) increases slowly in the initial phase (for \(t\in (0,-(\ln \varrho )/\beta )\)). When \(\varrho >1,\) the summand \((\ln \varrho )/\beta\) is greater than 0, so the intensity function \(\lambda\) increases very fast from the beginning. Therefore, the summand \((\ln \varrho )/\beta\) can be viewed as a time-translation parameter: for \(\varrho <1\) we have a translation to the left, and for \(\varrho >1\) a translation to the right.
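This time-translation reading can be checked directly. In the sketch below, the values \(\varrho =0.5\) and \(\beta =2\) are our illustrative choices; the check confirms that \(\lambda (t)=\varrho \exp (\beta t)\) coincides with the unit-coefficient exponential intensity shifted in time by \((\ln \varrho )/\beta\):

```python
import math

rho, beta = 0.5, 2.0              # illustrative values with rho < 1, beta > 0
shift = math.log(rho) / beta      # time-translation summand, negative here

def lam(t):
    # log-linear intensity lambda(t) = rho * exp(beta * t)
    return rho * math.exp(beta * t)

# lam(t) equals exp(beta * (t + shift)): the same exponential, shifted in time
for t in [0.0, 0.5, 1.0]:
    assert abs(lam(t) - math.exp(beta * (t + shift))) < 1e-12
```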

The IGP with the rate function (3) will be denoted by IGPL\((\varrho ,\beta ,\kappa ).\) Statistical inference for the IGP was considered by Berman [5] and, for the modulated Poisson process (a special case of the IGP), by Cox [6]. Both papers seriously addressed only questions of hypothesis testing (via the likelihood ratio test) and did not satisfactorily solve the problem of parameter estimation. Inferential and testing procedures for the log-linear nonhomogeneous Poisson process (a special case of the IGP considered in this paper) can be found in Ascher and Feingold [1], Lewis [17], MacLean [23], Cox and Lewis [7], Lawless [16] and Kutoyants [14, 15]. In the paper of Bandyopadhyay and Sen [3], the large-sample properties of the ML estimators of the parameters of the IGP with power-law intensity function are studied.

The article is organized as follows. In Sect. 2, the log-likelihood equations for the IGPL\((\varrho ,\beta ,\kappa )\) are derived. In Sect. 3, asymptotic properties of the ML estimators of the unknown parameters are given. As in the case of the IGP with power-law intensity, considered by Bandyopadhyay and Sen [3], in the IGP with log-linear intensity the Hessian matrix of the log-likelihood function converges in probability to a singular matrix. Therefore, to prove the asymptotic normality of the ML estimators in the model considered, we use a method analogous to that of Bandyopadhyay and Sen [3]. In Sect. 4, we present the results of a simulation study concerning the behaviour of the ML estimators of the model parameters in finite samples. We also illustrate the differences in behaviour of the ML estimator of the parameter \(\varrho\) compared to the ML estimators of \(\beta\) and \(\kappa .\) In the real data analysis contained in Sect. 5, the asymptotic distribution of the ML estimators derived in Sect. 3 is applied to obtain realizations of pointwise asymptotic confidence intervals for the unknown parameters of the IGPL model. Section 6 contains conclusions and some open problems. Proofs of all theorems formulated in this paper are given in Sect. 7.

2 The ML Estimation in the IGPL Model

Let us notice that for the IGPL\((\varrho ,\beta ,\kappa )\)

$$\begin{aligned} \varLambda (t)= \left\{ \begin{array}{ll} \frac{\varrho }{\beta }\Big [\exp (\beta t)-1\Big ]&{} \mathrm{for}\ \ \beta \ne 0,\\ \varrho t &{} \mathrm{for}\ \ \beta =0, \end{array} \right. \end{aligned}$$

and

$$\begin{aligned} \lim _{t\rightarrow \infty }\varLambda (t)= \left\{ \begin{array}{ll} -\frac{\varrho }{\beta }&{} \mathrm{for}\ \ \beta <0,\\ \infty &{} \mathrm{for} \ \ \beta \ge 0. \end{array} \right. \end{aligned}$$

Therefore, in contrast to the IGP with power-law intensity considered by Bandyopadhyay and Sen [3], the IGPL can be used to model reliability growth with a bounded (unknown) expected number of failures. For example, in the case \(\beta <0\) and \(\kappa =1\), the IGPL\((\varrho ,\beta ,\kappa )\) is known as the Goel–Okumoto software reliability model (see [8]). Maximum likelihood estimation for the class of parametric nonhomogeneous Poisson process (NHPP) software reliability models with bounded mean value functions, which contains the Goel–Okumoto model as a special case, was considered by Zhao and Xie [33]. They showed that the ML estimators need not be consistent or asymptotically normal. They also derived the asymptotic distribution for a specific NHPP model called the k-stage Erlangian NHPP software reliability model (see [13]); for \(k=1\) this model is the IGPL\((\varrho ,\beta ,1)\) with \(\beta <0\). Nayak et al. [24] extended the inconsistency results of Zhao and Xie [33] to all estimators of the unknown number of failures (not just the MLE) and to all NHPP models with bounded mean value functions.

From the above-mentioned results, one can see that the properties of the ML estimators of the IGPL parameters can depend on the assumed model (with decreasing or increasing rate function) and should be considered separately. We will consider the IGPL\((\varrho ,\beta ,\kappa )\) for which \(\varrho>0,\beta>0,\kappa >0\). We suppose that the IGPL\((\varrho ,\beta ,\kappa )\) is observed until the nth event (failure) occurs, and the values \(t_{1},\ldots ,t_{n}\) of the jump times \(T_{1},\ldots ,T_{n}\) are recorded. In other words, we consider the so-called failure truncation (or inverse sequential) procedure. It should be noted that the failure truncation procedure cannot be applied to the IGPL\((\varrho ,\beta ,\kappa )\) with \(\beta <0.\) Denote \({\mathbf {t}}=(t_{1},\ldots ,t_{n})\). The likelihood function of the IGPL\((\varrho ,\beta ,\kappa ),\) observed until the nth failure occurs, is

$$\begin{aligned} \mathcal {L}_{n}(\varrho ,\beta ,\kappa ;{\mathbf {t}}) =&\left[ \frac{\varrho ^{\kappa }}{\varGamma (\kappa )}\right] ^{n} \exp \Big (\beta \sum _{i=1}^{n}t_{i}\Big )\exp \Big [-\varrho \int _{0}^{t_{n}} \exp (\beta x)\mathrm{d}x\Big ] \\&\cdot \prod _{i=1}^{n}\left[ \int _{t_{i-1}}^{t_{i}}\exp (\beta x)\mathrm{d}x\right] ^{\kappa -1}\\ =&\left[ {\varrho ^{\kappa }\over \varGamma (\kappa )\beta ^{\kappa -1}}\right] ^{n} \exp \Big [\beta \sum _{i=1}^{n}t_{i}-{\varrho \over \beta } \left( \exp (\beta t_{n})-1\right) \Big ]\\&\cdot \prod _{i=1}^{n}\left[ \exp (\beta t_{i})-\exp (\beta t_{i-1})\right] ^{\kappa -1}. \end{aligned}$$

The log-likelihood function of the IGPL\((\varrho ,\beta ,\kappa )\) is of the following form

$$\begin{aligned} \ell _{n}(\varrho ,\beta ,\kappa ;{\mathbf {t}}) =&n\log \left[ {\varrho ^{\kappa }\over \varGamma (\kappa )\beta ^{\kappa -1}}\right] +\beta S_{n}({\mathbf {t}})-{\varrho \over \beta }\left[ \exp (\beta t_{n})-1\right] \\&+(\kappa -1)V_{n}(\beta ;{\mathbf {t}}), \end{aligned}$$

where

$$\begin{aligned} S_{n}({\mathbf {t}}) =&\sum _{i=1}^{n}t_{i}, \end{aligned}$$
(4)
$$\begin{aligned} V_{n}(\beta ;{\mathbf {t}}) =&\sum _{i=1}^{n}\log \left[ \exp (\beta t_{i})-\exp (\beta t_{i-1})\right] . \end{aligned}$$
(5)

Therefore, the possible MLE's of the IGPL\((\varrho ,\beta ,\kappa )\) parameters are solutions to the following system of log-likelihood equations

$$\begin{aligned} {\partial \ell _{n}(\varrho ,\beta ,\kappa ;{\mathbf {t}})\over \partial \varrho }= & {} {n\kappa \over \varrho }- {1\over \beta }\left[ \exp (\beta t_{n})-1\right] =0,\nonumber \\ {\partial \ell _{n}(\varrho ,\beta ,\kappa ;{\mathbf {t}})\over \partial \beta }= & {} -{n(\kappa -1)\over \beta } +{\varrho \over \beta ^{2}}\big [(1-\beta t_{n})\exp (\beta t_{n})-1\big ] \nonumber \\&+S_{n}({\mathbf {t}})+(\kappa -1)W_{n}(\beta ;{\mathbf {t}})=0, \end{aligned}$$
(6)
$$\begin{aligned} {\partial \ell _{n}(\varrho ,\beta ,\kappa ;{\mathbf {t}})\over \partial \kappa }= & {} n\log \varrho -n\varPsi (\kappa )-n\log \beta +V_{n}(\beta ;{\mathbf {t}})=0, \end{aligned}$$
(7)

where

$$\begin{aligned} W_{n}(\beta ;{\mathbf {t}}) =&\sum _{i=1}^{n}\frac{t_{i}\exp (\beta t_{i}) -t_{i-1}\exp (\beta t_{i-1})}{\exp (\beta t_{i})-\exp (\beta t_{i-1})}, \end{aligned}$$
(8)

and \(\varPsi (\kappa )\) denotes the digamma function.
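The log-likelihood \(\ell _n\) and the score functions (6)–(7) translate directly into code. The sketch below (function names are ours) implements both; the analytic derivatives can then be cross-checked against numerical differentiation of \(\ell _n\):

```python
import math

def digamma(x):
    # digamma function Psi via recurrence and an asymptotic expansion
    s = 0.0
    while x < 10.0:
        s -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return s + math.log(x) - 0.5 / x - inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252))

def loglik(rho, beta, kappa, t):
    # log-likelihood of the IGPL observed until the n-th failure
    n, tn = len(t), t[-1]
    S = sum(t)
    V = sum(math.log(math.exp(beta * ti) - math.exp(beta * tj))
            for tj, ti in zip([0.0] + t[:-1], t))
    return (n * (kappa * math.log(rho) - math.lgamma(kappa) - (kappa - 1) * math.log(beta))
            + beta * S - rho / beta * (math.exp(beta * tn) - 1.0) + (kappa - 1) * V)

def score(rho, beta, kappa, t):
    # the three partial derivatives of the log-likelihood, as in (6)-(7)
    n, tn = len(t), t[-1]
    S = sum(t)
    pairs = list(zip([0.0] + t[:-1], t))
    V = sum(math.log(math.exp(beta * ti) - math.exp(beta * tj)) for tj, ti in pairs)
    W = sum((ti * math.exp(beta * ti) - tj * math.exp(beta * tj))
            / (math.exp(beta * ti) - math.exp(beta * tj)) for tj, ti in pairs)
    d_rho = n * kappa / rho - (math.exp(beta * tn) - 1.0) / beta
    d_beta = (-n * (kappa - 1) / beta
              + rho / beta ** 2 * ((1.0 - beta * tn) * math.exp(beta * tn) - 1.0)
              + S + (kappa - 1) * W)
    d_kappa = n * math.log(rho) - n * digamma(kappa) - n * math.log(beta) + V
    return d_rho, d_beta, d_kappa
```

A central-difference check of `score` against `loglik` is a quick guard against transcription errors in (6)–(7).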

Remark 1

The system of likelihood equations given above does not always have a solution \(({{\hat{\varrho }}},{{\hat{\beta }}},{\hat{\kappa }})\in (0,\infty )^{3}\) (see [12]). Moreover, for some realizations of the IGPL, it has more than one solution.

3 Asymptotic Properties of ML Estimators

From now on, we denote the vector of process parameters by \(\vartheta = (\varrho , \beta , \kappa )'\), and \(\vartheta _0 = (\varrho _0, \beta _0, \kappa _0)'\) denotes the true parameter values. We will use the standard symbols \(o_P(\cdot )\) and \(O_P(\cdot )\) for convergence to zero and boundedness in probability, respectively. All limits in this section are taken as \(n \rightarrow \infty\) unless stated otherwise.

Denote \(A_{n}(\vartheta )=-\partial ^{2}\ell _{n}(\vartheta , {\mathbf {T}})/\partial \vartheta \partial \vartheta '.\) Then, \(A_{n}(\vartheta )=(a_{ij}(\vartheta )),\ i,j=1,2,3,\) where

$$\begin{aligned} a_{11}(\vartheta )&= \displaystyle {-\frac{\partial ^2 \ell _{n}(\vartheta ,{\mathbf {T}})}{\partial \varrho ^2}} = \displaystyle {\frac{n\kappa }{\varrho ^2}}, \\ a_{12}(\vartheta )&= \displaystyle {-\frac{\partial ^2 \ell _{n}(\vartheta ,{\mathbf {T}})}{\partial \varrho \partial \beta }} = \displaystyle {\frac{1}{\beta }T_n\exp (\beta T_n) - \frac{1}{\beta ^2}[\exp (\beta T_n) - 1]} = a_{21}(\vartheta ), \\ a_{13}(\vartheta )&= \displaystyle {-\frac{\partial ^2 \ell _{n}(\vartheta ,{\mathbf {T}})}{\partial \varrho \partial \kappa }} = -\displaystyle {\frac{n}{\varrho }} = a_{31}(\vartheta ), \\ a_{22}(\vartheta )&= \displaystyle {-\frac{\partial ^2 \ell _{n}(\vartheta ,{\mathbf {T}})}{\partial \beta ^2}} = -\displaystyle {\frac{n(\kappa - 1)}{\beta ^2}} + \displaystyle {\frac{2\varrho }{\beta ^3}[\exp (\beta T_n) - 1]} -\displaystyle {\frac{2\varrho }{\beta ^2}T_n\exp (\beta T_n)} \\&\quad + \displaystyle {\frac{\varrho }{\beta }T_n^2\exp (\beta T_n)} + (\kappa - 1)\displaystyle {\sum _{i=1}^n \frac{\exp [\beta (T_{i}+T_{i-1})](T_i - T_{i-1})^2}{[\exp (\beta T_i) - \exp (\beta T_{i-1})]^2}}, \\ a_{23}(\vartheta )&= \displaystyle {-\frac{\partial ^2 \ell _{n}(\vartheta ,{\mathbf {T}})}{\partial \beta \partial \kappa }} = \displaystyle {\frac{n}{\beta }} - \sum _{i=1}^n \displaystyle {\frac{T_i\exp (\beta T_i) - T_{i-1}\exp (\beta T_{i-1})}{\exp (\beta T_i) - \exp (\beta T_{i-1})}} = a_{32}(\vartheta ), \\ a_{33}(\vartheta )&= \displaystyle {-\frac{\partial ^2 \ell _{n}(\vartheta ,{\mathbf {T}})}{\partial \kappa ^2}} = n\psi '(\kappa ). \end{aligned}$$

Now, define the scaled matrix

$$\begin{aligned} C_{n}(\vartheta ) = (c_{ij}(\vartheta )) := n^{-1} \begin{bmatrix} a_{11}(\vartheta ) &{} \displaystyle {\frac{a_{12}(\vartheta )}{\log n}} &{} a_{13}(\vartheta ) \\ \displaystyle {\frac{a_{21}(\vartheta )}{\log n}} &{} \displaystyle {\frac{a_{22}(\vartheta )}{\log ^2 n}} &{} \displaystyle {\frac{a_{23}(\vartheta )}{\log n}} \\ a_{31}(\vartheta ) &{} \displaystyle {\frac{a_{32}(\vartheta )}{\log n}} &{} a_{33}(\vartheta ) \end{bmatrix}, \end{aligned}$$

obtained from the matrix \(A_{n}(\vartheta ).\)

Theorem 1

The matrix \(C_{n}(\vartheta _0)\) converges in probability to

$$\begin{aligned} \varSigma _C = \begin{bmatrix} \displaystyle {\frac{\kappa _0}{\varrho _0^2}} &{} \displaystyle {\frac{\kappa _0}{\beta _0\varrho _0}} &{} -\displaystyle {\frac{1}{\varrho _0}} \\ \displaystyle {\frac{\kappa _0}{\beta _0\varrho _0}} &{} \displaystyle {\frac{\kappa _0}{\beta _0^2}} &{} -\displaystyle {\frac{1}{\beta _0}} \\ -\displaystyle {\frac{1}{\varrho _0}} &{} -\displaystyle {\frac{1}{\beta _0}} &{} \psi '(\kappa _0) \end{bmatrix}. \end{aligned}$$

Note that \(\varSigma _C\) is a singular matrix with rank 2. Therefore, we cannot use standard methods (see, for example, [27, 31, 32]) to prove the asymptotic normality of the ML estimators in the model considered. We proceed by reducing the problem to two dimensions and appealing to the distributional properties of the IGPL. The parameter \(\vartheta\) is partitioned as \(\vartheta ' = (\varrho , \beta , \kappa )=(\varrho , \theta ').\) Substituting \(\varrho =n\kappa \beta [\exp (\beta t_{n})-1]^{-1}\) into (6) and (7), the score functions \(\displaystyle {\frac{\partial \ell _{n}(\varrho ,\beta ,\kappa ;{\mathbf {t}})}{\partial \beta }}\) and \(\displaystyle {\frac{\partial \ell _{n}(\varrho ,\beta ,\kappa ;{\mathbf {t}})}{\partial \kappa }}\) reduce to \(\ell _{n}^*(\theta ) = (\ell _{1n}^*(\theta ), \ell _{2n}^*(\theta ))'\), where

$$\begin{aligned} \ell _{1n}^*(\theta )&= \frac{n}{\beta } + \sum _{i=1}^{n}t_{i} -\displaystyle {\frac{n\kappa t_n\exp (\beta t_n)}{\exp (\beta t_n) - 1}} \\&\quad +(\kappa - 1)\displaystyle {\sum _{i=1}^n \frac{t_i\exp (\beta t_i) - t_{i-1}\exp (\beta t_{i-1})}{\exp (\beta t_i) - \exp (\beta t_{i-1})}}, \\ \ell _{2n}^*(\theta )&= n\log \displaystyle {\frac{n\kappa }{\exp (\beta t_n) - 1}} - n\psi (\kappa ) + \sum _{i=1}^n \log [\exp (\beta t_i) - \exp (\beta t_{i-1})]. \end{aligned}$$

Denote

$$\begin{aligned} M_{n}(\theta _0)= \{ \theta =(\beta ,\kappa )': \beta = \beta _0 + \tau _1 n^{-\delta }, \quad \kappa = \kappa _0 + \tau _2 n^{-\delta }, ||\tau || \le h \}, \end{aligned}$$
(9)

where \(\delta\) and h are fixed numbers, \(0<\delta <\frac{1}{2}\), \(0<h<\infty .\)

Theorem 2

With probability tending to 1 as \(n \rightarrow \infty ,\) there exists a sequence of roots \(\hat{\theta }_{n} = (\hat{\beta }_{n}, \hat{\kappa }_{n})\in M_{n}(\theta _{0})\) of the equations \(\ell _{1n}^*(\theta )=0\) and \(\ell _{2n}^*(\theta )=0\).

Denote \({\mathbf {Z}}_{n} = (Z_{1n}, Z_{2n}, Z_{3n})'\), where

$$\begin{aligned} Z_{1n}= & {} n^{\frac{1}{2}}(\log n)^{-1}(\hat{\varrho }_{n}-\varrho _{0}),\nonumber \\ Z_{2n}= & {} n^{\frac{1}{2}}(\hat{\beta }_{n}-\beta _{0}), \end{aligned}$$
(10)
$$\begin{aligned} Z_{3n}= & {} n^{\frac{1}{2}}(\hat{\kappa }_{n}-\kappa _{0}) \end{aligned}$$
(11)

are the centered and scaled versions of \(\hat{\varrho }_{n},\) \(\hat{\beta }_{n},\) \(\hat{\kappa }_{n},\) where

$$\begin{aligned} \hat{\varrho }_{n}=n\hat{\kappa }_{n}\hat{\beta }_{n}[\exp (\hat{\beta }_{n} t_{n})-1]^{-1}, \end{aligned}$$
(12)

and \(\hat{\beta }_{n}, \hat{\kappa }_{n}\) are given in Theorem 2.

Theorem 3

Vector \({{\mathbf {Z}}}_{n}\) is asymptotically (singular) normal with mean vector zero and covariance matrix

$$\begin{aligned} \varSigma _{{\mathbf {Z}}} = \begin{bmatrix} \displaystyle {\frac{\varrho _{0}^2}{\kappa _{0}}} &{} \displaystyle {\frac{\varrho _{0}\beta _{0}}{\kappa _{0}}} &{} 0 \\ \displaystyle {\frac{\varrho _{0}\beta _{0}}{\kappa _{0}}} &{} \displaystyle {\frac{\beta _{0}^2}{\kappa _{0}}} &{} 0 \\ 0 &{} 0 &{} \displaystyle {\frac{\kappa _{0}}{\kappa _{0}\psi '(\kappa _{0}) - 1}} \end{bmatrix}. \end{aligned}$$

Corollary 1

Theorem 3 can be applied to construct pointwise asymptotic confidence intervals for the parameters of the IGPL.
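One possible recipe is sketched below (our code; it assumes that plugging the ML estimates into \(\varSigma _{\mathbf {Z}}\) in place of the true parameters is legitimate, and it keeps the \(\log n\) scaling of \(\hat{\varrho }_{n}\) from (10)):

```python
import math

def trigamma(x):
    # trigamma function Psi'(x) via recurrence and an asymptotic expansion
    s = 0.0
    while x < 10.0:
        s += 1.0 / (x * x)
        x += 1.0
    return s + 1.0 / x + 1.0 / (2 * x * x) + 1.0 / (6 * x ** 3) - 1.0 / (30 * x ** 5)

def asymptotic_ci(rho_hat, beta_hat, kappa_hat, n, z=1.96):
    # standard errors read off the covariance matrix Sigma_Z of Theorem 3,
    # with the true parameters replaced by their estimates; note the extra
    # log(n) factor in the scaling of rho_hat
    se_rho = (math.log(n) / math.sqrt(n)) * math.sqrt(rho_hat ** 2 / kappa_hat)
    se_beta = math.sqrt(beta_hat ** 2 / kappa_hat) / math.sqrt(n)
    se_kappa = math.sqrt(kappa_hat / (kappa_hat * trigamma(kappa_hat) - 1.0)) / math.sqrt(n)
    return ((rho_hat - z * se_rho, rho_hat + z * se_rho),
            (beta_hat - z * se_beta, beta_hat + z * se_beta),
            (kappa_hat - z * se_kappa, kappa_hat + z * se_kappa))
```

For illustrative inputs such as `asymptotic_ci(1.5, 0.8, 1.2, 100)`, the interval for \(\varrho\) is the widest of the three, reflecting the slower \(n^{1/2}/\log n\) rate of \(\hat{\varrho }_{n}\).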

Remark 2

As in the case of the MPLP considered by Bandyopadhyay and Sen [3], the asymptotic result of Theorem 3 provides some curious insights into the behaviour of the MLE's of the IGPL parameters. Apart from the singularity and the non-uniform scalings of the MLE's, we also have that the estimator \({{\hat{\kappa }}}\) is asymptotically independent of the estimators \({{\hat{\varrho }}}\) and \({{\hat{\beta }}}.\)

Remark 3

The asymptotic result for the ML estimators of the parameters of the nonhomogeneous Poisson process with log-linear intensity, namely the process IGPL\((\varrho ,\beta ,1)\), can be obtained by substituting \(\kappa _{0}=1\) in the top left \(2\times 2\) submatrix of \(\varSigma _\mathbf{Z}.\)

4 Simulation Study

In this section, we report a simulation study of the finite-sample performance of the ML estimators of the IGPL model parameters. For each selected combination of \((\varrho _0, \beta _0, \kappa _0)\) and number of events \(n \in \{25,50,75,100\}\), Monte Carlo simulations with 1000 replications were performed. The IGPL\((\varrho ,\beta ,\kappa )\) realizations \((t_{1},\ldots ,t_{n})\) were generated according to the formula

$$\begin{aligned} t_i = \frac{1}{\beta _{0}} \log \Big (\exp (\beta _{0} t_{i-1}) +\frac{\beta _{0}}{\varrho _{0}} G(\kappa _{0})\Big ), \end{aligned}$$

where \(G(\kappa _{0})\) is a random value generated from the gamma \(\mathcal{G}(\kappa _{0},1)\) distribution and \(t_0 = 0\). For each realization \((t_{1},\ldots ,t_{n})\), the MLE of \(\beta\) was calculated as a solution to the following equation
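The generation formula above is straightforward to implement; a minimal Python sketch (we use the standard-library generator `random.gammavariate` in place of the R machinery used in the study):

```python
import math
import random

def simulate_igpl(rho0, beta0, kappa0, n, rng=random):
    # generate n event times of IGPL(rho0, beta0, kappa0) via the inversion
    # formula: Lambda(T_i) - Lambda(T_{i-1}) is i.i.d. Gamma(kappa0, 1)
    t, times = 0.0, []
    for _ in range(n):
        g = rng.gammavariate(kappa0, 1.0)
        t = math.log(math.exp(beta0 * t) + beta0 / rho0 * g) / beta0
        times.append(t)
    return times
```

By construction, the transformed increments \(\varLambda (T_{i})-\varLambda (T_{i-1})\) of a simulated path are i.i.d. \({{\mathcal {G}}}(\kappa _{0},1)\), which provides a simple sanity check on the generator.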

$$\begin{aligned} \log [n\kappa (\beta ;{\mathbf {t}})]-\varPsi [\kappa (\beta ;{\mathbf {t}})]- \log [\exp (\beta t_{n})-1]+{1\over n}V_{n}(\beta ;{\mathbf {t}})=0, \end{aligned}$$
(13)

where

$$\begin{aligned} \kappa (\beta ;{\mathbf {t}})={W_{n}(\beta ;{\mathbf {t}})-S_{n}({\mathbf {t}}) -{\displaystyle {n\over \beta }}\over W_{n}(\beta ;{\mathbf {t}})-nt_{n}{\displaystyle {\exp (\beta t_{n})\over \exp (\beta t_{n})-1}}}, \end{aligned}$$

where \(S_{n}({\mathbf {t}})\) and \(W_{n}(\beta ;{\mathbf {t}})\) are given by (4) and (8), respectively. A solution to Eq. (13) was obtained using the Newton–Raphson method, as implemented in the nleqslv function of the R package nleqslv, with the initial value

$$\begin{aligned} \beta ^{\mathrm{start}}=\frac{n}{\sum _{i=1}^n (t_n - t_i)}. \end{aligned}$$

The initial value \(\beta ^{\mathrm{start}}\) was chosen by reasoning analogous to the construction of the simple estimators presented by Bandyopadhyay and Sen [3].

Estimates of \(\kappa\) and \(\varrho\) were determined by the formulas

$$\begin{aligned} \hat{\kappa }=\kappa (\hat{\beta };{\mathbf {t}}) \end{aligned}$$

and

$$\begin{aligned} \hat{\varrho }=\frac{n\hat{\beta }\hat{\kappa }}{\exp (\hat{\beta } t_{n})-1}, \end{aligned}$$

respectively.

The performance of the estimates is investigated in terms of the bias (Bias) and the mean square error (MSE), given by the following formulas

$$\begin{aligned} \text {Bias}({\hat{\vartheta }})=E({\hat{\vartheta }}) - \vartheta , \ \ \ \text {MSE}({\hat{\vartheta }}) = E({\hat{\vartheta }} - \vartheta )^2. \end{aligned}$$

In Tables 1 and 2, we present the empirical biases and MSE's for the simulated data sets with \(\varrho _0=1.5\) and \(\varrho _0=0.5\), respectively. We have taken the values \(\varrho _0=1.5\) and \(\varrho _0=0.5\) to cover both the case of an intensity function which increases very fast from the beginning (\(\varrho >1\) implies time translation to the right) and the case of an intensity function which increases more slowly in the initial phase (\(\varrho <1\) implies time translation to the left). From the results collected in these tables, we conclude that:

  • the empirical biases and MSE's of the estimators \({{\hat{\beta }}}\) and \({{\hat{\kappa }}}\) are not very large even for \(n=25\), but the estimator \({{\hat{\varrho }}}\) has rather large empirical MSE's, especially for small n;

  • the empirical MSE's of the estimator \({{\hat{\varrho }}}\) decrease at a much slower rate than the empirical MSE's of the estimators \({{\hat{\beta }}}\) and \({{\hat{\kappa }}}\) as n increases;

  • the empirical MSE's of the estimator \({{\hat{\kappa }}}\) for \(n=100\) are almost the same for various values of \(\beta\) and \(\varrho\) when the value of \(\kappa\) is fixed.

Table 1 Simulation results for asymptotic behaviour of ML estimators (\(\rho _0 = 1.5\))
Table 2 Simulation results for asymptotic behaviour of ML estimators (\(\rho _0 = 0.5\))

5 Application to Some Real Data Set

In this section, we apply Theorem 3 to obtain realizations of pointwise asymptotic confidence intervals for the parameters of the IGPL model fitted to a real data set.

For each data set, we calculated the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) for five special cases of the inhomogeneous gamma process model: the power-law process (PLP), the modulated power-law process (MPLP) considered by Bandyopadhyay and Sen [3], the gamma renewal process (GRP), the nonhomogeneous Poisson process with log-linear intensity function (NHPPL), and the IGPL considered in this paper.

5.1 Diesel Engine

We consider the failure times (in thousands of operating hours) to unscheduled maintenance actions for the USS Halfbeak No. 3 main propulsion diesel engine (see [2]). The data were considered by Rigdon [28], who assumed the power-law process model and obtained the ML estimates of the parameters. We assume that the system was observed until the 71st failure at 25,518 h.

The values of AIC and BIC are given in Table 3.

Table 3 The values of AIC and BIC for the diesel engine failure data

The smallest values of both AIC and BIC are attained by the IGPL model, which is therefore the best within the class of models considered, regardless of the criterion. In particular, it is better than the PLP model considered by Rigdon [28].

The estimates (point and interval) of the IGPL parameters are given in Table 4. Realizations of 95% pointwise asymptotic confidence intervals are obtained using Theorem 3. In Table 4, the bootstrap confidence limits are also given for comparison.

Table 4 Estimates and 95% confidence intervals from the diesel engine failure data

The estimated value of \(\kappa\) is less than 1, which indicates that the system is in worse condition just after a repair than just before a failure.

5.2 Air Conditioning

As a second example, we consider the successive failures of the air conditioning system of the Boeing 720 jet airplane no. 7912, presented in the work of Proschan [26]. The system was observed until the 30th failure at 1788 hours. For numerical reasons, we consider the event times in hundreds of hours. The values of AIC and BIC are given in Table 5.

Table 5 The values of AIC and BIC for the air conditioning failure data

According to both AIC and BIC, the most appropriate model for the air conditioning failure process is the NHPPL. Let us notice that the NHPPL is a special case of the IGPL process with \(\kappa =1\).

The estimates, pointwise asymptotic confidence intervals and bootstrap confidence limits of the IGPL parameters are given in Table 6.

Table 6 Estimates and 95% confidence intervals from the air conditioning failure data

It can be observed that both the asymptotic and the bootstrap realizations of the confidence intervals for \(\kappa\) include 1, which supports the model choice based on the previously considered criteria.

6 Concluding Remarks

Asymptotic properties of the ML estimators of the unknown parameters of the IGPL model were given. As in the case of the IGP with power-law intensity, considered by Bandyopadhyay and Sen [3], in the IGPL the Hessian matrix of the log-likelihood function converges in probability to a singular matrix. Therefore, to prove the asymptotic normality of the ML estimators in the model under study, a non-standard method has been applied. Moreover, the ML estimator enjoys the curious property that the covariance matrix of its asymptotic distribution is singular. The consistency of the ML estimators in the IGPL model, as well as in the modulated power-law process considered by Bandyopadhyay and Sen [3], remains an open problem.