Abstract

In economics, we know the law of demand: a higher price will lead to a lower quantity demanded. The question is to know how much lower the quantity demanded will be. Similarly, the law of supply shows that a higher price will lead to a higher quantity supplied; another question is to know how much higher. To answer these questions, which are critically important in the real world, we need the concept of elasticity. Elasticity is an economics concept that measures the responsiveness of one variable to changes in another variable. Elasticity is a function that can be built from an arbitrary positive differentiable function $F$; the elasticity at a point $x$ is usually calculated as $e(x)=x\,F'(x)/F(x)$. Elasticity can be expressed in many forms. An interesting form, from an economic point of view, is the derivative of the logarithm of the distribution function with respect to the logarithm of the point $x$, which is developed in this article. The aim of this article is to study the direction of variation of this elasticity function and to construct a nonparametric estimator of it, because the estimators that have been constructed so far are parametric estimators and admit many deficiencies in practice. Finally, we study the strong consistency of the said estimator. A numerical study was carried out to verify the adequacy of the theory.

1. Introduction

We know well that the distribution of a random variable can be defined by the distribution function, the probability density function, or the characteristic function. However, these are not the only functions that describe the distribution of a random variable. Other functions exist which are also widely used to define a distinctive character of a random variable, in particular the survival or reliability function, the odds function, the risk function, and the inverse risk function. As usual, $X$ is a continuous random variable representing the lifetime of a component, and its distribution function is denoted by $F$. We further assume that $X$ has a density function $f$. We define (see Veres-Ferrer and Pavía [1]): the reliability/survival function of $X$ at $x$ is $S(x)=1-F(x)$; the failure rate of $X$ at $x$ is $h(x)=f(x)/S(x)$; the reversed hazard rate function of $X$ at $x$ is $r(x)=f(x)/F(x)$; and the cumulative hazard rate function of $X$ at $x$ is $H(x)=-\ln S(x)$. These aforementioned functions are introduced in the literature, are often found in the field of actuarial science (see Steffensen [2]), and are commonly used in survival analysis (see Lee and Wang [3]). The hazard rate (HR) plays a crucial role in reliability and survival analysis, as it defines the conditional probability of failure of an object in $(x, x+dx]$ given that it did not fail before $x$. This property leads to many useful features, for instance, the possibility of modeling the impact of an environment by a proportional hazard model. It is also well known that the hazard rate uniquely defines the distribution function of the time-to-failure random variable via the basic exponential formula. The reversed hazard rate (RHR), or reversed hazard function, is a less intuitive function. It can be interpreted as the conditional probability of the state change happening in an infinitesimal interval preceding $x$, given that the state change takes place at or before $x$. In other words, the RHR is defined as the ratio of the probability density function to the corresponding distribution function, and thus, in a reliability setting, it defines the conditional probability of a failure of an object in $(x-dx, x]$ given that the failure had occurred in $[0, x]$. The RHR can therefore be treated as the instantaneous failure rate occurring immediately before the time point $x$ (the failure occurs just before the time point $x$, given that the unit has not survived longer than time $x$). Recently, the properties of the RHR have attracted considerable interest from researchers (see Chandra and Roy [4] and Finkelstein [5]). Despite being, to a certain extent, a dual function to the hazard rate, its typical behavior makes it suitable for assessing waiting times, hidden failures, and inactivity times, and for the study of systems, including optimizing reliability and the probability of successful functioning.
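To make these definitions concrete, the following minimal R sketch evaluates the four functions for a Weibull lifetime; the parameters (shape 2, scale 1) are illustrative choices, not values taken from this article.

```r
# Minimal R sketch of the four functions above for an illustrative
# Weibull(shape = 2, scale = 1) lifetime.
x  <- seq(0.1, 3, by = 0.1)
fx <- dweibull(x, shape = 2, scale = 1)  # density f(x)
Fx <- pweibull(x, shape = 2, scale = 1)  # distribution function F(x)
Sx <- 1 - Fx                             # survival function S(x) = 1 - F(x)
hx <- fx / Sx                            # hazard rate h(x) = f(x)/S(x)
rx <- fx / Fx                            # reversed hazard rate r(x) = f(x)/F(x)
Hx <- -log(Sx)                           # cumulative hazard H(x) = -ln S(x)
```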

The elasticity function of $X$ is also introduced in the literature (see Veres-Ferrer and Pavía [1]) and is defined as follows:
$$e(x)=\frac{x\,f(x)}{F(x)}=\frac{d\log F(x)}{d\log x}, \quad x \text{ such that } F(x)>0. \tag{1}$$

Elasticity is one of the most important concepts in economic theory. For example, in economics, if $x$ is the price of a commodity and $q(x)$ denotes the demand for that commodity, then the "elasticity" of the demand is defined by (1). In other words, elasticity measures how sensitive an output variable is to changes in an input variable and is defined as the ratio of the percentage change in one variable to the percentage change in another. Lariviere and Porteus [6] adopt this concept and apply it to supply chain management. The elasticity of a random distribution expresses the changes that the distribution function undergoes when faced with variations in the random variable, that is, how the accumulation of probability behaves throughout the domain of the variable. In this paper, in the first part, we make an in-depth study of the asymptotic behavior of the elasticity function, with some simulations of this function in order to illustrate its variations. In practice, elasticity functions are not constant; even in economics, they vary as one moves along the demand curve, and the only class where the elasticity is constant is the class of demand functions $q(p)=a\,p^{-b}$, where $p$ is the price and $a$ and $b$ are positive constants. Elasticity in econometrics makes it possible to estimate a production function, for example of the nonlinear Cobb-Douglas type (see Felipe and Adams [7]). The estimation of production functions is a very delicate exercise. Thus, the parametric estimation of elasticity leads to an estimation of production functions with larger biases beyond the bias of measurement errors, the main ones being the omitted-variable bias, the specification bias, and the selection bias (see Griliches and Mairesse [8]). Most of the time, the data collected are incomplete (often censored). Consequently, to overcome these bias problems in the estimation of the production function, we propose, in the second part of this study, a nonparametric estimator based on the kernel method in the presence of incomplete data, more precisely noninformatively censored data.
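As a quick illustration of the constant-elasticity class just mentioned, the following R sketch (with illustrative values $a=10$ and $b=1.5$) checks numerically that $q(p)=a\,p^{-b}$ has elasticity $-b$ at every price.

```r
# Numerical check that q(p) = a * p^(-b) has constant elasticity -b;
# a and b are illustrative values.
a <- 10; b <- 1.5
p <- seq(1, 5, by = 0.5)
q <- a * p^(-b)
# elasticity = d log q / d log p, here via finite differences on the logs
elas <- diff(log(q)) / diff(log(p))
print(elas)  # every entry equals -1.5 up to floating-point error
```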

2. Behavior of Elasticity for Some Standard Distributions

We know that the functions HR and RHR are, respectively, defined by
$$h(x)=\frac{f(x)}{1-F(x)}, \qquad r(x)=\frac{f(x)}{F(x)}. \tag{2}$$

By substituting HR in RHR, RHR can be expressed as
$$r(x)=h(x)\,\frac{1-F(x)}{F(x)}. \tag{3}$$

From the formula of elasticity (1), we note that
$$e(x)=\frac{x\,f(x)}{F(x)}=x\,r(x). \tag{4}$$

We deduce from (3) and (4) that the elasticity function can be written in the form
$$e(x)=x\,h(x)\,\frac{1-F(x)}{F(x)}. \tag{5}$$

By differentiating expression (4) with respect to $x$, we have
$$e'(x)=r(x)+x\,r'(x). \tag{6}$$

Using the formula of the RHR, we can write
$$r'(x)=\frac{f'(x)F(x)-f^{2}(x)}{F^{2}(x)}, \tag{7}$$
which can be simply written in the form
$$e'(x)=\frac{f(x)F(x)+x\,f'(x)F(x)-x\,f^{2}(x)}{F^{2}(x)}. \tag{8}$$

By definition, the elasticity function is defined for strictly positive random variables (as is always the case in survival or reliability analysis); we will therefore seek the direction of variation of the elasticity for most of the standard distributions used in survival or reliability analysis, which are defined on a domain contained in $(0,+\infty)$.

In formula (8), it is clear that the sign of $e'(x)$ depends on that of the numerator
$$N(x)=f(x)F(x)+x\,f'(x)F(x)-x\,f^{2}(x).$$
In the rest of this paragraph, we will focus on the numerator $N(x)$ of equality (8), in order to be able to quickly deduce the sign of $e'(x)$.

2.1. Uniform Distribution

If $X$ is a random variable such that $X\sim\mathcal{U}([a,b])$ with $0<a<b$, then we have, for $a\le x\le b$,
$$F(x)=\frac{x-a}{b-a}, \qquad f(x)=\frac{1}{b-a}.$$

Note that the HR and the cumulative HR exist if and only if $F(x)<1$; it means $x<b$. So we have
$$h(x)=\frac{1}{b-x}, \qquad H(x)=\log\frac{b-a}{b-x}.$$

From the above results, we obtain $e(x)=x/(x-a)$, and the numerator of equality (8) gives
$$N(x)=-\frac{a}{(b-a)^{2}}<0. \tag{12}$$

Equation (12) shows that the elasticity is a decreasing function for the uniform distribution.
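As a numerical cross-check of (12), the following R sketch (with illustrative values $a=1$ and $b=3$) evaluates $e(x)=x f(x)/F(x)$ on a grid and confirms that it decreases.

```r
# Numerical check of the uniform case; a and b are illustrative.
a <- 1; b <- 3
x <- seq(1.1, 2.9, by = 0.1)
e <- x * dunif(x, a, b) / punif(x, a, b)  # e(x) = x f(x)/F(x) = x/(x - a)
all(diff(e) < 0)                          # TRUE: the elasticity decreases
```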

2.2. Weibull Distribution

Special cases of the Weibull distribution include the exponential ($k=1$) and the Rayleigh ($k=2$) distributions. If $X$ is a random variable such that $X\sim\mathcal{W}(k,\lambda)$, where $k>0$ and $\lambda>0$ are the shape and scale parameters, then we have, for $x\ge 0$:
$$F(x)=1-e^{-(x/\lambda)^{k}}, \qquad f(x)=\frac{k}{\lambda}\left(\frac{x}{\lambda}\right)^{k-1}e^{-(x/\lambda)^{k}}.$$

Using the same formulas (6) and (5), we have
$$e(x)=\frac{k\,(x/\lambda)^{k}\,e^{-(x/\lambda)^{k}}}{1-e^{-(x/\lambda)^{k}}},$$
or, with $u=(x/\lambda)^{k}$,
$$e(x)=\frac{k\,u}{e^{u}-1}.$$

Let $\varphi(u)=e^{u}(1-u)-1$. As before, the sign of $e'(x)$ depends on that of $\varphi(u)$, which verifies $\varphi(0)=0$ and $\varphi'(u)=-u\,e^{u}<0$ for $u>0$. We deduce that for the Weibull distribution, the elasticity function is a decreasing function.
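The same numerical check for the Weibull case; the parameters $k=1.5$ and $\lambda=2$ are illustrative.

```r
# Numerical check of the Weibull case with illustrative parameters.
k <- 1.5; lambda <- 2
x <- seq(0.05, 6, by = 0.05)
e <- x * dweibull(x, shape = k, scale = lambda) /
         pweibull(x, shape = k, scale = lambda)
all(diff(e) < 0)  # TRUE on this grid: the elasticity decreases
```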

2.3. Burr Distribution

In statistics and econometrics, the Burr distribution is a continuous probability law depending on three positive real parameters. It is commonly used to study household income. If $X$ is a random variable such that $X\sim\mathrm{Burr}(c,k,\lambda)$, then we have, for $x>0$,
$$F(x)=1-\left(1+\left(\frac{x}{\lambda}\right)^{c}\right)^{-k},$$
and we deduce
$$f(x)=\frac{c\,k}{\lambda}\left(\frac{x}{\lambda}\right)^{c-1}\left(1+\left(\frac{x}{\lambda}\right)^{c}\right)^{-k-1}.$$

Note that the Burr distribution generalizes certain distributions in probability theory. We have the following: (1) If $c=1$, the Burr distribution is the generalized Pareto (Lomax) distribution. (2) If $k=1$, the Burr distribution is the log-logistic distribution. (3) If $k\to\infty$ (with a suitable rescaling of $\lambda$), the Burr distribution tends to the Weibull distribution.

Now, setting $v=1+(x/\lambda)^{c}$, we obtain
$$e(x)=\frac{x\,f(x)}{F(x)}=\frac{c\,k\,(v-1)}{v^{k+1}-v}.$$

With the numerator of equality (8), we obtain
$$e'(x)=\frac{c\,k\,\bigl[(k+1)v^{k}-k\,v^{k+1}-1\bigr]}{\left(v^{k+1}-v\right)^{2}}\,\frac{dv}{dx}, \qquad \frac{dv}{dx}=\frac{c}{\lambda}\left(\frac{x}{\lambda}\right)^{c-1}>0.$$

The last equality shows that the sign of $e'(x)$ depends on
$$\psi(v)=(k+1)v^{k}-k\,v^{k+1}-1.$$

With $\psi(1)=0$ and $\psi'(v)=k(k+1)v^{k-1}(1-v)\le 0$ for $v\ge 1$, we therefore deduce that the elasticity is a decreasing function for the Burr distribution.

2.4. Gamma Distribution

A random variable $X$ follows the Gamma distribution $\mathcal{G}(\alpha,\beta)$ with parameters $\alpha$ and $\beta$ (strictly positive) if its probability density function can be put in the form
$$f(x)=\frac{\beta^{\alpha}}{\Gamma(\alpha)}\,x^{\alpha-1}e^{-\beta x}, \quad x>0.$$

So we get
$$F(x)=\frac{\gamma(\alpha,\beta x)}{\Gamma(\alpha)},$$
where $\gamma(\alpha,z)=\int_{0}^{z}t^{\alpha-1}e^{-t}\,dt$ is the lower incomplete gamma function.

Additionally, we have
$$f'(x)=f(x)\left(\frac{\alpha-1}{x}-\beta\right).$$

These last two equalities in formula (6) provide the expression for $e'(x)$ in the following form:
$$e'(x)=\frac{f(x)\bigl[(\alpha-\beta x)F(x)-x\,f(x)\bigr]}{F^{2}(x)}.$$

We observe that $e'(x)$ has the same sign as $\psi(x)=(\alpha-\beta x)F(x)-x\,f(x)$, with $\psi(0)=0$ and $\psi'(x)=-\beta F(x)<0$. We therefore come to the conclusion that the elasticity is a decreasing function for the Gamma distribution. Also, remember the following: (1) If $\alpha=1$, the Gamma distribution is the exponential distribution. (2) If $\alpha=k/2$ and $\beta=1/2$, the Gamma distribution is a chi-square distribution with $k$ degrees of freedom. (3) If $\alpha$ is a positive integer, the Gamma distribution is the Erlang distribution.
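A quick numerical verification of the sign function $\psi$ derived above; the parameters $\alpha=2$ and $\beta=1.5$ are illustrative.

```r
# Numerical check that psi(x) = (alpha - beta*x) F(x) - x f(x) < 0 for x > 0.
alpha <- 2; beta <- 1.5
x   <- seq(0.01, 10, by = 0.01)
psi <- (alpha - beta * x) * pgamma(x, shape = alpha, rate = beta) -
       x * dgamma(x, shape = alpha, rate = beta)
all(psi < 0)  # TRUE: psi is negative, hence e'(x) < 0
```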

2.5. Log-Normal Distribution

The log-normal distribution, also called the Galton distribution and denoted $\mathcal{LN}(\mu,\sigma^{2})$, is a distribution that is also widely used in reliability and survival analysis. It is defined for strictly positive random variables whose distribution function and probability density are defined by
$$F(x)=\frac{1}{2}\left[1+\operatorname{erf}\!\left(\frac{\ln x-\mu}{\sigma\sqrt{2}}\right)\right], \qquad f(x)=\frac{1}{x\,\sigma\sqrt{2\pi}}\,e^{-(\ln x-\mu)^{2}/(2\sigma^{2})},$$
where $\operatorname{erf}$ is the Gaussian error function defined by $\operatorname{erf}(z)=\frac{2}{\sqrt{\pi}}\int_{0}^{z}e^{-t^{2}}\,dt$.

So, writing $z=(\ln x-\mu)/\sigma$, $\varphi$ for the standard normal density, and $\Phi$ for its distribution function, we have
$$e(x)=\frac{x\,f(x)}{F(x)}=\frac{\varphi(z)}{\sigma\,\Phi(z)}, \qquad \lim_{x\to+\infty}e(x)=0,$$
while the limit as $x\to 0^{+}$ leads to an indeterminate form $0/0$.

To evaluate the indeterminate form of the last limit above, we then apply l'Hôpital's rule, which gives
$$\lim_{x\to 0^{+}}e(x)=\lim_{z\to-\infty}\frac{\varphi(z)}{\sigma\,\Phi(z)}=\lim_{z\to-\infty}\frac{-z}{\sigma}=+\infty.$$

We observe that the function $e$ goes from infinity to $0$. However, this observation is silent on the direction of variation, i.e., on whether the decrease is monotonic or not. Thus, we have to turn to the derivative and check whether it is less than $0$, which would mean that $e$ decreases monotonically, in accordance with the results on the limits.

Using formula (6), we can write
$$e'(x)=-\frac{\varphi(z)\,\bigl[z\,\Phi(z)+\varphi(z)\bigr]}{\sigma^{2}\,x\,\Phi^{2}(z)}.$$

We notice in this last equality that $e'(x)\le 0$ if and only if
$$\Delta(z)=z\,\Phi(z)+\varphi(z)\ge 0.$$

So the derivative of $\Delta$ gives
$$\Delta'(z)=\Phi(z)+z\,\varphi(z)+\varphi'(z)=\Phi(z)+z\,\varphi(z)-z\,\varphi(z)=\Phi(z)>0.$$

Moreover, we have to calculate the limit at $-\infty$ of the two functions which compose the function $\Delta$, i.e., $\lim_{z\to-\infty}\varphi(z)=0$ and $\lim_{z\to-\infty}z\,\Phi(z)$.

The limit at $-\infty$ of $z\,\Phi(z)$ gives an indeterminacy of the form $\infty\cdot 0$. We know that indeterminations of the form $0\cdot\infty$ are reduced to an indeterminacy of the form $0/0$ or of the form $\infty/\infty$ by noting that a multiplication by $0$ is equivalent to a division by infinity, or that a multiplication by infinity is equivalent to a division by $0$. We can rewrite $z\,\Phi(z)$ in the following form:
$$z\,\Phi(z)=\frac{\Phi(z)}{1/z},$$
to have the form $0/0$ in order to apply l'Hôpital's rule.

Applying l'Hôpital's rule twice successively, we get
$$\lim_{z\to-\infty}z\,\Phi(z)=0.$$

Hence $\Delta$ is increasing with $\lim_{z\to-\infty}\Delta(z)=0$, so $\Delta(z)>0$ for every $z$, and therefore $e'(x)<0$ for every $x>0$. Equations (30), (31), and (34) prove that the elasticity is a monotonically decreasing function for the log-normal distribution.

Traditionally, in economics, finance, or actuarial science, as the main quantities of interest are costs or durations, the most used probability laws are those with positive support. The four most common positive distributions are the Gamma, Burr, log-normal, and Weibull distributions. The theoretical curves in Figure 1 were produced using the R and MATLAB software (MATLAB for the Burr distribution, which is not available in base R). Moreover, we notice that, whatever the values of the parameters of each distribution, the elasticity function is a monotonically decreasing function.
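For readers who wish to reproduce curves in the spirit of Figure 1, here is a hedged R sketch; the parameter values are illustrative, and the Burr case is omitted because it needs an external package (e.g., actuar).

```r
# Elasticity e(x) = x f(x)/F(x) for three positive-support distributions,
# with illustrative parameters.
x <- seq(0.05, 5, by = 0.01)
e_weib  <- x * dweibull(x, 2, 1) / pweibull(x, 2, 1)
e_gamma <- x * dgamma(x, shape = 2, rate = 1) / pgamma(x, shape = 2, rate = 1)
e_lnorm <- x * dlnorm(x, 0, 1) / plnorm(x, 0, 1)
matplot(x, cbind(e_weib, e_gamma, e_lnorm), type = "l", lty = 1, col = 1:3,
        xlab = "x", ylab = "elasticity e(x)")
legend("topright", c("Weibull(2,1)", "Gamma(2,1)", "log-normal(0,1)"),
       col = 1:3, lty = 1)
```

All three curves decrease monotonically, in agreement with the computations above.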

Remark 1. In this paragraph, we were able to demonstrate that the elasticity function is monotonically decreasing, at least for most of the standard distributions that we know. Thus, until proof to the contrary, we estimate that the elasticity function, in view of the results obtained in this part, is always a monotonically decreasing function, as shown in Figure 1. The monotonicity of the elasticity has no influence on the construction of its estimator. However, monotonicity eases the simulations because there are fewer disturbances.

The following paragraph gives a nonparametric estimator based on the kernel method and a study of the almost sure convergence of the said estimator. The nonparametric kernel estimation approach, unlike the parametric approach, does not require any assumption about the true probability law of the observations, and this is its main advantage. It is therefore a problem of functional estimation; this implies, for example, that the elasticity function, which is continuous, will be estimated by a possibly discontinuous function. Another feature of this approach is that accuracy improves as the number of observations grows. It gives a better estimate with regard to the minimization of the biases, allows a very good smoothing of the estimator, and contributes to its robustness.

3. Statistical Inference on the Elasticity Function

We know that in most cases the data collected are not complete, which leads us to carry out a study on incomplete data. There are several types of incomplete data, including censored data. Moreover, the elasticity can depend on a covariate which can represent a given situation or a given state; this is, for example, generally the case for the elasticity of demand with respect to income (i.e., when income increases, in most cases, demand also increases); the covariate can also represent the seasonal period, the geographical location, etc. In this part, we will define a nonparametric estimator of the conditional elasticity function in the case of right-censored data.

3.1. Theoretical Study

Consider $n$ pairs of independent random variables $(X_i, T_i)$ for $i=1,\dots,n$ that we assume drawn from the pair $(X,T)$, which is valued in $\mathbb{R}^{d}\times\mathbb{R}_{+}$. In this paragraph, we consider the problem of nonparametric estimation of the conditional density of $T$ given $X=x$ when the response variable is right-censored. Consider a sequence $(T_i)_{i\ge 1}$ of independent and identically distributed (i.i.d.) random variables with a common unknown conditional distribution function $F(\cdot\mid x)$ and density function $f(\cdot\mid x)$. Furthermore, we denote by $(C_i)_{i\ge 1}$ the censoring random variables, which are supposed independent and identically distributed with a common unknown continuous distribution function $G$, whose conditional version is noted $G(\cdot\mid x)$. Thus, we construct our estimators from the observed variables $(X_i, Y_i, \delta_i)$, where $Y_i=\min(T_i,C_i)$ and $\delta_i=\mathbb{1}_{\{T_i\le C_i\}}$, where $\mathbb{1}_A$ denotes the indicator function of the set $A$. We assume that $(T_i)$ and $(C_i)$ are independent. The survival function $\bar G=1-G$ of the censoring random variables is estimated by the Kaplan and Meier [9] estimator defined as follows:
$$\bar G_n(t)=\prod_{i=1}^{n}\left(1-\frac{1-\delta_{(i)}}{n-i+1}\right)^{\mathbb{1}_{\{Y_{(i)}\le t\}}}, \tag{35}$$
where $Y_{(1)}\le\dots\le Y_{(n)}$ are the order statistics of $(Y_i)$ and $\delta_{(i)}$ is the concomitant of $Y_{(i)}$. $\bar G_n$ is known to be uniformly convergent to $\bar G$.
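As an illustration of this step, here is a minimal R sketch based on the survival package; the lifetime and censoring distributions below are illustrative, and the key point is that $\bar G_n$ is obtained by applying Kaplan-Meier with the censoring indicator reversed.

```r
# Kaplan-Meier estimation of the censoring survival function G-bar;
# the distributions of T and C are illustrative.
library(survival)
set.seed(1)
t_lat <- rlnorm(200)                    # latent lifetimes T_i
c_cen <- rlnorm(200, meanlog = 0.5)     # censoring times C_i
y     <- pmin(t_lat, c_cen)             # observed Y_i = min(T_i, C_i)
dlt   <- as.numeric(t_lat <= c_cen)     # delta_i = 1 if uncensored
km_G  <- survfit(Surv(y, 1 - dlt) ~ 1)  # Kaplan-Meier estimate of G-bar
```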

Let $F(\cdot\mid x)$ be the conditional distribution function of $T$ given $X=x$ and $F_1(\cdot\mid x)$ be the conditional subdistribution function of the uncensored observations given $X=x$, and let $f_1(\cdot\mid x)$ be its corresponding conditional subdensity function. Furthermore, under the random censoring scheme, it is clear that the $Y_i$ are i.i.d. with common conditional distribution function $H(\cdot\mid x)$, which satisfies
$$1-H(t\mid x)=\bigl(1-F(t\mid x)\bigr)\bigl(1-G(t\mid x)\bigr), \tag{36}$$
and the uncensored model is the special case of the censored model with $G\equiv 0$. Using (36) and the independence condition of $T$ and $C$ conditionally on $X=x$, the conditional cumulative hazard function of $T$ given $X=x$ is defined by
$$\Lambda(t\mid x)=\int_{0}^{t}\frac{dF_{1}(s\mid x)}{1-H(s^{-}\mid x)}. \tag{37}$$

By using formula (37) and recalling that $F(t\mid x)=1-\exp\bigl(-\Lambda(t\mid x)\bigr)$, we therefore define the conditional elasticity function in the case of censored data by the following relation:
$$e(t\mid x)=\frac{t\,f(t\mid x)}{1-\exp\bigl(-\Lambda(t\mid x)\bigr)}. \tag{38}$$

Let $(X_i,Y_i,\delta_i)_{1\le i\le n}$ be a sample of i.i.d. observable random vectors; let $K$ and $W$ be kernels on $\mathbb{R}^{d}$ and $\mathbb{R}$, respectively; and let $(h_n)$ and $(g_n)$ be sequences of positive nonincreasing real numbers which will be connected with the smoothing parameters of the estimators. Set, for all $x$, all $t$, and $1\le i\le n$, $K_i(x)=K\bigl((x-X_i)/h_n\bigr)$ and $W_i(t)=W\bigl((t-Y_i)/g_n\bigr)$. Then, nonparametric Nadaraya-Watson type estimators of the conditional distribution function of the observations and of the conditional subdistribution function of the uncensored observations are given by
$$H_n(t\mid x)=\sum_{i=1}^{n}W_{ni}(x)\,\mathbb{1}_{\{Y_i\le t\}} \tag{39}$$
and
$$F_{1,n}(t\mid x)=\sum_{i=1}^{n}W_{ni}(x)\,\delta_i\,\mathbb{1}_{\{Y_i\le t\}}, \tag{40}$$
where, for $1\le i\le n$, $W_{ni}(x)=K_i(x)\big/\sum_{j=1}^{n}K_j(x)$.
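A hedged R sketch of the weights $W_{ni}(x)$ and of the estimators (39) and (40), assuming a Gaussian kernel for $K$; the function name and the bandwidth are ours.

```r
# Nadaraya-Watson-type estimators (39) and (40) at a point (t, x);
# h is an illustrative bandwidth.
nw_estimators <- function(t, x, X, Y, delta, h) {
  w <- dnorm((x - X) / h)             # K_i(x) = K((x - X_i)/h_n)
  w <- w / sum(w)                     # W_ni(x) = K_i(x) / sum_j K_j(x)
  c(Hn  = sum(w * (Y <= t)),          # estimator (39)
    F1n = sum(w * (Y <= t) * delta))  # estimator (40)
}
```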

From formulas (39) and (40), we have Beran's type estimator of the conditional cumulative hazard rate function given by
$$\Lambda_n(t\mid x)=\int_{0}^{t}\frac{dF_{1,n}(s\mid x)}{1-H_n(s^{-}\mid x)}=\sum_{i=1}^{n}\frac{\delta_i\,W_{ni}(x)\,\mathbb{1}_{\{Y_i\le t\}}}{1-H_n(Y_i^{-}\mid x)}. \tag{41}$$

Likewise, the kernel estimate of the conditional density, denoted $f_n(\cdot\mid x)$, is defined by
$$f_n(t\mid x)=\frac{1}{g_n}\sum_{i=1}^{n}W_{ni}(x)\,\frac{\delta_i}{\bar G_n(Y_i)}\,W_i(t), \tag{43}$$
where $\bar G_n$ is the Kaplan-Meier estimator (35).

Note that this last estimator (43) has been recently used by Felipe and Adams [7]. We therefore define the estimator of the conditional elasticity function $e(t\mid x)$, from formulas (41), (43), and (38), in the form
$$e_n(t\mid x)=\frac{t\,f_n(t\mid x)}{1-\exp\bigl(-\Lambda_n(t\mid x)\bigr)}. \tag{45}$$
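The following R sketch assembles the pieces into a version of estimator (45), under our reading of (41), (43), and (38); all names are ours, and the exact weighting used in (43) may differ from the author's.

```r
# Sketch of the conditional elasticity estimator (45). Gbar is a function
# estimating the censoring survival (e.g., from Kaplan-Meier); h and g are
# bandwidths; ties among the Y_i are ignored in this sketch.
elasticity_hat <- function(t, x, X, Y, delta, Gbar, h, g) {
  w <- dnorm((x - X) / h)                # K_i(x), Gaussian kernel
  w <- w / sum(w)                        # W_ni(x)
  # Beran-type cumulative hazard (41), summing over uncensored Y_i <= t
  ord <- order(Y)
  yo  <- Y[ord]; dlo <- delta[ord]; wo <- w[ord]
  Hm  <- cumsum(wo) - wo                 # H_n(Y_(i)^- | x)
  Lam <- sum((wo * dlo / pmax(1 - Hm, 1e-10))[yo <= t])
  # kernel conditional density (43) with weights delta_i / Gbar(Y_i);
  # pmax guards against a vanishing Kaplan-Meier tail
  fhat <- sum(w * delta * dnorm((t - Y) / g) / pmax(Gbar(Y), 1e-10)) / g
  t * fhat / (1 - exp(-Lam))             # estimator (45)
}
```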

In the continuation of this work, for any distribution function $L$, let $\tau_{L}=\sup\{t : L(t)<1\}$ be its support's right endpoint.

Choose $\tau<\tau_{H}$; naturally, $\tau_{H}=\min(\tau_{F},\tau_{G})$; $\tau_{F}$, $\tau_{G}$, and therefore $\tau_{H}$ depend on the covariate $x$. We set $\mathcal{C}=[0,\tau]\times S$, where $S$ is the support of the marginal density function of $X$, and we consider the centred random process defined in (46).

Note that the process defined by equality (46) is a centred random process which plays a very important role in the study of the almost sure convergence of the said estimator in our investigation.

For any given conditional function $L(\cdot\mid x)$, $L'(\cdot\mid x)$ and $L''(\cdot\mid x)$ denote, respectively, its first and second derivatives (with respect to $t$) whenever those derivatives exist. We need the following assumptions.

3.1.1. Assumptions

(1) The Model Assumptions. A1. The random variable $X$ takes values in a compact subset $S$ of $\mathbb{R}^{d}$, and the variables $T$ and $C$ are conditionally independent given $X=x$.

A 2. The marginal density function $\ell$ of $X$ and its first and second derivatives exist and are uniformly continuous on $S$, and $\inf_{x\in S}\ell(x)>0$.

A 3. The joint density of $(X,Y)$ is bounded and differentiable up to order 3.

A 4. There exists a positive constant $\gamma$ such that $\bar G(\tau\mid x)\ge\gamma>0$ for all $x\in S$.

A 5. The conditional subdistribution functions $H(\cdot\mid\cdot)$ and $F_1(\cdot\mid\cdot)$ are of class $\mathcal{C}^{2}$, and their first and second partial derivatives are continuous in $(t,x)$ and are uniformly bounded.

A 6. The conditional cumulative hazard function $\Lambda(\cdot\mid\cdot)$ is assumed to be strictly positive, and there is a constant $M>0$ such that $\Lambda(t\mid x)\le M$ for all $(t,x)\in\mathcal{C}$.

(2) The Kernel Assumptions. K1. $K$ is a symmetric kernel of bounded variation on $\mathbb{R}^{d}$ vanishing outside the interval $[-\lambda,\lambda]^{d}$ for some $\lambda>0$, satisfying (i)(ii)(iii)

K 2. $W$ is a probability density with compact support such that (i)(ii)

K 3. The function is a bounded measurable function.

(3) The Bandwidth Parameter Hypothesis. The bandwidth parameter $h_n$ is a sequence of positive nonincreasing real numbers satisfying the following:

H 1(i)(ii)

H 2(i) and (ii)(iii)(iv) and

H 3. .

Remark 2. The assumptions A1, A4-A5, K1-K2, and H1 are quite standard. A1-A2, A4-A5, K1-K2, and H1-H2 ensure the strong uniform convergence of the estimators $H_n(\cdot\mid\cdot)$ and $F_{1,n}(\cdot\mid\cdot)$ of (39) and (40) to $H(\cdot\mid\cdot)$ and $F_1(\cdot\mid\cdot)$, respectively, while assumptions A2-A3, K1-K3, H1, and H3 ensure the strong uniform convergence of $f_n(\cdot\mid\cdot)$ to $f(\cdot\mid\cdot)$; these two convergences then lead to the strong consistency of $e_n(\cdot\mid\cdot)$ to $e(\cdot\mid\cdot)$.

3.1.2. Strong Consistency

In this subsection, we prove the consistency of our estimator and give a rate of convergence. Our first result is the almost sure uniform convergence, with an appropriate rate, of the cumulative hazard function estimator, stated in Proposition 4, which is the key for investigating the strong consistency of $e_n$, together with the almost sure uniform convergence of $f_n$ given by Proposition 3. The second and last result deals with the strong consistency of the conditional elasticity function estimator, given by Theorem 6.

Proposition 3. Under assumptions A1-A3, K1-K3, H1, and H3, we have

Proposition 4. For $n$ large enough and under assumptions A1, A4-A6, K1-K2, and H1-H2, we get

Remark 5. The two important propositions above lead to the following theorem which gives the convergence of the estimator (45).

Theorem 6. Under the assumptions of Proposition 3 and Proposition 4 and assumption A6, we have

3.1.3. Auxiliary Lemmas and Proofs of Results

In this subsection, we state the main lemmas from which we obtained the results of the previous subsection.

Lemma 7. Let $L_n(\cdot\mid x)$ be any of the estimators $H_n(\cdot\mid x)$ or $F_{1,n}(\cdot\mid x)$ given in (39) and (40), respectively. Under assumptions A1-A2, A4-A5, K1-K2, and H1-H2, we get

Proof. The proof of this lemma parallels the proof of Lemma 8 of Bordes and Gneyou [10]. So, we omit it.

The following lemma gives the almost sure representation of the estimator (41) in decomposition form.

Lemma 8. Assume that the assumptions of Lemma 7 are satisfied; then we have where and is a sequence of the centred random process (46).

Proof. By definition, we have

It is easy to see that (54) can be written in the form

By rewriting the expression inside the brackets of the last two integrals as above, it follows that equality (54) becomes

It then suffices to remark that the last two integrals represent the two remainder terms of the decomposition, respectively. This ends the proof of this lemma.

Proof of Proposition 3. The proof of this proposition is similar to that of Khardani et al. [11].

Proof of Proposition 4. From the proof of Lemma A.2 of Sun [12], under the assumptions of Lemma 8 and from its decomposition, we deduce that and from Lemma 7, we also get that

By integrating by parts equality (58) of Lemma 8, we arrive at

From Lemma 7 and from assumption A3, we obtain

Equalities (59), (60), and (62) conclude the proof of this last proposition.

Now, we turn to the last proof, that of the main theorem.

Proof of Theorem 6. It is clear to see that where .

We now know that the exponential function is Lipschitz on $[0,+\infty)$ with constant $1$, i.e., $\lvert e^{-a}-e^{-b}\rvert\le\lvert a-b\rvert$ for all $a,b\ge 0$. Therefore, from assumptions A4 and A6, we have

Propositions 3 and 4 then complete the proof of Theorem 6.

3.2. Simulation Study

In this subsection, we investigate the performance of the elasticity function estimator based on the kernel method. For that, we use certain nonlinear models to see the effect of the model on the performance, that is, on the speed of convergence. These models, denoted $M_1$, $M_2$, and $M_3$, are of the form $T_i=m_j(X_i)+\varepsilon_i$, where the variable $\varepsilon_i$ represents noise such that $(X_i)$ and $(\varepsilon_i)$ are independent and identically distributed random variables; the noise terms follow the normal distribution, i.e., $\varepsilon_i\sim\mathcal{N}(0,\sigma^{2})$. Since the random variable $T$ is supposed to be nonnegative, we choose each $m_j$ such that $T$ is positive. Thus, each model corresponds to its own $m_j$. In our study, we chose $X$ with a normal distribution for the model $M_1$, with a normal distribution for $M_2$, and with a lognormal distribution for $M_3$. Like the variable $T$, the censoring variable $C$ is assumed to be nonnegative. Here, the random variable $C$ follows a lognormal distribution. Then, we considered the theoretical conditional elasticity function to be estimated as resulting from the lognormal distribution; i.e., the conditional probability density of $T$ given $X=x$ is that of the lognormal distribution with mean $m_j(x)$ and standard deviation $\sigma$, respectively, for $M_1$, $M_2$, and $M_3$. The behavior of the estimator is evaluated over several parameters, such as the sample size $n$ and the percentage of censoring controlled by the parameters of the law of $C$. To compare the efficiency, i.e., the speed of convergence, of the different models, we fixed the same censoring percentage for all three models, and to compare the effect of censoring on the convergence and on the performance of the estimator, we set two censoring percentages.
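A hedged R sketch of one simulated sample; the exact model functions $m_j$ are not fully recoverable from the text, so the parabolic stand-in $m(x)=1+x^{2}$ and all parameter values below are illustrative.

```r
# One simulated right-censored sample in the spirit of the models above.
set.seed(123)
n     <- 100
X     <- rnorm(n)                    # covariate
eps   <- rnorm(n, sd = 0.1)          # noise, independent of X
m_par <- function(x) 1 + x^2         # hypothetical parabolic model
t_lat <- m_par(X) + eps              # latent positive response T_i
c_cen <- rlnorm(n, meanlog = 1)      # lognormal censoring variable C_i
Y     <- pmin(t_lat, c_cen)          # observed Y_i
delta <- as.numeric(t_lat <= c_cen)  # censoring indicator
```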

It is well known that, in the numerical study of nonparametric estimators, the choice of the kernel does not have much influence on the quality or the convergence of the estimator. Thus, for our practical study, we have chosen the Gaussian kernel for the two kernel functions $K$ and $W$. On the other hand, we know that the choice of the smoothing parameter has a very great influence on the performance of nonparametric estimators. Since our estimator depends essentially on a single smoothing parameter $h$, we proceed as follows: the smoothing parameter is selected using the empirical mean integrated squared error (MISE) approach on the compact $[Y_{\min},Y_{\max}]$, where $Y_{\min}$ and $Y_{\max}$ are the minimum and the maximum of the observations, respectively. Explicitly, for a given bandwidth $h$ and for $x$ fixed, the empirical MISE is given by the formula
$$\mathrm{MISE}(h)=\frac{1}{N}\sum_{j=1}^{N}\bigl(e_n(t_j\mid x)-e(t_j\mid x)\bigr)^{2}, \tag{66}$$
where the $t_j$ form a grid of $N$ points of $[Y_{\min},Y_{\max}]$ and $e_n(\cdot\mid x)$ is the estimator computed on the sample. Thus, a scan over the values of $h$ allows us to determine the optimal value of the bandwidth which minimizes the empirical MISE (66). Considering a sequence of tuning parameters $(h_m)$ and generating several values of $\mathrm{MISE}(h_m)$, we determine the optimal value of the smoothing parameter by
$$h_{\mathrm{opt}}=\arg\min_{h_m}\mathrm{MISE}(h_m).$$
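A hedged R sketch of this bandwidth scan, reusing elasticity_hat and the simulated sample from the sketches above; e_true is an illustrative stand-in for the theoretical conditional elasticity, which is known in the simulation setting.

```r
# Bandwidth selection by minimizing the empirical MISE (66) at x = 0.
library(survival)
km_G   <- survfit(Surv(Y, 1 - delta) ~ 1)
Gbar   <- stepfun(km_G$time, c(1, km_G$surv))    # Kaplan-Meier G-bar
e_true <- function(t) t * dlnorm(t) / plnorm(t)  # illustrative target
t_grid <- seq(quantile(Y, 0.05), quantile(Y, 0.95), length.out = 50)
h_grid <- seq(0.05, 1, by = 0.05)
mise <- sapply(h_grid, function(h) {
  est <- sapply(t_grid, elasticity_hat, x = 0, X = X, Y = Y, delta = delta,
                Gbar = Gbar, h = h, g = h)
  mean((est - e_true(t_grid))^2)                 # empirical MISE at x = 0
})
h_opt <- h_grid[which.min(mise)]                 # minimizer of (66)
```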

With MATLAB, we were able to determine the optimal parameter $h$ (one value for each of the models $M_1$, $M_2$, and $M_3$), and we calculated the residual mean squared error (RMSE) to compare the speed of convergence of the three models studied and also to compare the effect of the censoring rates. The residual mean squared error is given by
$$\mathrm{RMSE}=\left(\frac{1}{n}\sum_{i=1}^{n}\bigl(e_n(Y_i\mid X_i)-e(Y_i\mid X_i)\bigr)^{2}\right)^{1/2},$$
where $(X_i,Y_i)$ is a sequence of random observations from the chosen models. The numerical results obtained are grouped in Tables 1 and 2, together with their corresponding graphical representations (Figures 2 and 3).

4. Conclusion

In this article, we have shown that the elasticity function is monotonically decreasing, and we have constructed a kernel estimator of it. The objective of this article was to show that the elasticity function is monotonic for distributions with support in $(0,+\infty)$; then to build a nonparametric estimator and study its almost sure convergence, i.e., its strong consistency; and finally to make a numerical study in order to check the adequacy with the theory. Numerically, we notice that one of the models (see Tables 1 and 2) has a higher convergence speed than the others. In addition, we note that when the censoring rate is higher, its impact on convergence is very visible when the sample size is less than 100. Remember that the lack of computing power led us not to test sample sizes greater than 100, such as 150, 200, and 500, because with a size $n=100$, the computation time on our machine was already 44,743 seconds. It should also be noted that, the objectives here being to find the direction of variation of the elasticity function and to construct an estimator, in our next article we will study the asymptotic normality of this estimator in order to derive a theoretical and numerical study based on the central limit theorem (CLT).

Data Availability

The numerical data simulated using the MATLAB and R software with the three models (parabolic, exponential, and logarithmic) used to support the conclusions of this study are not real data. These data are described in the main file of the article, more precisely in the Simulation Study section.

Conflicts of Interest

The author declares that he has no conflicts of interest.