Continuous Tsallis and Renyi extropy with pharmaceutical market application

In this paper, the Tsallis and Renyi extropy are presented as continuous measures of information for continuous distributions. Furthermore, their features and connections to other information measures are introduced. Some stochastic comparisons and results on order statistics and upper records are given. Moreover, some theorems about the maximum Tsallis and Renyi extropy are discussed. Finally, numerical results for the non-parametric estimation of the Tsallis extropy are calculated for simulated and real data, with an application to a time series model and its forecasting.


Introduction
The continuous Shannon entropy (Shannon [14]) of a random variable (RV) X supported on ℝ is given by

SH(X) = −∫_ℝ h(x) ln h(x) dx, (1.1)

where h(.) is the probability density function (pdf). Lad et al. [5] produced the extropy as a dual measure to Shannon entropy. The extropy of a discrete RV X supported on Q = {x_1, ..., x_N}, with corresponding probability vector p = (p_1, ..., p_N), is

J(p) = −∑_{i=1}^{N} (1 − p_i) ln(1 − p_i). (1.2)

Moreover, the extropy of a continuous RV X supported on ℝ has been introduced in many pieces of literature (see, for example, Raqab and Qiu [11] and Qiu [9]) and can be shown as follows:

J(X) = −(1/2) ∫_ℝ h²(x) dx. (1.3)

The literature has offered several entropy measures and their generalizations. Among these uncertainty generalizations, Tsallis [15] presented the Tsallis entropy. The continuous Tsallis (C-Ts) entropy of a continuous RV X supported on ℝ, for η > 0, η ≠ 1, is defined as follows:

TEn_η(X) = (1/(η − 1)) (1 − ∫_ℝ h^η(x) dx), (1.4)

and lim_{η→1} TEn_η(X) = SH(X).
Renyi [12] suggested a model referred to as the continuous Renyi (C-Re) entropy of order η of a continuous RV X with pdf h(x),

REn_η(X) = (1/(1 − η)) ln ∫_ℝ h^η(x) dx,

where η > 0, η ≠ 1. It is simple to see that REn_η(X) tends to SH(X) as η → 1. The Tsallis and Renyi extropy under discrete distributions have been presented in the literature. Xue and Deng [19] suggested the Tsallis extropy model, the dual of the Tsallis entropy function, and examined its maximum value. Besides, Balakrishnan et al. [2] studied the Tsallis extropy and applied it to pattern recognition. Liu and Xiao [6] introduced the Renyi extropy and examined its maximum value. Jawa et al. [4] discussed the past and residual Tsallis and Renyi extropy via the softmax function.
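The convergence of the Tsallis and Renyi entropies to the Shannon entropy as η → 1 can be verified numerically. The following sketch (our own illustration, not from the paper; the midpoint-rule integrator and the U(0, 2) example are assumptions made here) evaluates all three measures for a uniform density:

```python
import math

def tsallis_entropy(h, a, b, eta, n=20_000):
    # C-Ts entropy (1.4): (1/(eta-1)) * (1 - integral of h(x)**eta), midpoint rule
    dx = (b - a) / n
    integral = sum(h(a + (i + 0.5) * dx) ** eta for i in range(n)) * dx
    return (1.0 - integral) / (eta - 1.0)

def renyi_entropy(h, a, b, eta, n=20_000):
    # C-Re entropy (Renyi [12]): (1/(1-eta)) * ln(integral of h(x)**eta)
    dx = (b - a) / n
    integral = sum(h(a + (i + 0.5) * dx) ** eta for i in range(n)) * dx
    return math.log(integral) / (1.0 - eta)

def shannon_entropy(h, a, b, n=20_000):
    # continuous Shannon entropy: -integral of h(x) * ln h(x)
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        hx = h(a + (i + 0.5) * dx)
        total -= hx * math.log(hx) * dx
    return total

uniform = lambda x: 0.5                          # pdf of U(0, 2)
sh = shannon_entropy(uniform, 0, 2)              # ln 2
ts = tsallis_entropy(uniform, 0, 2, eta=1.001)   # approaches ln 2 as eta -> 1
re = renyi_entropy(uniform, 0, 2, eta=3.0)       # equals ln 2 for any eta (uniform pdf)
```

For the uniform density the Renyi entropy is exactly ln(b − a) for every η, which makes the η → 1 limit easy to see.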
This paper introduces the C-Ts and C-Re extropy under continuous lifetime distributions and presents the maximum of both models. The remainder of this article is organized as follows: Section 2 discusses the C-Ts extropy model, its properties, and its connection to other measures; examples of the models for different distributions are also introduced. Section 3 gives the maximum C-Ts extropy and some properties depending on it. Section 4 provides the maximum C-Re extropy. Finally, Section 5 closes the article with some non-parametric estimations of the C-Ts extropy applied to simulated and real data, and discusses the estimation for the forecasting time series of OECD pharmaceutical market data.

Continuous Tsallis extropy
In this section, we introduce the C-Ts extropy based on continuous lifetime distributions.
In the same manner as Lad et al. [5], we can present the extropy of a continuous RV X supported on ℝ as follows:

J(X) = −∫_ℝ (1 − h(x)) ln(1 − h(x)) dx. (2.1)

In our work, we will deal with both Eqs (1.3) and (2.1) as representative forms of the extropy. Inspired by the idea of the discrete Tsallis extropy and by continuous lifetime distributions, we present the C-Ts extropy in the following definition.

Definition 2.1. Let X be a continuous RV supported in [a, b], −∞ < a < b < ∞, having a pdf h(.). Before we introduce the concept of the C-Ts extropy, we must mention that the expression (1 − h(x))^η can be negative or non-negative according to whether h(x) > 1 or h(x) ≤ 1, respectively. If h(x) ≤ 1, then (1 − h(x))^η gives a real value for all η > 0, η ≠ 1. If h(x) > 1, then (1 − h(x))^η gives a real value when η ∈ ℤ⁺\{1}, and a complex result otherwise. Then, the C-Ts extropy can be given as

TEx_η(X) = (1/(η − 1)) ∫_a^b [(1 − h(x)) − (1 − h(x))^η] dx, (2.2)

where the conditions on η can be given in two cases: (i) h(x) ≤ 1 with η > 0, η ≠ 1; (ii) h(x) > 1 for some x with η ∈ ℤ⁺\{1}.

The C-Ts extropy is non-negative. Proof. From (2.2), the C-Ts extropy can be rewritten as

TEx_η(X) = (1/(η − 1)) ∫_a^b (1 − h(x)) [1 − (1 − h(x))^{η−1}] dx.

Provided that h(x) ≤ 1: when η > 1, the function z(y) = y^{η−1} is increasing, y > 0, therefore 1 − (1 − h(x))^{η−1} ≥ 0; when 0 < η < 1, the function z(y) = y^{η−1} is decreasing, y > 0, therefore 1 − (1 − h(x))^{η−1} ≤ 0. In either case the integrand and the factor 1/(η − 1) share the same sign, so the C-Ts extropy is non-negative.
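The non-negativity claim can be checked numerically. The sketch below assumes the definition takes the continuous analogue of the discrete Tsallis extropy, (1/(η−1)) ∫ [(1−h) − (1−h)^η] dx, and the triangular test density is our own choice:

```python
import math

def tsallis_extropy(h, a, b, eta, n=50_000):
    # C-Ts extropy (2.2), assuming the reconstructed integrand
    # (1 - h(x)) - (1 - h(x))**eta; midpoint-rule quadrature
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        q = 1.0 - h(a + (i + 0.5) * dx)  # needs h(x) <= 1 unless eta is an integer
        total += (q - q ** eta) * dx
    return total / (eta - 1.0)

# triangular pdf on [0, 2]: h(x) = 1 - |x - 1| satisfies h(x) <= 1
tri = lambda x: 1.0 - abs(x - 1.0)
vals = {eta: tsallis_extropy(tri, 0, 2, eta) for eta in (0.5, 2.0, 3.0)}
```

For this density, ∫ h² dx = 2/3, so at η = 2 the value should be 1 − 2/3 = 1/3, and every value is non-negative as the proof predicts.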
Example 2.1. Assume that the continuous RV X has a continuous uniform distribution over [a, b], −∞ < a < b < ∞, symbolized by U(a, b), with pdf h(x) = 1/(b − a). Then, from (2.2), the C-Ts extropy is given by

TEx_η(X) = ((b − a)/(η − 1)) [(1 − 1/(b − a)) − (1 − 1/(b − a))^η]. (2.4)

In particular, the C-Ts extropy equals zero if b − a = 1.
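The uniform closed form is short enough to encode directly. A minimal sketch (the closed form is our reconstruction of Example 2.1 from (2.2) with constant pdf h = 1/(b − a)):

```python
def uniform_tsallis_extropy(a, b, eta):
    # reconstructed closed form: ((b-a)/(eta-1)) * [(1 - 1/(b-a)) - (1 - 1/(b-a))**eta]
    q = 1.0 - 1.0 / (b - a)
    return (b - a) * (q - q ** eta) / (eta - 1.0)

zero = uniform_tasllis = uniform_tsallis_extropy(0.0, 1.0, 2.0)  # b - a = 1 gives 0
half = uniform_tsallis_extropy(0.0, 2.0, 2.0)                    # 2 * (0.5 - 0.25) = 0.5
```

When b − a = 1 the constant density equals 1, every bracketed term vanishes, and the extropy is exactly zero, matching the remark in the example.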
Example 2.2. Consider a continuous RV X having the power function distribution with parameters θ and λ. Then the C-Ts extropy follows from (2.2). Figure 1 shows the C-Ts extropy of the power function distribution for different values of θ and λ; we can see that the C-Ts extropy increases as the difference between θ and λ increases. In view of Figure 1, all the given values of θ and λ of the power function distribution satisfy the condition h(x) ≤ 1 in Eq (2.2), and the C-Ts extropy exists for η > 0, η ≠ 1. For example, Figure 2 shows the plot of h(x) ≤ 1 when θ = 5 and 0 < λ ≤ 7; the same figure also shows that h(x) takes values both below and above 1 for values like θ = 6 and 0 < λ ≤ 4. As a result, the C-Ts extropy exists only under the conditions described in Definition 2.1. The next proposition discusses the C-Ts extropy as η tends to 1.
Remark 2.1. From Definition 2.1, when the parameter η = 2 is selected, the C-Ts extropy is valid whether h(x) ≤ 1 or h(x) > 1. Proof. From (2.2), at η = 2, we have

TEx_2(X) = ∫_a^b (1 − h(x)) h(x) dx = 1 − ∫_a^b h²(x) dx = 1 + 2 J(X),

which is real for any pdf. Definition 2.2. (Shaked and Shanthikumar [13]) Let X and Y be RVs with pdfs h and g and cdfs H and G, respectively. In the dispersive order, X is said to be smaller than Y, symbolized by X ≤_DIS Y, if H⁻¹(v) − H⁻¹(u) ≤ G⁻¹(v) − G⁻¹(u) for all 0 < u ≤ v < 1. If X ≤_DIS Y, then TEx_2(X) ≤ TEx_2(Y). Proof. From Definition 2.1 and Remark 2.1, at η = 2, we have TEx_2(X) = 1 − ∫_0^1 h(H⁻¹(u)) du; since X ≤_DIS Y implies h(H⁻¹(u)) ≥ g(G⁻¹(u)) for all 0 < u < 1, the claim follows. Based on the independent and identically distributed (iid) observations X_1, X_2, ..., X_n and Y_1, Y_2, ..., Y_n, if X ≤_DIS Y, then we have (1) X_{i:n} ≤_DIS Y_{i:n} (see Theorem 3.B.26 in Shaked and Shanthikumar [13]), i = 1, 2, ..., n.
(2) P_X^n ≤_DIS P_Y^n (see Belzunce et al. [3]), where X_{i:n} and Y_{i:n}, i = 1, 2, ..., n, are the ith order statistics of X_1, X_2, ..., X_n and Y_1, Y_2, ..., Y_n, respectively, and P_X^n and P_Y^n are the nth upper records of X and Y, respectively. Thus, we can conclude with the following results.
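The η = 2 identity of Remark 2.1, TEx_2(X) = 1 − ∫ h², can be checked numerically even when the density exceeds 1 (which rules out non-integer η). A sketch with a narrow uniform density (our own example):

```python
def tex2(h, a, b, n=50_000):
    # Remark 2.1 at eta = 2: TEx_2(X) = integral of h(1-h) = 1 - integral of h^2
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        hx = h(a + (i + 0.5) * dx)
        total += hx * (1.0 - hx) * dx
    return total

# U(0, 0.5) has h(x) = 2 > 1, yet eta = 2 remains valid:
narrow = lambda x: 2.0
val = tex2(narrow, 0.0, 0.5)   # 1 - integral of 4 over [0, 0.5] = 1 - 2 = -1
```

Consistently, the extropy here is J(X) = −(1/2)∫ h² = −1, and 1 + 2J(X) = −1 agrees with the direct integration; negativity is possible precisely because h > 1.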
The pdf of the jth order statistic X_{j:n} in a sample of size n is

h_{j:n}(x) = (1/B(j, n − j + 1)) [H(x)]^{j−1} [H̄(x)]^{n−j} h(x), (2.6)

where B(j, n − j + 1) is the beta function, H̄(.) = 1 − H(.), and H(.) is the cumulative distribution function (cdf). In the following example, based on the U(a, b) distribution, we will obtain the C-Ts extropy of the jth order statistic X_{j:n} as follows.
Thus, from (2.6), Definition 2.1 and Remark 2.1, the C-Ts extropy at η = 2 of the jth order statistic X_{j:n} of the U(a, b) distribution is given by

TEx_2(X_{j:n}) = 1 − B(2j − 1, 2(n − j) + 1) / ((b − a) B(j, n − j + 1)²).

Based on the jth order statistic X_{j:n}, we will obtain some significant results on the C-Ts extropy for the choice η = 2.
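Since the closed form above is our reconstruction (the original expression was lost in extraction), it is worth cross-checking against direct integration of the order-statistic pdf (2.6); the j = 2, n = 3 example is our own:

```python
import math

def beta_fn(p, q):
    # beta function via log-gamma for numerical stability
    return math.exp(math.lgamma(p) + math.lgamma(q) - math.lgamma(p + q))

def tex2_order_stat_uniform(j, n, a, b):
    # reconstructed closed form: 1 - B(2j-1, 2(n-j)+1) / ((b-a) * B(j, n-j+1)^2)
    return 1.0 - beta_fn(2 * j - 1, 2 * (n - j) + 1) / ((b - a) * beta_fn(j, n - j + 1) ** 2)

def tex2_order_stat_numeric(j, n, a, b, m=100_000):
    # direct check: TEx_2 = 1 - integral of h_{j:n}(x)^2, with h_{j:n} from (2.6)
    dx = (b - a) / m
    c = 1.0 / (beta_fn(j, n - j + 1) * (b - a))
    total = 0.0
    for i in range(m):
        u = ((i + 0.5) * dx) / (b - a)         # H(x) = (x - a)/(b - a) for U(a, b)
        hj = c * u ** (j - 1) * (1.0 - u) ** (n - j)
        total += hj * hj * dx
    return 1.0 - total

closed = tex2_order_stat_uniform(2, 3, 0.0, 2.0)    # hand-checked value: 0.4
numeric = tex2_order_stat_numeric(2, 3, 0.0, 2.0)
```

For the median of three U(0, 2) draws, ∫ h²_{2:3} = 18·B(3, 3)/2 = 0.6, so both routes give 0.4.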
Proposition 2.6. Let X and Y be two continuous RVs with cdfs H and G, respectively, having common support, and suppose the relevant integral exists. Then, for a fixed j (1 ≤ j ≤ n), X and Y have a common distribution iff TEx_2(X_{j:n}) = TEx_2(Y_{j:n}).

The maximum C-Ts extropy
In this section, we will present the maximum C-Ts extropy by the following theorem.
Theorem 3.1. Let X be a continuous RV supported in [a, b], −∞ < a < b < ∞, with pdf h(.). Then, from (2.2), X has the maximum C-Ts extropy iff it follows the continuous uniform distribution.

Proof. We can obtain the maximization of TEx_η(X) using the Lagrange multipliers method, subject to the constraint

∫_a^b h(x) dx = 1. (3.1)

The Lagrange function is L(X) = (1/(η − 1)) ∫_a^b [(1 − h(x)) − (1 − h(x))^η] dx + µ(∫_a^b h(x) dx − 1). Differentiating L(X) with respect to h(x) and then equating to zero, we obtain

h(x) = 1 − ((1 − µ(η − 1))/η)^{1/(η−1)}, (3.2)

which is constant in x. To find the value of µ, we substitute (3.2) in the constraint (3.1); this forces h(x) = 1/(b − a), the pdf of the continuous U(a, b) distribution. Equivalently, from (1.4) and Definition 2.1, the objective can be written through the C-Ts entropy, the corresponding derivative vanishes at the same constant density, and the rest of the proof follows in the same manner as in Balakrishnan et al. [2].

Theorem 3.2.
Let X be a continuous RV supported in [a, b], −∞ < a < b < ∞. Then, from Definition 2.1, the C-Ts extropy is less than or equal to 1.
Proof. We can see that the C-Ts extropy of the continuous uniform distribution increases to 1 as Z = b − a increases. From (2.4), consider the function

T(Z) = Z[(1 − 1/Z) − (1 − 1/Z)^η] = (Z − 1) − (Z − 1)^η / Z^{η−1},

so that the uniform C-Ts extropy equals T(Z)/(η − 1);

then the sign of its derivative is, by the mean value theorem, that of η(Z − 1 + ε)^{η−1} − η(Z − 1)^{η−1}, for some ε ∈ (0, 1). Therefore, T(Z) increases for η > 1 and decreases for 0 < η < 1, and in both cases T(Z)/(η − 1) increases with Z. Moreover, since (1 − 1/Z)^η = 1 − η/Z + O(Z⁻²), as Z tends to ∞ we have the limit of the uniform C-Ts extropy:

lim_{Z→∞} T(Z)/(η − 1) = 1.

Combined with the maximum C-Ts extropy given in Theorem 3.1, the C-Ts extropy is less than or equal to 1. Alternatively, we can implement the proof simply by using Bernoulli's inequality: for η > 1, (1 − h(x))^η ≥ 1 − ηh(x), so (1 − h(x)) − (1 − h(x))^η ≤ (η − 1)h(x), and integrating gives TEx_η(X) ≤ ∫_a^b h(x) dx = 1; for 0 < η < 1 the inequality reverses, and dividing by the negative factor η − 1 yields the same bound.
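The monotone approach of the uniform C-Ts extropy to its bound of 1 is easy to confirm numerically (a sketch using the reconstructed closed form of Example 2.1; η = 2 is our choice):

```python
def uniform_tex(Z, eta):
    # C-Ts extropy of U(a, b) as a function of Z = b - a
    q = 1.0 - 1.0 / Z
    return Z * (q - q ** eta) / (eta - 1.0)

# at eta = 2 this is Z * q * (1 - q) = 1 - 1/Z: increasing in Z, bounded by 1
vals = [uniform_tex(Z, 2.0) for Z in (2.0, 10.0, 100.0, 10_000.0)]
```

The values 0.5, 0.9, 0.99, 0.9999 climb toward but never exceed 1, matching Theorem 3.2.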

Continuous Renyi extropy
Inspired by the idea of the discrete Renyi extropy introduced by Liu and Xiao [6], we present the C-Re extropy in this section. Let X be a continuous RV supported in [a, b], −∞ < a < b < ∞, having a pdf h(.). Since the domain of the logarithmic function is (0, ∞), the C-Re extropy exists only when h(x) ≤ 1 and b − a > 1. Otherwise, it returns a complex result or vanishes. Then, as the continuous analogue of the discrete form of Liu and Xiao [6], the C-Re extropy, for η > 0, η ≠ 1, is given by

REx_η(X) = ((b − a − 1)/(1 − η)) ln ∫_a^b ((1 − h(x))/(b − a − 1))^η dx, (4.1)

where h(x) ≤ 1 and b − a > 1. Moreover, applying L'Hôpital's rule to (4.1), as η → 1 the C-Re extropy tends to the extropy-type limit −∫_a^b (1 − h(x)) ln((1 − h(x))/(b − a − 1)) dx. Example 4.1. Suppose that the continuous RV X has the U(a, b) distribution, provided that b − a > 1.
Then, from (4.1), the C-Re extropy is REx_η(X) = (b − a − 1) ln(b − a), where b − a > 1; notably, it does not depend on η.

The maximum C-Re extropy
In this subsection, we will present the maximum C-Re extropy by the following theorem.
Thus, from (4.1), X has the maximum C-Re extropy iff it follows the continuous uniform distribution.
Proof. From (4.1), we can obtain the maximization of REx_η(X) using the Lagrange multipliers method, subject to the constraint

∫_a^b h(x) dx = 1. (4.4)

Differentiating the Lagrange function L(X) with respect to h(x) and then equating to zero, we obtain that (1 − h(x))^{η−1} must be constant in x, so h(x) is constant on [a, b] (4.5). To find the value of µ, we substitute (4.5) in the constraint (4.4), giving (4.6). Substituting (4.6) in (4.5), it holds that h(x) = 1/(b − a), which is the pdf of the continuous U(a, b) distribution.

Non-parametric estimation
The non-parametric estimation is used in many works to estimate the extropy and its related measures. Non-parametric kernel density estimation is a common smoothed estimator in the literature; see, for example, Qiu and Jia [5], Noughabi and Jarrahiferiz [10] and Jahanshahi et al. [12]. In this section, we estimate the C-Ts extropy using the kernel non-parametric estimator of the pdf. Let the sequence {X_j, 1 ≤ j ≤ n} be a random sample drawn from a population with pdf h(.). From Definition 2.1, the empirical Tsallis extropy is obtained by replacing h with its kernel estimate h_n and evaluating over the order statistics X_{1:n} ≤ X_{2:n} ≤ ... ≤ X_{n:n} of the random sample. Here, h_n(.) is the kernel density estimator of h(.) defined by (see Parzen [8])

h_n(x) = (1/(nB)) ∑_{j=1}^{n} kr((x − X_j)/B),

where kr(.) is the kernel function (we use the Gaussian kernel) and B is the bandwidth. To choose the bandwidth, we use different methods: plug-in selectors (the rule of thumb B_RT and the direct plug-in B_DPI) and cross-validation selectors (unbiased cross-validation B_UCV and biased cross-validation B_BCV). Figure 4 shows the Gaussian kernel density estimator with rule-of-thumb bandwidth (B_RT−Gaussian) compared with the other bandwidth selections. Tables 1 and 2 show the Tsallis extropy estimator for different values of η and sample sizes n = 10, 20, 30, 70, 90, 100, 150, 200, and we can conclude the following: (1) For fixed η, as n increases, the Tsallis extropy decreases.
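The estimation pipeline can be sketched end to end. In the sketch below, the Silverman rule-of-thumb constant for B_RT and the spacing-based quadrature over the order statistics are our own assumptions (the paper does not state its exact quadrature), and the U(0, 5) sample is simulated for illustration:

```python
import math, random

def gaussian_kernel(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def kde(x, sample, B):
    # Parzen kernel density estimator: h_n(x) = (1/(nB)) sum kr((x - X_j)/B)
    return sum(gaussian_kernel((x - xj) / B) for xj in sample) / (len(sample) * B)

def rot_bandwidth(sample):
    # rule-of-thumb selector B_RT (Silverman's 1.06 constant; an assumption here)
    n = len(sample)
    mean = sum(sample) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    return 1.06 * sd * n ** (-1 / 5)

def empirical_tsallis_extropy(sample, eta):
    # plug-in estimator: evaluate (2.2) with h replaced by h_n, integrating
    # over spacings of the order statistics (a sketch, not the paper's exact form)
    xs = sorted(sample)
    B = rot_bandwidth(xs)
    total = 0.0
    for lo, hi in zip(xs[:-1], xs[1:]):
        q = 1.0 - min(kde(0.5 * (lo + hi), xs, B), 1.0)  # clip so (1-h)^eta stays real
        total += (q - q ** eta) * (hi - lo)
    return total / (eta - 1.0)

random.seed(0)
sample = [random.uniform(0.0, 5.0) for _ in range(100)]
est = empirical_tsallis_extropy(sample, eta=2.0)  # population value for U(0, 5): 0.8
```

With n = 100 the estimate sits somewhat below the population value 0.8, mainly from boundary bias of the kernel estimator, consistent with the convergence behavior reported in Tables 1 and 2.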

Pharmaceutical market dataset
In this subsection, we illustrate a dataset that compares sales and consumption across several countries in the pharmaceutical business. As shown in Figures 5 and 6, this study focuses on 8 OECD countries and the pharmaceutical market variables (Antidepressants; Anxiolytics; Drugs used in diabetes; Respiratory system) from 2010 to 2021, measured in defined daily dosage per 1000 inhabitants per day; see [7]. Table 3 shows the Tsallis extropy estimator for different values of η, and we can conclude the following: (1) When η increases, the Tsallis extropy decreases.
(2) The Tsallis extropy under the bandwidth B_DPI gives larger values than the other bandwidth selections. In this part, we study the forecasting time series of the Austria pharmaceutical market from 2021 to 2030 for the two variables, anxiolytics and drugs used in diabetes, and then obtain the Tsallis extropy estimator of the results. Figures 7 and 8 and Tables 4 and 5 show the Tsallis extropy estimator of the 80% and 95% forecasting intervals of anxiolytics and drugs used in diabetes of the Austria pharmaceutical market, respectively, and we can conclude the following: (1) When η increases, the Tsallis extropy decreases.
(2) The Tsallis extropy under the bandwidth B_RT gives larger values than the other bandwidth selections. Figure 9. Forecasting time series of the Austria pharmaceutical market.

Conclusions
In this paper, we have discussed the C-Ts and C-Re extropy in the continuous case and the conditions under which continuous distributions are valid for the C-Ts and C-Re extropy. We have illustrated some properties of the presented models, with examples for distributions such as the uniform and power function distributions. Besides, we compared our models with other uncertainty measures and with order statistics. Moreover, we have discussed the conditions for the maximum C-Ts and C-Re extropy, both of which are attained by the uniform distribution. A non-parametric estimator of the Tsallis extropy has been introduced, and its behavior depends on the values of n and η and on the selection of the bandwidth. Comparing the C-Ts and C-Re extropy with the original versions of entropy, no constraints are placed on the pdf for the entropy measures, whereas some restrictions on the pdf are required for the C-Ts and C-Re extropy. Furthermore, when the Tsallis entropy parameter η approaches 1, the Tsallis entropy converges to the classical Shannon entropy; in contrast, the C-Ts extropy converges to the extropy measure as η tends to 1 only when h(x) ≤ 1. The choice of the non-extensive parameter η can significantly impact the behavior and interpretation of the measure; in particular, when η = 2, the C-Ts extropy and entropy coincide, which means that the two models have the same performance in evaluating uncertain information. In future work, some related entropy studies, e.g., quantum X-entropy in generalized quantum evidence theory (Xiao [16]), the maximum entropy negation of a complex-valued distribution (Xiao [17]), and evidential fuzzy multicriteria decision making based on belief entropy (Xiao [18]), can be implemented for the extropy and its related measures.

Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.