Types of Statistical Indicators Characterized by 2-Pre-Hilbert Spaces

Abstract: In this article, we establish new results related to a 2-pre-Hilbert space, among which we mention the Cauchy-Schwarz inequality. We show several applications related to statistical indicators such as the average, variance, standard deviation and correlation coefficient, using the standard 2-inner product and some of its properties. We also present a brief characterization of a linear regression model for discrete random variables.


Introduction
In Reference [1], Gähler introduced the definitions of a linear 2-normed space and a 2-metric space. In References [2,3], Diminnie, Gähler and White studied the properties of 2-inner product spaces.
Several results related to the theory of 2-inner product spaces can be found in Reference [4]. In Reference [5], Dragomir et al. show the corresponding version of the Boas-Bellman inequality in 2-inner product spaces, and in Reference [6] the superadditivity and monotonicity of 2-norms generated by inner products were studied.
We consider $X$ a linear space of dimension greater than 1 over the field $\mathbb{K}$, where $\mathbb{K}$ is the field of real or complex numbers. Suppose that $(\cdot,\cdot|\cdot)$ is a $\mathbb{K}$-valued function defined on $X\times X\times X$ satisfying the following conditions: (a) $(u,u|w)\geq 0$, and $(u,u|w)=0$ if and only if $u$ and $w$ are linearly dependent; (b) $(u,u|w)=(w,w|u)$; (c) $(u,v|w)=\overline{(v,u|w)}$; (d) $(\alpha u,v|w)=\alpha(u,v|w)$, for any scalar $\alpha\in\mathbb{K}$; (e) $(u_1+u_2,v|w)=(u_1,v|w)+(u_2,v|w)$, for all $u,u_1,u_2,v,w\in X$. The function $(\cdot,\cdot|\cdot)$ is called a 2-inner product and $(X,(\cdot,\cdot|\cdot))$ is called a 2-inner product space (or 2-pre-Hilbert space).
A real-valued function $\|\cdot|\cdot\|$ defined on $X\times X$ and satisfying the conditions: (i) $\|u|w\|=0$ if and only if $u$ and $w$ are linearly dependent; (ii) $\|u|w\|=\|w|u\|$; (iii) $\|\alpha u|w\|=|\alpha|\,\|u|w\|$, for any scalar $\alpha\in\mathbb{K}$; (iv) $\|u+v|w\|\leq\|u|w\|+\|v|w\|$, is called a 2-norm on $X$, and $(X,\|\cdot|\cdot\|)$ is called a linear 2-normed space.
It is easy to see that if $X=(X,(\cdot,\cdot|\cdot))$ is a 2-inner product space over the field of real numbers $\mathbb{R}$ or the field of complex numbers $\mathbb{C}$, then $(X,\|\cdot|\cdot\|)$ is a linear 2-normed space, where the 2-norm $\|\cdot|\cdot\|$ is generated by the 2-inner product $(\cdot,\cdot|\cdot)$ via $\|u|w\|=\sqrt{(u,u|w)}$.
Two consequences of the above properties are given by the following: the parallelogram law [4],
$$\|u+v|w\|^2+\|u-v|w\|^2=2\left(\|u|w\|^2+\|v|w\|^2\right),$$
for all $u,v,w\in X$, and the Cauchy-Schwarz inequality (see e.g., References [4,7]),
$$|(u,v|w)|\leq\|u|w\|\,\|v|w\|, \tag{3}$$
for all $u,v,w\in X$. The equality in (3) holds if and only if $u$, $v$ and $w$ are linearly dependent. If $X=(X,\langle\cdot,\cdot\rangle)$ is an inner product space, inequality (3) becomes ([6,8]):
$$\left|\langle u,v\rangle\|w\|^2-\langle u,w\rangle\langle w,v\rangle\right|\leq\sqrt{\|u\|^2\|w\|^2-|\langle u,w\rangle|^2}\,\sqrt{\|v\|^2\|w\|^2-|\langle v,w\rangle|^2}.$$
A reverse of the Cauchy-Schwarz inequality in 2-inner product spaces can be found in Reference [5]: if $u,v,w\in X$ and $a,A\in\mathbb{K}$ are such that
$$\operatorname{Re}(Av-u,u-av|w)\geq 0,$$
or, equivalently,
$$\left\|u-\frac{a+A}{2}v\,\Big|\,w\right\|\leq\frac{1}{2}|A-a|\,\|v|w\|,$$
then
$$0\leq\|u|w\|^2\|v|w\|^2-|(u,v|w)|^2\leq\frac{1}{4}|A-a|^2\|v|w\|^4.$$
The constant $\frac{1}{4}$ is the best possible. Another important inequality in a 2-inner product space $X$ is the triangle inequality [4],
$$\|u+v|w\|\leq\|u|w\|+\|v|w\|,$$
for all $u,v,w\in X$.
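These consequences can be checked numerically in the model case of the standard 2-inner product on $\mathbb{R}^n$ (recalled in the applications section below). A minimal Python sketch, assuming NumPy; the helper names `ip2` and `norm2` are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def ip2(x, y, z):
    # standard 2-inner product: (x, y | z) = <x,y><z,z> - <x,z><z,y>
    return (x @ y) * (z @ z) - (x @ z) * (z @ y)

def norm2(x, z):
    # 2-norm generated by the 2-inner product: ||x|z|| = sqrt((x, x | z))
    return ip2(x, x, z) ** 0.5

u, v, w = rng.standard_normal((3, 5))

# Cauchy-Schwarz (3): |(u, v | w)| <= ||u|w|| * ||v|w||
assert abs(ip2(u, v, w)) <= norm2(u, w) * norm2(v, w) + 1e-12

# parallelogram law: ||u+v|w||^2 + ||u-v|w||^2 = 2(||u|w||^2 + ||v|w||^2)
lhs = norm2(u + v, w) ** 2 + norm2(u - v, w) ** 2
rhs = 2 * (norm2(u, w) ** 2 + norm2(v, w) ** 2)
assert abs(lhs - rhs) < 1e-9

# triangle inequality: ||u+v|w|| <= ||u|w|| + ||v|w||
assert norm2(u + v, w) <= norm2(u, w) + norm2(v, w) + 1e-12
```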
The Cauchy-Schwarz inequality in the real case, $|\langle u,v\rangle|\leq\|u\|\,\|v\|$ (see e.g., References [9,10]), can be obtained from the following identity, as in Reference [11]:
$$\langle u,v\rangle=\|u\|\,\|v\|\left(1-\frac{1}{2}\left\|\frac{u}{\|u\|}-\frac{v}{\|v\|}\right\|^2\right), \tag{7}$$
for all $u,v\in X$, $u,v\neq 0$. An inequality which is an improvement of the Cauchy-Schwarz inequality is the Ostrowski inequality. In Reference [12], we find some refinements of Ostrowski's inequality and an extension to a 2-inner product space. The purpose of this paper is to study some identities in a 2-pre-Hilbert space and to prove new results related to several inequalities in a 2-pre-Hilbert space, among which we mention the Cauchy-Schwarz inequality. The novelty of this article is the introduction, for the first time, of the concepts of average, variance, covariance, standard deviation and correlation coefficient for vectors, using the standard 2-inner product and some of its properties. We also present a brief characterization of a linear regression model for discrete random variables.

Inequalities in a 2-Pre-Hilbert Space
In this section, we obtain some characterizations of the Cauchy-Schwarz inequality for a 2-pre-Hilbert space. First, we use an identity, which is given by the following result:

Lemma 1. If $X=(X,(\cdot,\cdot|\cdot))$ is a 2-inner product space over the field of complex numbers $\mathbb{C}$, and the 2-norm $\|\cdot|\cdot\|$ is generated by the 2-inner product $(\cdot,\cdot|\cdot)$, then we have
$$\|ax-by|z\|^2=|a|^2\|x|z\|^2-2\operatorname{Re}\left(a\bar{b}(x,y|z)\right)+|b|^2\|y|z\|^2, \tag{8}$$
for vectors $x,y$ and $z$ in $X$ and $a,b\in\mathbb{C}$.
Proof. By direct calculation, for all $x,y,z\in X$ and $a,b\in\mathbb{C}$, we have
$$\|ax-by|z\|^2=(ax-by,ax-by|z)=|a|^2(x,x|z)-a\bar{b}(x,y|z)-\bar{a}b\,\overline{(x,y|z)}+|b|^2(y,y|z)$$
$$=|a|^2\|x|z\|^2-2\operatorname{Re}\left(a\bar{b}(x,y|z)\right)+|b|^2\|y|z\|^2,$$
which proves the statement.

Remark 1. If in relation (8) we take $a=\frac{1}{\|x|z\|}$ and $b=\frac{1}{\|y|z\|}$, then we obtain
$$\operatorname{Re}(x,y|z)=\|x|z\|\,\|y|z\|\left(1-\frac{1}{2}\left\|\frac{x}{\|x|z\|}-\frac{y}{\|y|z\|}\Big|z\right\|^2\right), \tag{9}$$
for all nonzero vectors $x$ and $y$ in $X$ such that the pairs of vectors $(x,z)$ and $(y,z)$ are linearly independent. If $(x,y|z)=\overline{(x,y|z)}$, that is, $(x,y|z)$ is real, then we obtain the following relation:
$$(x,y|z)=\|x|z\|\,\|y|z\|\left(1-\frac{1}{2}\left\|\frac{x}{\|x|z\|}-\frac{y}{\|y|z\|}\Big|z\right\|^2\right). \tag{10}$$
The above equality is the extension of equality (7) to a 2-inner product space.
Using Lemma 1 in two conveniently chosen ways, we get an important equality:

Theorem 1. With the above assumptions in a 2-pre-Hilbert space, the following equality holds:
$$\|ax-by|z\|^2+\|bx+ay|z\|^2=\left(|a|^2+|b|^2\right)\left(\|x|z\|^2+\|y|z\|^2\right), \tag{11}$$
for all vectors $x,y$ and $z$ in $X$ and $a,b\in\mathbb{C}$ with $a\bar{b}\in\mathbb{R}$.
Proof. Applying Lemma 1 to the vector $bx+ay$, we obtain
$$\|bx+ay|z\|^2=|b|^2\|x|z\|^2+2\operatorname{Re}\left(b\bar{a}(x,y|z)\right)+|a|^2\|y|z\|^2.$$
Since $a\bar{b}\in\mathbb{R}$, we have $b\bar{a}=\overline{a\bar{b}}=a\bar{b}$, so $\operatorname{Re}(b\bar{a}(x,y|z))=\operatorname{Re}(a\bar{b}(x,y|z))$. By adding the above relation to relation (8), the cross terms cancel and we obtain the relation of the statement.
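A quick numerical sanity check of relation (8) and of Theorem 1 in the real case of the standard 2-inner product on $\mathbb{R}^n$ (so conjugates can be ignored); a sketch, with the helper names `ip2` and `n2` ours:

```python
import numpy as np

rng = np.random.default_rng(1)
x, y, z = rng.standard_normal((3, 4))
a, b = 0.7, -1.3  # real scalars, so all conjugates are trivial

def ip2(u, v, w):
    # standard 2-inner product on R^n (see the applications section)
    return (u @ v) * (w @ w) - (u @ w) * (w @ v)

def n2(u, w):
    return ip2(u, u, w) ** 0.5

# Lemma 1 (real case): ||ax-by|z||^2 = a^2||x|z||^2 - 2ab(x,y|z) + b^2||y|z||^2
lhs = n2(a * x - b * y, z) ** 2
rhs = a**2 * n2(x, z) ** 2 - 2 * a * b * ip2(x, y, z) + b**2 * n2(y, z) ** 2
assert abs(lhs - rhs) < 1e-9

# Theorem 1: ||ax-by|z||^2 + ||bx+ay|z||^2 = (a^2+b^2)(||x|z||^2 + ||y|z||^2)
lhs = n2(a * x - b * y, z) ** 2 + n2(b * x + a * y, z) ** 2
rhs = (a**2 + b**2) * (n2(x, z) ** 2 + n2(y, z) ** 2)
assert abs(lhs - rhs) < 1e-9
```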
Another equality in 2-inner product spaces is given by the following:

Theorem 2. If $X=(X,(\cdot,\cdot|\cdot))$ is a 2-inner product space over the field of complex numbers $\mathbb{C}$, and the 2-norm $\|\cdot|\cdot\|$ is generated by the 2-inner product $(\cdot,\cdot|\cdot)$, then equality (14) holds for all vectors $x,y,z\in X$ and $a,b\in\mathbb{C}$.
Proof. If we make the substitutions $a\to\bar{a}$ and $b\to\bar{b}$ in the equality from Lemma 1, then we find the relation
$$\|\bar{a}x-\bar{b}y|z\|^2=|a|^2\|x|z\|^2-2\operatorname{Re}\left(\bar{a}b(x,y|z)\right)+|b|^2\|y|z\|^2. \tag{12}$$
We can rewrite relation (12) in an equivalent form. If we apply Lemma 1, then we obtain a first identity and, similarly, we deduce a second one. From these relations, and using a suitable algebraic identity, we deduce relation (14). Therefore, the relation of the statement is true.

Remark 2.
It is easy to see that relation (14) takes a simpler form, relation (15).

Corollary 1. With the above assumptions in a 2-pre-Hilbert space, identity (16) holds for all nonzero vectors $x,y$ and $z$ in $X$, the linearly independent pairs of vectors $(x,z)$ and $(y,z)$, and $a,b\in\mathbb{C}$.
Proof. If we make the substitutions $x\to\frac{x}{\|x|z\|}$ and $y\to\frac{y}{\|y|z\|}$ in relation (12), then we deduce equality (16).

Corollary 2.
With the above assumptions in a 2-pre-Hilbert space, the following equalities hold:
$$(b-a)\left(a\|x|z\|^2-b\|y|z\|^2\right)+\|ax-by|z\|^2=ab\,\|x-y|z\|^2, \tag{17}$$
for vectors $x$ and $y$ in $X$ and $a,b\in\mathbb{R}$, and
$$\left\|\frac{x}{\|x|z\|}-\frac{y}{\|y|z\|}\Big|z\right\|^2=\frac{\|x-y|z\|^2-\left(\|x|z\|-\|y|z\|\right)^2}{\|x|z\|\,\|y|z\|}, \tag{18}$$
for nonzero vectors $x,y$ and $z$ in $X$ and the linearly independent pairs of vectors $(x,z)$ and $(y,z)$.
Proof. In relation (12), if we take $a,b\in\mathbb{R}$, then we have $\bar{a}=a$ and $\bar{b}=b$, and a simple rearrangement gives relation (17). For $a=\frac{1}{\|x|z\|}$ and $b=\frac{1}{\|y|z\|}$ in relation (17), we deduce equality (18).

Remark 3.
We can rearrange the expression from relation (18) as follows:
$$\|x|z\|\,\|y|z\|\left\|\frac{x}{\|x|z\|}-\frac{y}{\|y|z\|}\Big|z\right\|^2=\|x-y|z\|^2-\left(\|x|z\|-\|y|z\|\right)^2.$$
But, using the inequality $\min\{a,b\}\leq\sqrt{ab}\leq\max\{a,b\}$ for positive real numbers $a$ and $b$, we deduce the following inequality:
$$\min\{\|x|z\|,\|y|z\|\}\left\|\frac{x}{\|x|z\|}-\frac{y}{\|y|z\|}\Big|z\right\|\leq\sqrt{\|x-y|z\|^2-\left(\|x|z\|-\|y|z\|\right)^2}\leq\max\{\|x|z\|,\|y|z\|\}\left\|\frac{x}{\|x|z\|}-\frac{y}{\|y|z\|}\Big|z\right\|, \tag{19}$$
for nonzero vectors $x,y$ and $z$ in $X$ and the linearly independent pairs of vectors $(x,z)$ and $(y,z)$. This inequality is an extension of Maligranda's inequality from Reference [13] to a 2-inner product space over the field of real numbers.
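Equality (18) and the sandwich inequality (19) can be verified numerically in the same model; a sketch under the same assumptions (NumPy, our helper names):

```python
import numpy as np

rng = np.random.default_rng(2)
x, y, z = rng.standard_normal((3, 4))

def ip2(u, v, w):
    return (u @ v) * (w @ w) - (u @ w) * (w @ v)

def n2(u, w):
    return ip2(u, u, w) ** 0.5

nx, ny = n2(x, z), n2(y, z)
d = n2(x / nx - y / ny, z)                  # || x/||x|z|| - y/||y|z|| | z ||
gap = n2(x - y, z) ** 2 - (nx - ny) ** 2    # right-hand side of the rearranged (18)

# equality (18): nx * ny * d^2 = ||x-y|z||^2 - (||x|z|| - ||y|z||)^2
assert abs(nx * ny * d**2 - gap) < 1e-9

# the min/max sandwich (19)
assert min(nx, ny) * d <= gap**0.5 + 1e-9
assert gap**0.5 <= max(nx, ny) * d + 1e-9
```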
Next, we give an evaluation of the sum of the squares of the norms of two vectors in a 2-inner product space:

Theorem 3. If $a,b\in\mathbb{C}$ with $\operatorname{Re}(a\bar{b})>0$ and $X=(X,(\cdot,\cdot|\cdot))$ is a 2-inner product space over the field of complex numbers $\mathbb{C}$, and the 2-norm $\|\cdot|\cdot\|$ is generated by the 2-inner product $(\cdot,\cdot|\cdot)$, then inequality (20) holds for all nonzero vectors $x,y$ and $z$ in $X$ and the linearly independent pairs of vectors $(x,z)$ and $(y,z)$.
Proof. From relation (16) and the parallelogram identity, we find a suitable equality. Using it, together with the triangle inequality, we prove the relation
$$0\leq\left\|\frac{x}{\|x|z\|}+\frac{y}{\|y|z\|}\Big|z\right\|\leq 2,$$
and, taking into account that $\operatorname{Re}(a\bar{b})>0$, we deduce inequality (20).
Below, we obtain a refinement of the Cauchy-Schwarz inequality and a reverse inequality of the Cauchy-Schwarz inequality in a 2-pre-Hilbert space.

Corollary 3. With the above assumptions in a 2-pre-Hilbert space, inequality (22) holds for all nonzero vectors $x,y$ and $z$ in $X$ and the linearly independent pairs of vectors $(x,z)$ and $(y,z)$.
Proof. If we take $a=b\neq 0$, $a,b\in\mathbb{R}$, in inequality (20), then we find inequality (23). From relation (9), we deduce a corresponding equality and, combining it with inequality (23), we find the inequality of the statement.

Remark 4.
If we take $\|x|z\|=\|y|z\|=1$ in inequality (22), we obtain the corresponding inequality for all $x,y\in X$ with $\|x|z\|=\|y|z\|=1$, where $z\in X$ is a given nonzero vector.
Next, we will show an estimate of the triangle inequality in a linear 2-normed space.
Theorem 4. If $X=(X,\|\cdot|\cdot\|)$ is a linear 2-normed space over the field of real numbers $\mathbb{R}$, then the following inequality holds:
$$\min\{a,b\}\left(\|x|z\|+\|y|z\|-\|x+y|z\|\right)\leq a\|x|z\|+b\|y|z\|-\|ax+by|z\|\leq\max\{a,b\}\left(\|x|z\|+\|y|z\|-\|x+y|z\|\right), \tag{25}$$
for all vectors $x,y,z$ in $X$ and $a,b\in\mathbb{R}_+$.
Proof. If, without loss of generality, we assume that $0\leq a\leq b$, then, writing $ax+by=a(x+y)+(b-a)y$ and using the triangle inequality, we have
$$\|ax+by|z\|\leq a\|x+y|z\|+(b-a)\|y|z\|,$$
so
$$a\|x|z\|+b\|y|z\|-\|ax+by|z\|\geq a\left(\|x|z\|+\|y|z\|-\|x+y|z\|\right)=\min\{a,b\}\left(\|x|z\|+\|y|z\|-\|x+y|z\|\right).$$
Similarly, writing $ax+by=b(x+y)-(b-a)x$, we make the following calculations:
$$\|ax+by|z\|\geq b\|x+y|z\|-(b-a)\|x|z\|,$$
which gives
$$a\|x|z\|+b\|y|z\|-\|ax+by|z\|\leq b\left(\|x|z\|+\|y|z\|-\|x+y|z\|\right)=\max\{a,b\}\left(\|x|z\|+\|y|z\|-\|x+y|z\|\right).$$
In the case $0\leq b\leq a$, we deduce the same results. Therefore, the inequalities of the statement are true.
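A numerical check of inequality (25) in the model of the standard 2-inner product on $\mathbb{R}^n$; a sketch, with `slack` and `mid` our own names for the two sides:

```python
import numpy as np

rng = np.random.default_rng(3)
x, y, z = rng.standard_normal((3, 4))
a, b = 0.4, 2.5  # nonnegative reals

def ip2(u, v, w):
    return (u @ v) * (w @ w) - (u @ w) * (w @ v)

def n2(u, w):
    return ip2(u, u, w) ** 0.5

slack = n2(x, z) + n2(y, z) - n2(x + y, z)       # triangle-inequality slack
mid = a * n2(x, z) + b * n2(y, z) - n2(a * x + b * y, z)

# inequality (25): min{a,b}*slack <= mid <= max{a,b}*slack
assert min(a, b) * slack - 1e-9 <= mid <= max(a, b) * slack + 1e-9
```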
Remark 5. If we replace $x$ by $\frac{x}{\|x|z\|}$ and $y$ by $\frac{y}{\|y|z\|}$ in relation (25), we obtain the following inequality:
$$\min\{a,b\}\left(2-\left\|\frac{x}{\|x|z\|}+\frac{y}{\|y|z\|}\Big|z\right\|\right)\leq a+b-\left\|a\frac{x}{\|x|z\|}+b\frac{y}{\|y|z\|}\Big|z\right\|\leq\max\{a,b\}\left(2-\left\|\frac{x}{\|x|z\|}+\frac{y}{\|y|z\|}\Big|z\right\|\right), \tag{26}$$
for nonzero vectors $x,y,z$ in $X$ and the linearly independent sets of vectors $\{x,z\}$ and $\{y,z\}$.

Corollary 4.
If $X=(X,\|\cdot|\cdot\|)$ is a linear 2-normed space over the field of real numbers $\mathbb{R}$, then we have
$$\frac{\|x|z\|+\|y|z\|-\|x+y|z\|}{\max\{\|x|z\|,\|y|z\|\}}\leq 2-\left\|\frac{x}{\|x|z\|}+\frac{y}{\|y|z\|}\Big|z\right\|\leq\frac{\|x|z\|+\|y|z\|-\|x+y|z\|}{\min\{\|x|z\|,\|y|z\|\}}, \tag{27}$$
for nonzero vectors $x,y,z$ in $X$ and the linearly independent sets of vectors $\{x,z\}$ and $\{y,z\}$.
Proof. For nonzero vectors $x,y,z$ in $X$ and the linearly independent sets of vectors $\{x,z\}$ and $\{y,z\}$, we make the substitutions $a=\frac{1}{\|x|z\|}$ and $b=\frac{1}{\|y|z\|}$ in Theorem 4; since
$$\min\left\{\frac{1}{\|x|z\|},\frac{1}{\|y|z\|}\right\}=\frac{1}{\max\{\|x|z\|,\|y|z\|\}}\quad\text{and}\quad\max\left\{\frac{1}{\|x|z\|},\frac{1}{\|y|z\|}\right\}=\frac{1}{\min\{\|x|z\|,\|y|z\|\}},$$
we obtain the inequalities from (27).

Applications of the Standard 2-Inner Product
If $X=(X,\langle\cdot,\cdot\rangle)$ is an inner product space, then the standard 2-inner product $(\cdot,\cdot|\cdot)$ is defined on $X$ by
$$(x,y|z)=\langle x,y\rangle\langle z,z\rangle-\langle x,z\rangle\langle z,y\rangle,$$
for all $x,y,z\in X$. Then $(X,\|\cdot|\cdot\|)$ becomes a linear 2-normed space, with the 2-norm given by the following:
$$\|x|z\|=\sqrt{\|x\|^2\|z\|^2-|\langle x,z\rangle|^2},$$
for all $x,z\in X$.
(c) Let $X$ be a real linear space with the inner product $\langle\cdot,\cdot\rangle$. The Chebyshev functional [14] is defined by
$$T_z(x,y):=\langle x,y\rangle\langle z,z\rangle-\langle x,z\rangle\langle z,y\rangle,$$
for all $x,y\in X$, where $z\in X$ is a given nonzero vector.
It is easy to see that we have $T_z(x,y)=(x,y|z)$ and $T_z(x,x)\geq 0$, for all $x,y,z\in X$.
If we replace $x$ and $y$ by $x-\frac{\langle x,z\rangle}{\|z\|^2}z$ and $y-\frac{\langle y,z\rangle}{\|z\|^2}z$, $z\neq 0$, in the Cauchy-Schwarz inequality, then we find the Cauchy-Schwarz inequality in terms of the Chebyshev functional, given by:
$$|T_z(x,y)|\leq\sqrt{T_z(x,x)}\,\sqrt{T_z(y,y)}.$$
Let $X$ be a real linear space with the inner product $\langle\cdot,\cdot\rangle$. Equality (17) can be written in terms of the Chebyshev functional as
$$(b-a)\left(aT_z(x,x)-bT_z(y,y)\right)+T_z(ax-by,ax-by)=ab\,T_z(x-y,x-y),$$
for all vectors $x,y$ in $X$, where $z\in X$ is a given nonzero vector and $a,b\in\mathbb{R}$. If $T_z(x,x)=T_z(y,y)$, then
$$T_z(ax-by,ax-by)\geq ab\,T_z(x-y,x-y),$$
for all vectors $x,y$ in $X$, where $z\in X$, $z\neq 0$. (A numerical illustration of these relations is given after item (d) below.)
(d) For every subspace $U\subset X$, we have the decomposition $X=U\oplus U^{\perp}$. Every $x\in X$ can be uniquely written as $x=x_1+u$, where $x_1\in U$ and $u\in U^{\perp}$. We define the orthogonal projection $P_U:X\to X$ by $P_U(x)=x_1$. It is easy to see that $\langle u,x_1\rangle=0$ for every $u\in U^{\perp}$, so we have $\langle P_U(x),u\rangle=0$, which implies the equality $\langle x,u\rangle=\langle u,u\rangle=\|u\|^2$, where the norm $\|\cdot\|$ is generated by the inner product $\langle\cdot,\cdot\rangle$.
From relation (17), if $\|x|z\|=\|y|z\|$, then we have
$$\|ax-by|z\|^2\geq ab\,\|x-y|z\|^2,$$
for vectors $x$ and $y$ in $X$ and $a,b\in\mathbb{R}_+$. For a subspace $U$ of an inner product space $X$ with $x,y\in X$, $z\notin U$, and $(x,y|z)$ the standard 2-inner product on $X$, we deduce an identity in terms of the decompositions $x=P_U(x)+u$ and $y=P_U(y)+v$. Using the equality from (17) together with this identity, we prove a corresponding equality for $\|x|z\|=\|y|z\|$.
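As announced in item (c), the Chebyshev functional and equality (17) written in its terms can be illustrated numerically; a sketch for $X=\mathbb{R}^n$ with NumPy (the helper name `T` is ours):

```python
import numpy as np

rng = np.random.default_rng(4)
x, y, z = rng.standard_normal((3, 5))

def T(u, v, w):
    # Chebyshev functional T_w(u, v) = <u,v><w,w> - <u,w><w,v>,
    # which coincides with the standard 2-inner product (u, v | w)
    return (u @ v) * (w @ w) - (u @ w) * (w @ v)

# Cauchy-Schwarz in terms of the Chebyshev functional
assert T(x, y, z) ** 2 <= T(x, x, z) * T(y, y, z) + 1e-9

# equality (17) written with T_z, for real a, b
a, b = 1.2, -0.5
lhs = (b - a) * (a * T(x, x, z) - b * T(y, y, z)) + T(a * x - b * y, a * x - b * y, z)
rhs = a * b * T(x - y, x - y, z)
assert abs(lhs - rhs) < 1e-9
```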

Applications of the Standard 2-Inner Product to Certain Statistical Indicators
A variety of ways to present data, probability and statistical estimation are mainly characterized by the following statistical indicators: mean (average), variance and standard deviation, as well as covariance and the Pearson correlation coefficient [15].
Taking the mean as the center of a random variable's probability distribution, the variance is a measure of how much the probability mass is spread out around this center.
If $V$ is a random variable with mean $E[V]=\mu_V$, then the formal definition of the variance is the following:
$$\operatorname{Var}(V)=E\left[(V-\mu_V)^2\right].$$
The expression for the variance can thus be expanded:
$$\operatorname{Var}(V)=E[V^2]-\mu_V^2.$$
The covariance is a measure of how much two random variables $V$ and $W$ change together at the same time and is defined as
$$\operatorname{Cov}(V,W)=E\left[(V-\mu_V)(W-\mu_W)\right]=E[VW]-\mu_V\mu_W.$$
We find the inequality of Cauchy-Schwarz for discrete random variables given by
$$\operatorname{Cov}^2(V,W)\leq\operatorname{Var}(V)\operatorname{Var}(W).$$
The correlation between sets of data is a measure of how well they are related. A correlation coefficient is a numerical measure of some type of correlation, meaning a statistical relationship between two variables.
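These definitions and the Cauchy-Schwarz inequality for discrete random variables can be illustrated directly; a sketch for the uniform case $p_i=\frac{1}{n}$, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(5)
V = rng.standard_normal(100)
W = rng.standard_normal(100)

mu_V, mu_W = V.mean(), W.mean()
var_V = np.mean((V - mu_V) ** 2)           # Var(V) = E[(V - mu_V)^2]
var_W = np.mean((W - mu_W) ** 2)
cov_VW = np.mean((V - mu_V) * (W - mu_W))  # Cov(V, W)

# expanded forms: Var(V) = E[V^2] - mu^2, Cov(V,W) = E[VW] - mu_V * mu_W
assert abs(var_V - (np.mean(V**2) - mu_V**2)) < 1e-12
assert abs(cov_VW - (np.mean(V * W) - mu_V * mu_W)) < 1e-12

# Cauchy-Schwarz for discrete random variables: Cov^2 <= Var(V) Var(W)
assert cov_VW**2 <= var_V * var_W + 1e-12
```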
The Pearson correlation coefficient $r(V,W)$ is a measure of the strength and direction of the linear relationship between two variables $V$ and $W$, defined as the covariance of the variables divided by the product of their standard deviations:
$$r(V,W)=\frac{\operatorname{Cov}(V,W)}{\sigma(V)\sigma(W)}.$$
Using the inequality of Cauchy-Schwarz, we deduce that $-1\leq r(V,W)\leq 1$. The variance of a discrete random variable $V=(x_i,p_i)_{1\leq i\leq n}$ with probabilities $P(V=x_i)=p_i=\frac{1}{n}$ for any $i=\overline{1,n}$ is its second central moment, the expected value of the squared deviation from the mean:
$$\operatorname{Var}(V)=\frac{1}{n}\sum_{i=1}^{n}(x_i-\mu_V)^2.$$
Let $x_1,\ldots,x_n$ be real numbers, assume $\gamma_1\leq x_i\leq\Gamma_1$ for all $i=\overline{1,n}$, and let $\mu_V=\frac{1}{n}\sum_{i=1}^{n}x_i$ be the average. In 1935, Popoviciu (see e.g., References [16,17]) proved the following inequality:
$$\operatorname{Var}(V)\leq\frac{1}{4}(\Gamma_1-\gamma_1)^2. \tag{32}$$
The discrete version of the Grüss inequality has the following form (see e.g., References [18,19]):
$$\left|\frac{1}{n}\sum_{i=1}^{n}x_iy_i-\frac{1}{n^2}\sum_{i=1}^{n}x_i\sum_{i=1}^{n}y_i\right|\leq\frac{1}{4}(\Gamma_1-\gamma_1)(\Gamma_2-\gamma_2),$$
where $x_i,y_i$ are real numbers such that $\gamma_1\leq x_i\leq\Gamma_1$ and $\gamma_2\leq y_i\leq\Gamma_2$ for all $i=\overline{1,n}$.
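A numerical check of Popoviciu's inequality (32) and of the discrete Grüss inequality on bounded samples; the bounds $\gamma_1,\Gamma_1,\gamma_2,\Gamma_2$ below are chosen by us for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.uniform(-1.0, 3.0, size=50)   # gamma_1 = -1, Gamma_1 = 3
y = rng.uniform(0.0, 2.0, size=50)    # gamma_2 = 0,  Gamma_2 = 2
g1, G1, g2, G2 = -1.0, 3.0, 0.0, 2.0

var_x = np.mean((x - x.mean()) ** 2)

# Popoviciu (32): Var <= (Gamma_1 - gamma_1)^2 / 4
assert var_x <= (G1 - g1) ** 2 / 4

# discrete Gruss: |mean(xy) - mean(x)mean(y)| <= (G1-g1)(G2-g2)/4
gruss = abs(np.mean(x * y) - x.mean() * y.mean())
assert gruss <= (G1 - g1) * (G2 - g2) / 4
```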

From the relation
$$\operatorname{Cov}(V,W)=\frac{1}{n}\sum_{i=1}^{n}x_iy_i-\frac{1}{n^2}\sum_{i=1}^{n}x_i\sum_{i=1}^{n}y_i,$$
and using the inequality of Cauchy-Schwarz for discrete random variables, $|\operatorname{Cov}(V,W)|\leq\sqrt{\operatorname{Var}(V)\operatorname{Var}(W)}$, together with inequality (32), we obtain a proof of Grüss's inequality. Bhatia and Davis show in Reference [16] the following inequality:
$$\operatorname{Var}(V)\leq(\Gamma_1-\mu_V)(\mu_V-\gamma_1).$$
The inequality of Bhatia and Davis represents an improvement of Popoviciu's inequality, because $(\Gamma_1-\mu_V)(\mu_V-\gamma_1)\leq\frac{1}{4}(\Gamma_1-\gamma_1)^2$. Therefore, we first have an improvement of Grüss's inequality, given by the following relation:
$$|\operatorname{Cov}(V,W)|\leq\sqrt{(\Gamma_1-\mu_V)(\mu_V-\gamma_1)(\Gamma_2-\mu_W)(\mu_W-\gamma_2)}\leq\frac{1}{4}(\Gamma_1-\gamma_1)(\Gamma_2-\gamma_2).$$
In Reference [18], we find some research on refining the Grüss inequality.
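The Bhatia-Davis bound and the resulting sharpening of Grüss's inequality can be checked on the same kind of data; a sketch, with illustrative bounds of our choosing:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(-1.0, 3.0, size=50)
y = rng.uniform(0.0, 2.0, size=50)
g1, G1, g2, G2 = -1.0, 3.0, 0.0, 2.0
mx, my = x.mean(), y.mean()

var_x = np.mean((x - mx) ** 2)
cov_xy = np.mean((x - mx) * (y - my))

# Bhatia-Davis: Var(V) <= (Gamma_1 - mu)(mu - gamma_1) <= (Gamma_1 - gamma_1)^2 / 4
bd_x = (G1 - mx) * (mx - g1)
assert var_x <= bd_x <= (G1 - g1) ** 2 / 4

# the resulting improvement of Gruss's inequality
bd_y = (G2 - my) * (my - g2)
assert abs(cov_xy) <= (bd_x * bd_y) ** 0.5 <= (G1 - g1) * (G2 - g2) / 4
```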
The Pearson correlation coefficient is given by
$$r(V,W)=\frac{\sum_{i=1}^{n}(x_i-\mu_V)(y_i-\mu_W)}{\sqrt{\sum_{i=1}^{n}(x_i-\mu_V)^2}\,\sqrt{\sum_{i=1}^{n}(y_i-\mu_W)^2}}.$$
Florea and Niculescu, in Reference [20], treated the problem of estimating the deviation of the values of a function from its mean value. The estimation of the deviation of a function from its mean value is characterized below.
We denote by $R([a,b])$ the space of Riemann-integrable functions on the interval $[a,b]$, and by $C^0([a,b])$ the space of real-valued continuous functions on the interval $[a,b]$.
The integral arithmetic mean of a Riemann-integrable function $f:[a,b]\to\mathbb{R}$ is the number
$$M(f)=\frac{1}{b-a}\int_a^b f(x)\,dx.$$
If $f$ is a Riemann-integrable function, we denote by
$$\operatorname{var}(f)=M\left((f-M(f))^2\right)=\frac{1}{b-a}\int_a^b\left(f(x)-M(f)\right)^2dx$$
the variance of $f$. The expression for the variance of $f$ can be expanded in this way:
$$\operatorname{var}(f)=M(f^2)-M^2(f)=\frac{1}{b-a}\int_a^b f^2(x)\,dx-\left(\frac{1}{b-a}\int_a^b f(x)\,dx\right)^2.$$
In the same way, for a Riemann-integrable weight $h\geq 0$ with $\int_a^b h(x)\,dx>0$, we define the $h$-variance of a Riemann-integrable function $f$ by
$$\operatorname{var}_h(f)=M_h\left((f-M_h(f))^2\right),\quad\text{where}\quad M_h(f)=\frac{\int_a^b f(x)h(x)\,dx}{\int_a^b h(x)\,dx}.$$
The expression for the $h$-variance can thus be expanded, which gives another form of the $h$-variance:
$$\operatorname{var}_h(f)=M_h(f^2)-M_h^2(f).$$
In Reference [21], Aldaz showed a refinement of the AM-GM inequality and used in its proof $\operatorname{var}(f^{1/2})$, which is a measure of the dispersion of $f^{1/2}$ about its mean value and is, in fact, comparable to the variance.
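The integral mean, the variance of $f$ and its expanded form can be approximated on a uniform grid; a sketch for the sample function $f(t)=t^2$ on $[0,1]$, where the exact values are $M(f)=\frac{1}{3}$ and $\operatorname{var}(f)=\frac{4}{45}$:

```python
import numpy as np

a, b = 0.0, 1.0
t = np.linspace(a, b, 200_001)
f = t**2                                   # a sample Riemann-integrable function

# on a uniform grid, the integral mean M(f) = (1/(b-a)) * int_a^b f
# is approximated by the sample mean
M = f.mean()
var_f = ((f - M) ** 2).mean()              # var(f) = M((f - M(f))^2)

# expanded form: var(f) = M(f^2) - M(f)^2
assert abs(var_f - ((f**2).mean() - M**2)) < 1e-9

# agreement with the exact values 1/3 and 4/45
assert abs(M - 1 / 3) < 1e-4 and abs(var_f - 4 / 45) < 1e-4
```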
The covariance is a measure of how much two Riemann-integrable functions change together at the same time and is defined as
$$\operatorname{cov}(f,g)=M\left((f-M(f))(g-M(g))\right),$$
which is equivalent to the form
$$\operatorname{cov}(f,g)=M(fg)-M(f)M(g)=\frac{1}{b-a}\int_a^b f(x)g(x)\,dx-\frac{1}{(b-a)^2}\int_a^b f(x)\,dx\int_a^b g(x)\,dx.$$
In fact, the covariance is the Chebyshev functional attached to the functions $f$ and $g$; in Reference [22], it is written as $T(f,g)$. The properties of the Chebyshev functional have been studied by Elezović, Marangunić and Pečarić in Reference [19].
The $h$-covariance is a measure of how much two Riemann-integrable functions change together and is defined as
$$\operatorname{cov}_h(f,g)=M_h(fg)-M_h(f)M_h(g).$$
In Reference [23], Pečarić used the generalization of the Chebyshev functional attached to the functions $f$ and $g$, namely the Chebyshev $h$-functional attached to $f$ and $g$, defined by $T(f,g;h)$.
There, Pečarić showed some generalizations of the inequality of Grüss by means of the Chebyshev $h$-functional. It is easy to see that, in terms of covariance, it can be written as $T(f,g;h)=\operatorname{cov}_h(f,g)$.
In terms of covariance, the inequality of Grüss becomes
$$|\operatorname{cov}(f,g)|\leq\frac{1}{4}(\Gamma_1-\gamma_1)(\Gamma_2-\gamma_2),$$
where $\gamma_1\leq f\leq\Gamma_1$ and $\gamma_2\leq g\leq\Gamma_2$. In terms of the Chebyshev functional, the inequality of Grüss becomes
$$|T(f,g)|\leq\frac{1}{4}(\Gamma_1-\gamma_1)(\Gamma_2-\gamma_2).$$
Next, using the notion of the standard 2-inner product, we extend the above concepts to vectors of $\mathbb{R}^n$. Recall that if $X=(X,\langle\cdot,\cdot\rangle)$ is an inner product space, then the standard 2-inner product $(\cdot,\cdot|\cdot)$ is defined on $X$ by
$$(x,y|z)=\langle x,y\rangle\langle z,z\rangle-\langle x,z\rangle\langle z,y\rangle,$$
for all $x,y,z\in X$, and $(X,\|\cdot|\cdot\|)$ becomes a linear 2-normed space, with the 2-norm given by $\|x|z\|=\sqrt{\|x\|^2\|z\|^2-|\langle x,z\rangle|^2}$, for all $x,z\in X$. Now, we take the vector space $(\mathbb{R}^n,\langle\cdot,\cdot\rangle)$. For $x=(x_1,x_2,\ldots,x_n)$, $y=(y_1,y_2,\ldots,y_n)$, $z=(z_1,z_2,\ldots,z_n)$, we have $\langle x,y\rangle=x_1y_1+x_2y_2+\ldots+x_ny_n$. In Reference [14], Niezgoda studied certain orthoprojectors. The operator $P_z:X\to X$ defined by
$$P_z(x)=\frac{\langle x,z\rangle}{\|z\|^2}z$$
is the orthoprojector from $X$ onto $\operatorname{span}\{z\}$. If $e=\frac{u}{\|u\|}$, where $u=(1,1,\ldots,1)\in\mathbb{R}^n$, then the average of the vector $x$ is
$$\mu_x=\left\langle\frac{x}{\|u\|},e\right\rangle=\frac{1}{n}\sum_{i=1}^{n}x_i.$$
Therefore, in $(\mathbb{R}^n,\langle\cdot,\cdot\rangle)$, we define the variance of a vector $x$ by
$$\operatorname{var}(x):=\left\|\frac{x}{\|u\|}\Big|e\right\|^2=\frac{1}{n}\sum_{i=1}^{n}x_i^2-\mu_x^2.$$
The standard deviation $\sigma(x)$ of $x\in\mathbb{R}^n$ is defined by $\sigma(x):=\sqrt{\operatorname{var}(x)}$, so we deduce that
$$\sigma(x)=\left\|\frac{x}{\|u\|}\Big|e\right\|=\frac{1}{\|u\|}\left\|x-P_u(x)\right\|.$$
Since, using the standard 2-inner product, we have
$$\left(\frac{x}{\|u\|},\frac{y}{\|u\|}\Big|e\right)=\frac{1}{n}\sum_{i=1}^{n}x_iy_i-\mu_x\mu_y,$$
it is easy to define the covariance of two vectors $x$ and $y$ by
$$\operatorname{cov}(x,y):=\left(\frac{x}{\|u\|},\frac{y}{\|u\|}\Big|e\right).$$
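The identification of the average, variance and covariance with 2-inner product expressions can be verified numerically; a sketch, with `ip2` our name for the standard 2-inner product:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 6
x = rng.standard_normal(n)
y = rng.standard_normal(n)
u = np.ones(n)
e = u / np.linalg.norm(u)

def ip2(p, q, r):
    # standard 2-inner product (p, q | r) on R^n
    return (p @ q) * (r @ r) - (p @ r) * (r @ q)

nu = np.linalg.norm(u)                 # ||u|| = sqrt(n)
mu_x = (x / nu) @ e                    # average: mu_x = < x/||u||, e >
var_x = ip2(x / nu, x / nu, e)         # variance via the generated 2-norm
cov_xy = ip2(x / nu, y / nu, e)        # covariance via the 2-inner product

assert abs(mu_x - x.mean()) < 1e-12
assert abs(var_x - np.mean((x - x.mean()) ** 2)) < 1e-12
assert abs(cov_xy - np.mean((x - x.mean()) * (y - y.mean()))) < 1e-12
```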
The correlation coefficient $r(x,y)$ of two vectors $x$ and $y$ can be defined by:
$$r(x,y):=\frac{\operatorname{cov}(x,y)}{\sigma(x)\sigma(y)}.$$
Another definition of the variance and covariance of vectors from $\mathbb{R}^n$ can be made using projections. Vector projection is an important operation in the Gram-Schmidt orthonormalization of vector space bases.
The projection of a vector $x$ onto a vector $y$ is given by
$$P_y(x)=\left\langle x,\frac{y}{\|y\|}\right\rangle\frac{y}{\|y\|}=\frac{\langle x,y\rangle}{\|y\|^2}y.$$
If in $(\mathbb{R}^n,\langle\cdot,\cdot\rangle)$ we have the vector $u=(1,1,\ldots,1)$, then
$$P_u(x)=\frac{\langle x,u\rangle}{\|u\|^2}u=\mu_x u=(\mu_x,\mu_x,\ldots,\mu_x).$$
We remark that the variance of a vector $x$ is given by
$$\operatorname{var}(x)=\frac{1}{\|u\|^2}\left\|x-P_u(x)\right\|^2$$
and the covariance of two vectors $x$ and $y$ is given by
$$\operatorname{cov}(x,y)=\frac{1}{\|u\|^2}\left\langle x-P_u(x),y-P_u(y)\right\rangle.$$
Next, we can write some equalities and inequalities, using several results from Section 2, related to the variance, covariance and standard deviation of vectors $x,y\in X$. Therefore, from relations (8), (10), (11), (15), (18)-(20), (22), (25), we obtain corresponding relations for all $x,y\in X$ with $\sigma(x),\sigma(y)\neq 0$ and $a,b\in\mathbb{R}$, among which
$$\min\{a,b\}\left(\sigma(x)+\sigma(y)-\sigma(x+y)\right)\leq a\sigma(x)+b\sigma(y)-\sigma(ax+by)\leq\max\{a,b\}\left(\sigma(x)+\sigma(y)-\sigma(x+y)\right),$$
for all $x,y\in X$ and $a,b\in\mathbb{R}_+$. If we take the vector space $(C^0[a,b],\langle\cdot,\cdot\rangle)$ with $\langle f,g\rangle=\int_a^b f(t)g(t)\,dt$, where $a,b\in\mathbb{R}$, $a<b$, then for $f,g,h\in C^0[a,b]$ we obtain analogous relations. Consider now vectors $w,x\in\mathbb{R}^n$ related by $w=ax+bu$, where $a,b\in\mathbb{R}$. By dividing by $\|u\|\neq 0$, we deduce the relation $a\frac{x}{\|u\|}+be=\frac{w}{\|u\|}$, where $e=\frac{u}{\|u\|}$, so $\|e\|=1$. Therefore, we obtain
$$a=\frac{\left(\frac{w}{\|u\|},\frac{x}{\|u\|}\Big|e\right)}{\left\|\frac{x}{\|u\|}\Big|e\right\|^2}\quad\text{and}\quad b=\left\langle\frac{w}{\|u\|},e\right\rangle-a\left\langle\frac{x}{\|u\|},e\right\rangle.$$
If $X=\mathbb{R}^n$, then $a=\frac{\operatorname{cov}(w,x)}{\operatorname{var}(x)}$ and $b=\mu_w-a\mu_x$. In statistics, linear regression is a linear approach to modelling the relationship between a dependent variable and one or more independent variables. The case of one independent variable is called simple linear regression.
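The projection formulas and the coefficients $a=\frac{\operatorname{cov}(w,x)}{\operatorname{var}(x)}$, $b=\mu_w-a\mu_x$ can be checked against an ordinary least squares solver; a sketch on synthetic data of our choosing:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 8
x = rng.standard_normal(n)
w = 2.0 * x + 1.0 + 0.1 * rng.standard_normal(n)   # noisy linear relation
u = np.ones(n)

def proj(p, q):
    # projection of p onto q: P_q(p) = (<p,q>/||q||^2) q
    return (p @ q) / (q @ q) * q

# P_u(x) is the constant vector (mu_x, ..., mu_x)
assert np.allclose(proj(x, u), x.mean() * u)

var_x = (np.linalg.norm(x - proj(x, u)) / np.linalg.norm(u)) ** 2
cov_wx = ((w - proj(w, u)) @ (x - proj(x, u))) / (u @ u)

# regression coefficients a = cov(w,x)/var(x), b = mu_w - a*mu_x
a = cov_wx / var_x
b = w.mean() - a * x.mean()
print(a, b)   # close to 2.0 and 1.0

# agreement with the ordinary least squares solution
A = np.column_stack([x, u])
a_ls, b_ls = np.linalg.lstsq(A, w, rcond=None)[0]
assert np.allclose([a, b], [a_ls, b_ls])
```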
We consider two random variables $V=(x_i,\frac{1}{n})_{1\leq i\leq n}$ and $W=(y_i,\frac{1}{n})_{1\leq i\leq n}$, with probabilities $P(V=x_i)=\frac{1}{n}$ and $P(W=y_i)=\frac{1}{n}$, for any $i=\overline{1,n}$. A linear regression model assumes that the relationship between the dependent variable $W$ and the independent variable $V$ is linear. Thus, the general linear model for one independent variable may be written as $W=aV+b$. We can describe the underlying relationship between $y_i$ and $x_i$ involving the error term $\epsilon_i$ by $\epsilon_i=y_i-ax_i-b$.
If we set $S(a,b)=\sum_{i=1}^{n}\epsilon_i^2=\sum_{i=1}^{n}(y_i-ax_i-b)^2$, then we seek $\min_{a,b\in\mathbb{R}}S(a,b)$. Setting the partial derivatives of $S$ with respect to $a$ and $b$ equal to zero (the least squares method), we obtain the normal equations
$$a\sum_{i=1}^{n}x_i+nb=\sum_{i=1}^{n}y_i\quad\text{and}\quad a\sum_{i=1}^{n}x_i^2+b\sum_{i=1}^{n}x_i=\sum_{i=1}^{n}x_iy_i.$$
By simple calculations, we deduce
$$a=\frac{\operatorname{Cov}(V,W)}{\operatorname{Var}(V)}\quad\text{and}\quad b=E(W)-aE(V),$$
so we obtain the same coefficients as above.
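Finally, the normal equations and the closed-form coefficients can be compared directly; a sketch on synthetic data (the sample values are ours):

```python
import numpy as np

rng = np.random.default_rng(10)
n = 30
xs = rng.standard_normal(n)
ys = 0.8 * xs - 0.3 + 0.05 * rng.standard_normal(n)

# normal equations of min S(a,b) = sum (y_i - a*x_i - b)^2:
#   a*sum(x)   + n*b      = sum(y)
#   a*sum(x^2) + b*sum(x) = sum(x*y)
M = np.array([[xs.sum(), n], [(xs**2).sum(), xs.sum()]])
rhs = np.array([ys.sum(), (xs * ys).sum()])
a, b = np.linalg.solve(M, rhs)

# closed form: a = Cov(V,W)/Var(V), b = E(W) - a*E(V)
cov = np.mean((xs - xs.mean()) * (ys - ys.mean()))
var = np.mean((xs - xs.mean()) ** 2)
assert np.allclose([a, b], [cov / var, ys.mean() - (cov / var) * xs.mean()])
```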
Funding: This research received no external funding.