Symmetric weights for OWA operators prioritizing intermediate values. The EVR-OWA operator

One of the most widely adopted approaches to define weights for Ordered Weighted Averaging (OWA) operators consists of using biparametric linear increasing fuzzy linguistic quantifiers. However, several shortcomings appear when using these quantifiers: depending on the values of the parameters, the aggregations could be biased or the extreme values might be completely ignored. In this contribution, the use of Extreme Values Reductions (EVRs) as fuzzy linguistic quantifiers is proposed to define weights for OWA operators in order to provide more realistic aggregations. First, the impact of the parameters of these linear fuzzy linguistic quantifiers on the OWA aggregations is studied. After that, EVR-OWA operators are introduced as those OWA operators whose weights are computed by using an EVR as fuzzy linguistic quantifier. It is shown that, when using EVR-OWA operators to fuse information, the aggregations are non-biased, take into account more information, and prioritize the intermediate values over the extreme ones. After proposing several families of EVRs, the generalizing potential of the EVR-OWA operators is shown by proving that every family of symmetric weights for OWA operators that prioritizes the intermediate information is obtained from a certain EVR. Finally, an illustrative example is provided. © 2021 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).


Introduction
Several real-world problems demand the fusion of information or expert knowledge which might be fuzzy or imprecise. For instance, according to [13], Group Decision Making (GDM) problems require an aggregation phase, which combines the experts' preferences to obtain a collective opinion, before carrying out the exploitation phase, in which the alternatives are ranked in order to select the best one as the solution of the decision problem. Even though many aggregation operators have been proposed in the literature [1,2], one of the most widely used is the Ordered Weighted Averaging (OWA) operator, which assigns weights to the input values according to their order [18,20,21]. To compute these weights, among other proposals [7,11,17], it is common to use the method proposed by Yager [21], which is based on a biparametric family of linear fuzzy linguistic quantifiers [20] that assign zero to the values that are close to zero and one to the values that are close to one.
This approach is simple and effective, but it presents important drawbacks regarding the selection of the necessary parameters. For instance, the aggregations could produce biased results (orness measure [2] not equal to 0.5) or fail to aggregate enough information (low entropy measure [2]). In addition, the OWA operator constructed from these linear fuzzy linguistic quantifiers completely ignores the most extreme values in the aggregation process, which could lead to non-realistic aggregations. These biased or non-realistic aggregations could be a major inconvenience in many real-world applications. For instance, in consensus processes for GDM [6,9], an OWA operator whose orness is greater than 0.5 tends to prioritize extreme values close to 1 over those close to 0, which is not reasonable because all of them are equally important. Furthermore, a theoretical consensus reached by completely ignoring the most extreme values would not be realistic; in fact, it has been proved that the less extreme information has a cohesive effect and facilitates the agreement among experts [14,15]. Therefore, it seems reasonable to provide new ways of generating OWA weights which prioritize the intermediate information over the extreme data, as linear fuzzy linguistic quantifiers do, but taking into account more information in the aggregation process and avoiding biased aggregations in the results.
This work aims at solving three different problems.
i) First, it is necessary to clarify the relation between the biparametric family of linear fuzzy linguistic quantifiers [20] and the way that these quantifiers fuse the information.
ii) A new proposal is then required to deal with the limitations of these linear fuzzy linguistic quantifiers, keeping their simplicity and applicability [7] but generating weights which prioritize the intermediate information in a non-biased way.
iii) Finally, the abstract conditions required to generate the aforementioned weights are discussed.
Therefore, we raise the following research questions:
RQ1: How do the parameters of the linear fuzzy linguistic quantifiers impact the aggregation of information?
RQ2: How can information be fused in a more realistic way than by using the linear fuzzy linguistic quantifier?
RQ3: What properties are shared by those linguistic quantifiers whose associated OWA weights fuse information symmetrically by prioritizing the intermediate values?
Consequently, this proposal analyzes the impact of the parameters of these widely extended linear fuzzy linguistic quantifiers [20] on the aggregation of information and shows their main shortcomings. We then propose a novel abstract extension of OWA operators which uses an Extreme Values Reduction (EVR) [4] as a fuzzy linguistic quantifier instead of the traditional linear one. The resulting operator is characterized by assigning weights to information depending on its degree of polarization, so that the most important values are the intermediate ones and their importance (weight) progressively decreases towards the most extreme values. In addition, this novel EVR-OWA operator provides a non-biased way to fuse information which takes into account almost as much information as the arithmetic mean operator, which is the one with the highest entropy measure [2]. Furthermore, the generalizing potential of this EVR approach is highlighted by showing that any family of positive symmetric weights which prioritize the intermediate information, like the ones studied in [17], are actually EVR-OWA weights.
The structure of this contribution is as follows: Section 2 provides a brief review of OWA operators and fuzzy linguistic quantifiers. In Section 3, the impact of the parameters of the linear fuzzy linguistic quantifier on the aggregation of information is analyzed and the main shortcomings of this approach are exposed. Section 4 introduces the main proposal of this contribution, the EVR-OWA operator, and studies some of its main properties. Section 5 completes this proposal by providing several families of EVRs. In Section 6, the relation between symmetric weights for OWA operators and EVRs is studied. In Section 7, an illustrative example of aggregation using the EVR-OWA operator is developed. Section 8 summarizes the main contributions of this work. Finally, Section 9 concludes this contribution.

Preliminaries
This section provides a brief review of OWA operators [20] and Yager's method [21] to compute their weights with a fuzzy linguistic quantifier, which is the starting point of our proposal. Finally, the notion of EVR [4] is introduced.

Ordered weighted averaging operators
The Ordered Weighted Averaging (OWA) operators [20] are a family of aggregation functions which generalize the notion of arithmetic mean.

Definition 1 (OWA operator [20]). Let $W = (w_1, w_2, \ldots, w_m)$ be a weighting vector, i.e. $w_k \geq 0$ for every $k$ and $\sum_{k=1}^{m} w_k = 1$. The OWA operator $\Phi_W : [0,1]^m \to [0,1]$ associated with $W$ is given by

$$\Phi_W(x_1, \ldots, x_m) = \sum_{k=1}^{m} w_k x_{\sigma(k)},$$

where $\sigma$ is a permutation of $(1, 2, \ldots, m)$ such that $x_{\sigma(1)} \geq x_{\sigma(2)} \geq \cdots \geq x_{\sigma(m)}$.

OWA operators have several remarkable properties: they are idempotent non-decreasing functions which are continuous, symmetric, homogeneous and shift-invariant [2].
There are different measures to study the behavior of an OWA operator. Among the most extended are the arithmetic mean and the standard deviation of the weights. Other useful measures are the orness and the entropy measures [2].
The orness measure of an OWA operator quantifies the similarity between this OWA operator and the maximum operator. It is given by

$$\mathrm{orness}(\Phi_W) = \frac{1}{m-1} \sum_{k=1}^{m} (m-k)\, w_k.$$

When the weights are generated from a RIM quantifier $Q$ and $m$ is large enough [10], $\mathrm{orness}(\Phi_W) \approx \int_0^1 Q(x)\,dx$. When the coordinates of the weighting vector are increasing, i.e. $w_1 \leq w_2 \leq \cdots \leq w_m$, then $\mathrm{orness}(\Phi_W) \in [0, \frac{1}{2}]$ [2]. In particular, the orness measure for the arithmetic mean operator, in which all the weights are equal, is $\frac{1}{2}$. The entropy measure, or simply entropy, quantifies how much information is taken into account during the aggregation process. It is given by

$$E(W) = -\sum_{k=1}^{m} w_k \ln w_k.$$

If no orness measure is specified, the weighting vector which maximizes the entropy is the one associated with the arithmetic mean operator [2].
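The two measures just described can be computed directly; a minimal sketch (the helper names are ours):

```python
import math

def orness(w):
    """Orness of an OWA weighting vector: (1/(m-1)) * sum_k (m-k)*w_k.
    It equals 1 for the maximum operator, 0 for the minimum and 0.5 for
    the arithmetic mean."""
    m = len(w)
    return sum((m - k) * wk for k, wk in enumerate(w, start=1)) / (m - 1)

def entropy(w):
    """Entropy (dispersion) of the weights: -sum_k w_k * ln(w_k),
    with the convention 0 * ln(0) = 0."""
    return -sum(wk * math.log(wk) for wk in w if wk > 0)

uniform = [0.25] * 4            # arithmetic mean operator, m = 4
print(orness(uniform))          # 0.5
print(entropy(uniform))         # ln(4), the maximum for m = 4
print(orness([1.0, 0.0, 0.0]))  # 1.0: the maximum operator
```

As the last line illustrates, putting all the weight on the first ordered position recovers the maximum operator, whose entropy is zero.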

Using fuzzy linguistic quantifiers to compute weights
Among other proposals [5] to compute OWA weights, Yager proposed the use of fuzzy linguistic quantifiers [22] to obtain the weights for OWA operators [21].
Fuzzy linguistic quantifiers are fuzzy subsets $Q : [0,1] \to [0,1]$ of the unit interval, and they were classified by Yager [19] into regular increasing monotone (RIM), regular decreasing monotone (RDM) and regular unimodal (RUM) quantifiers. In particular, a RIM quantifier satisfies $Q(0) = 0$, $Q(1) = 1$ and $Q(x) \leq Q(y)$ whenever $x \leq y$. Yager introduced the use of RIM quantifiers for generating the weights [21] according to the following formula:

$$w_k = Q\left(\frac{k}{m}\right) - Q\left(\frac{k-1}{m}\right), \quad k = 1, 2, \ldots, m.$$

The OWA operators based on linear fuzzy linguistic quantifiers have been widely used in the literature [12]. One of the most common approaches consists of using the linear RIM quantifier $Q_{a,b} : [0,1] \to [0,1]$ given by

$$Q_{a,b}(x) = \begin{cases} 0 & \text{if } x \leq a, \\ \dfrac{x-a}{b-a} & \text{if } a < x < b, \\ 1 & \text{if } x \geq b, \end{cases}$$

where $a < b$ are two parameters in the interval $[0,1]$. Several consensus models for GDM problems [6,10] have used this method to compute the weights of their aggregations, since it allows the importance of the intermediate information to be adjusted.
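Yager's weight-generation scheme and the linear RIM quantifier can be sketched as follows (function names are ours):

```python
def linear_rim(a, b):
    """Linear RIM quantifier Q_{a,b}: 0 up to a, linear on [a, b], 1 from b on."""
    def Q(x):
        if x <= a:
            return 0.0
        if x >= b:
            return 1.0
        return (x - a) / (b - a)
    return Q

def yager_weights(Q, m):
    """Yager's method: w_k = Q(k/m) - Q((k-1)/m), k = 1, ..., m."""
    return [Q(k / m) - Q((k - 1) / m) for k in range(1, m + 1)]

w = yager_weights(linear_rim(0.3, 0.7), 5)
print(w)  # the two most extreme positions receive weight 0
```

For $a = 0.3$, $b = 0.7$ and $m = 5$, the first and last weights are already zero, anticipating the drawback discussed in the next section.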

Extreme values reductions
García-Zamora et al. [4] studied the effect of remapping experts' preferences by using non-linear scales in consensus models for GDM. To do that, the notion of EVR was introduced among the automorphisms of the interval $[0,1]$, i.e. strictly increasing bijections which satisfy the boundary conditions $D(0) = 0$ and $D(1) = 1$. These functions are characterized by reducing the distance between the values which are close to 0 and 1. Formally:

Definition 2 (Extreme Values Reduction [4]). Let $\hat{D} : [0,1] \to [0,1]$ be a function satisfying:
1. $\hat{D}$ is a function of class $C^1$, i.e. it is differentiable and its derivative is continuous,
2. $\hat{D}$ is an increasing automorphism of $[0,1]$,
3. $\hat{D}(1-x) = 1 - \hat{D}(x)$ for every $x \in [0,1]$,
4. $\hat{D}$ is convex in a neighborhood of 0 and concave in a neighborhood of 1.

Then $\hat{D}$ is called an Extreme Values Reduction (or EVR) on the interval $[0,1]$. It was shown in [4] that these EVR functions satisfy the following properties.
Theorem 1 [4]. Let $\hat{D}$ be an EVR on $[0,1]$. Then:
1. The function $d_{\hat{D}}(x,y) = |\hat{D}(x) - \hat{D}(y)|$ is a restricted dissimilarity [3] and the function $S_{\hat{D}}(x,y) = 1 - |\hat{D}(x) - \hat{D}(y)|$ is a restricted equivalence function [3].
2. We can find three intervals $I_1, I_2, I_3 \subseteq [0,1]$ such that $0 \in I_1$, $1 \in I_3$ and $I_1 < I_2 < I_3$, satisfying that $|\hat{D}(y) - \hat{D}(x)| < |y - x|$ for all $x, y \in I_1$ and all $x, y \in I_3$, whereas $|\hat{D}(y) - \hat{D}(x)| > |y - x|$ for all $x, y \in I_2$.
3. The graph of $\hat{D}$ is under the diagonal of the square $[0,1] \times [0,1]$ for values close enough to 0, and it is over the same diagonal for those values close enough to 1.
4. There exist a neighborhood $U_0$ containing 0 and a neighborhood $U_1$ containing 1 such that, for every $x, y \in U_0$ with $x < y$, there exists $h_0 > 0$ such that $|\hat{D}(x-t) - \hat{D}(y-t)| \leq |\hat{D}(x) - \hat{D}(y)|$ holds for any $t \in [0, h_0]$, and, for every $x, y \in U_1$ with $x < y$, there exists $h_1 > 0$ such that $|\hat{D}(x+t) - \hat{D}(y+t)| \leq |\hat{D}(x) - \hat{D}(y)|$ holds for any $t \in [0, h_1]$.

Note that EVRs deform the unit interval in a very particular way. According to the second thesis of the previous theorem, when using an EVR the distances between extreme values (those which are close to 0 or 1) are decreased, whereas the distances between certain intermediate values are increased. In addition, the fourth thesis of that result guarantees that the distances among extreme values are progressively reduced when the values become closer to 0 or 1.

On the drawbacks of the linear RIM quantifier for computing OWA weights
In this section, the implications of choosing the weighting vector by means of the linear RIM quantifier defined in the previous section are analyzed.
Suppose $0 < a < b < 1$ and let $w_k = Q_{a,b}(k/m) - Q_{a,b}((k-1)/m)$. For any $k \leq am$ we obtain $w_k = 0$, and when $k \geq mb + 1$ we also get $w_k = 0$. If $k > ma$ but $k - 1 \leq ma$, we obtain

$$w_k = \frac{k/m - a}{b - a}.$$

In the case $mb \leq k$ and $k - 1 < mb$:

$$w_k = \frac{b - (k-1)/m}{b - a}.$$

The remaining case $ma + 1 < k < mb$ reduces to

$$w_k = \frac{1}{m(b-a)}.$$

The previous discussion is summarized in the following result.
Proposition 1. The weights for the $m$-dimensional OWA operator obtained from the linear RIM quantifier $Q_{a,b}$ are given by

$$w_k = \begin{cases} 0 & \text{if } k \leq ma \text{ or } k \geq mb + 1, \\[4pt] \dfrac{k/m - a}{b-a} & \text{if } k - 1 \leq ma < k, \\[4pt] \dfrac{b - (k-1)/m}{b-a} & \text{if } k - 1 < mb \leq k, \\[4pt] \dfrac{1}{m(b-a)} & \text{otherwise}, \end{cases}$$

where $k = 1, 2, \ldots, m$.
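The piecewise closed form can be cross-checked numerically against the direct Yager computation; the sketch below encodes our reading of the case analysis (the boundary positions are the ones to watch):

```python
def q_ab(a, b, x):
    """Linear RIM quantifier Q_{a,b}."""
    return 0.0 if x <= a else 1.0 if x >= b else (x - a) / (b - a)

def weight_direct(a, b, m, k):
    """w_k via Yager's formula: Q(k/m) - Q((k-1)/m)."""
    return q_ab(a, b, k / m) - q_ab(a, b, (k - 1) / m)

def weight_closed_form(a, b, m, k):
    """Piecewise expression of the case analysis (our reconstruction)."""
    if k <= m * a or k >= m * b + 1:
        return 0.0
    if k - 1 <= m * a < k:                 # left boundary position
        return (k / m - a) / (b - a)
    if k - 1 < m * b <= k:                 # right boundary position
        return (b - (k - 1) / m) / (b - a)
    return 1 / (m * (b - a))               # interior positions

a, b, m = 0.25, 0.65, 8
for k in range(1, m + 1):
    assert abs(weight_direct(a, b, m, k) - weight_closed_form(a, b, m, k)) < 1e-9
```

For these parameters the weights are $(0, 0, 0.3125, 0.3125, 0.3125, 0.0625, 0, 0)$: most non-null weights are the constant $\frac{1}{m(b-a)}$, with a single boundary exception.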
Note that for any $k \leq ma$ all the weights $w_k$ are zero, and the same occurs when $k \geq mb + 1$. In terms of the OWA operator associated with these weights, this fact means that the operator ignores the first $ma$ values and the last $m - mb$ values (approximately). In other words, the greater $a$, the fewer top-ranked values are considered for the OWA aggregation. In the same way, the smaller $b$, the fewer bottom-ranked values are considered. Depending on the choice of $a$ and $b$, the number of ignored values could be high enough to render non-realistic any aggregation based on this linear RIM quantifier. Keep in mind that OWA operators order the values to aggregate before applying the weights, so the ignored information corresponds to the most extreme values (polarized, if polarization exists in such a set of values) among the elements to be aggregated. In addition, if non-biased aggregations are required, the orness measure of the corresponding OWA operator must be equal to 0.5. For instance, if the orness measure is greater than this value, the aggregation swings towards the maximum, giving more importance to the values greater than the median of the elements which are being aggregated. The following result provides a relation between the parameters $a$ and $b$ which characterizes non-biased aggregations.

Proposition 2. The orness measure of the OWA operator obtained from the linear RIM quantifier $Q_{a,b}$ is equal to 0.5 if and only if $a + b = 1$.

Proof. According to [2], the orness measure of an OWA operator equals 0.5 if and only if the associated weights are symmetric. It is clear that if $a + b = 1$, the weights provided by the linear RIM quantifier $Q_{a,b}$ are symmetric, and consequently the orness measure equals 0.5.
To prove the converse, fix $m \in \mathbb{N}$ and $0 \leq a < b \leq 1$, and consider $k_1 = \lfloor ma \rfloor + 1$ and $k_2 = \lceil mb \rceil$, where $\lceil \cdot \rceil : \mathbb{R} \to \mathbb{Z}$ denotes the ceiling function. Note that Proposition 1 allows the symmetry of the weights $w_1, w_2, \ldots, w_m$ to be studied by just looking at $w_{k_1}$ and $w_{k_2}$. Therefore, if the weights are symmetric, then $w_{k_1} = w_{k_2}$, from which the constraint $k_1 - ma = mb + 1 - k_2$ is obtained, whereas comparing the number of null weights on each side leads to $k_1 - 1 = m - k_2$. Combining both constraints yields $a + b = 1$. This fact induces a constraint on $a$ and $b$: if they are not chosen in a symmetric way, i.e. $b = 1 - a$, the aggregation gives more importance to certain extreme values, which may make no sense when applied in some real-world problems like consensus models for GDM.
Example 1. Consider a problem in which five experts express their preferences through the vector $P = (P_1, P_2, P_3, P_4, P_5) = (1, 1, 0.75, 0.5, 0.5)$ on how much they prefer the alternative $X_1$ to the alternative $X_2$. When using the linear RIM quantifier $Q_{a,b}$ for $a = 0.4$ and $b = 0.8$, the weighting vector obtained from Proposition 1 is $W = (0, 0, 0.5, 0.5, 0)$. When fusing the preferences in the previous problem, the aggregation of these preferences is $0.5 \cdot 0.75 + 0.5 \cdot 0.5 = 0.625$, which deviates from the median value of the preferences, i.e. 0.75. Note that this deviation implies introducing a bias in the computations, because equally extreme values of the preferences are not weighted in the same way: the distances between the median value and both $P_2$ and $P_4$ are the same, but this is not reflected in the aggregations made by this operator, which prioritizes the value $P_4$ because it is lower than the median value.
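The bias in this example can be reproduced numerically. The weighting vector below is the one discussed in the text; it is obtainable from $Q_{a,b}$ with $a = 0.4$ and $b = 0.8$, parameters we infer from the stated aggregation value, so treat them as an assumption:

```python
def owa(weights, values):
    """OWA aggregation: sort the inputs in decreasing order and apply the
    position-dependent weights."""
    return sum(w * x for w, x in zip(weights, sorted(values, reverse=True)))

weights = [0.0, 0.0, 0.5, 0.5, 0.0]  # ignores P_1, P_2 and P_5 entirely
prefs = [1, 1, 0.75, 0.5, 0.5]
print(owa(weights, prefs))  # 0.625, below the median 0.75: a biased result
```

For comparison, the arithmetic mean (uniform weights) yields 0.75, coinciding here with the median.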
In addition, note that there are just three possibilities for the value of $w_k$: zero, the constant $\frac{1}{m(b-a)}$, or one of the (at most two) boundary values given in Proposition 1. The fact that there are just a few possible values for the weights goes somehow against the fuzzy logic view: it would be convenient for the values of the weights to change smoothly from the minimum possible value to the maximum one, instead of jumping drastically from zero to $\frac{1}{m(b-a)}$, as occurs with the weights associated with the linear RIM quantifier $Q_{a,b}$. To summarize, the main shortcomings of the linear RIM quantifier (see Fig. 1) are:
i) If $a$ is too high or $b$ is too low, the aggregations are non-realistic because too many extreme values are ignored,
ii) If $a + b \neq 1$, the results of the aggregations are biased,
iii) The obtained weights go against the fuzzy logic philosophy.
Therefore, we propose an alternative method to select the OWA weights which not only guarantees that the most extreme values are taken into account, but also allows the user to control the relevance given to these values. Our aim is to aggregate elements in a more realistic, non-biased way.

An OWA operator based on Extreme Values Reductions
This section presents the main novelty of this contribution, namely the EVR-OWA operator. This operator is based on the notion of Extreme Values Reduction detailed in Section 2.3. The properties of EVRs are applied to construct an OWA operator which has measures similar to those of the arithmetic mean, but gives more importance to the intermediate values and less importance to the most extreme ones, in order to smooth out the importance of polarized opinions in GDM while taking them into account instead of ignoring them.
Let us start by analyzing the fourth thesis of Theorem 1. Consider an EVR $\hat{D} : [0,1] \to [0,1]$ which is convex in $[0, 0.5]$ and concave in $[0.5, 1]$, and consider a partition of the interval $[0,1]$: for instance, we can take $m \in \mathbb{N}$ and define $x_k = \frac{k}{m}$ for $k = 0, 1, \ldots, m$. By convexity, the closer $x_k$ is to 0, the smaller the difference $|\hat{D}(x_k) - \hat{D}(x_k - \frac{1}{m})|$. A similar reasoning leads to the concave counterpart: the closer $x_k$ is to 1, the smaller the difference $|\hat{D}(x_k) - \hat{D}(x_k - \frac{1}{m})|$. This reasoning and the fact that EVRs are RIM quantifiers allow the weights of an OWA operator to be defined by using the general scheme introduced by Yager [21].
Definition 3 (EVR-OWA operator). Let $\hat{D}$ be an Extreme Values Reduction and consider $m \in \mathbb{N}$. We define

$$w_k = \hat{D}\left(\frac{k}{m}\right) - \hat{D}\left(\frac{k-1}{m}\right), \quad k = 1, 2, \ldots, m.$$

The family $W = \{w_1, w_2, \ldots, w_m\}$ receives the name of order-$m$ weights associated with the EVR $\hat{D}$, and the EVR-OWA operator associated with $\hat{D}$ is

$$\Phi_{\hat{D}}(x_1, \ldots, x_m) = \sum_{k=1}^{m} w_k x_{\sigma(k)},$$

where $\sigma$ is a permutation of the $m$-tuple $(1, 2, \ldots, m)$ such that $x_{\sigma(1)} \geq \cdots \geq x_{\sigma(m)}$.

We highlight the philosophy behind this operator by analyzing some properties of these weights. First, note that since $\hat{D}$ is strictly increasing, all of these weights are greater than zero. In addition, $\sum_{k=1}^{m} w_k = \hat{D}(1) - \hat{D}(0) = 1$, so the weights are properly defined.
By using the third condition of the EVR definition, i.e. the symmetry $\hat{D}(1-x) = 1 - \hat{D}(x)$, we obtain $w_{m-k+1} = \hat{D}\left(\frac{m-k+1}{m}\right) - \hat{D}\left(\frac{m-k}{m}\right) = \left(1 - \hat{D}\left(\frac{k-1}{m}\right)\right) - \left(1 - \hat{D}\left(\frac{k}{m}\right)\right) = w_k$ for every $k = 1, 2, \ldots, m$.
This symmetry and the aforementioned properties of EVRs, used as explained before, give us an idea of the distribution of these weights. On the one hand, the smallest values of $w_k$ are always located at $k = 1$ and $k = m$, i.e. $w_1 = w_m = \min_k w_k$.
We know that these weights are matched in pairs, so there is a minimum value at $w_1$, the values of the weights increase until a certain maximum value $w_{\max}$, and then, because of the symmetry $w_k = w_{m-k+1}$, the values of the weights decrease towards the value $w_m = w_1$. The position of the maximum value depends on the parity of $m$: when $m$ is even, the maximum is located at $k = \frac{m}{2}$ (and, by symmetry, also at $k = \frac{m}{2} + 1$, since $w_{m/2} = w_{m/2+1}$); when $m$ is odd, the maximum is located at $k = \frac{m+1}{2}$. Let us remark here that $w_{\max}$ corresponds to a kind of median position among the elements being aggregated. Note that the arithmetic mean of these weights is $\frac{1}{m}$, and the orness measure of the corresponding OWA operator must be equal to 0.5 because the weights are symmetric [1]. So $\mathrm{orness}(\Phi_{\hat{D}}) = 0.5$ for any EVR $\hat{D}$ and any $m \in \mathbb{N}$.
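The weight construction and the properties just discussed can be checked numerically. The EVR used below, $D(x) = (1 - \cos \pi x)/2$, is an illustrative choice of our own (a $C^1$ automorphism of $[0,1]$, symmetric with respect to the standard negation, convex near 0 and concave near 1), not one of the families proposed later in this contribution:

```python
import math

def evr_owa_weights(D, m):
    """Order-m weights associated with the EVR D: w_k = D(k/m) - D((k-1)/m)."""
    return [D(k / m) - D((k - 1) / m) for k in range(1, m + 1)]

def D(x):
    # Illustrative EVR (our assumption): (1 - cos(pi * x)) / 2.
    return (1 - math.cos(math.pi * x)) / 2

w = evr_owa_weights(D, 6)
print(w)  # strictly positive, symmetric, largest at the two central positions
```

For $m = 6$ the weights are approximately $(0.067, 0.183, 0.25, 0.25, 0.183, 0.067)$: all positive, symmetric, increasing up to the center and decreasing afterwards, exactly the distribution described above.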
To conclude this section, the main advantages of EVR-OWA operators are summarized (see Fig. 2). First, since their orness measure is equal to 0.5, the weights are symmetrically distributed and the aggregations give equal importance to the extreme values on both sides. In addition, all the weights are positive, so the most extreme values are always considered in the aggregation process. Finally, they are simple to compute, and the corresponding EVR can be used as a RIM quantifier in any scenario in which weights are required to prioritize intermediate values.

Examples of EVR-OWA operators
In this section, several families of EVRs are introduced, complementing the examples proposed in [4]. For the EVR-OWA operators associated with these families, their main measures, namely the arithmetic mean, the orness measure, the standard deviation and the entropy measure, are studied.
The EVR-OWA operator associated with $\hat{s}_a$

For $a \in \left]0, \frac{1}{2\pi}\right[$, consider the EVR $\hat{s}_a : [0,1] \to [0,1]$ given by $\hat{s}_a(x) = x - a\sin(2\pi x)$. The following result summarizes the performance of the EVR-OWA operator associated with $\hat{s}_a$ regarding its main measures: the arithmetic mean of the order-$m$ weights is $\frac{1}{m}$, the orness measure of the corresponding operator is 0.5, and the standard deviation of the weights is bounded by $\frac{2\pi a}{m}$.

Proof. The first two items are a consequence of the discussion made in the previous section. In order to show the third one, let us study the difference $|w_k - \bar{w}|$, where $\bar{w} = \frac{1}{m}$:

$$|w_k - \bar{w}| = a\left|\sin\left(2\pi\frac{k}{m}\right) - \sin\left(2\pi\frac{k-1}{m}\right)\right| = \frac{2\pi a}{m}\,|\cos(2\pi\xi)|,$$

where $\xi$ is given by the Mean Value Theorem. Therefore, $|w_k - \bar{w}| \leq \frac{2\pi a}{m}$ for all $k \in \{1, 2, \ldots, m\}$, and consequently the standard deviation of the $w_k$'s is bounded by $\frac{2\pi a}{m}$, which is a small value for $m$ large enough and $a \in \left]0, \frac{1}{2\pi}\right[$. For $m = 100$, Table 1 shows the calculations of the most standard measures for the weights obtained for different values of $a$ (keep in mind that for $a = 0$ we get the weights associated with the arithmetic mean operator). In Fig. 3 we show the comparison between the values of the weights obtained for several values of $a$.
In order to analyze the behavior of the entropy measure for this EVR-OWA operator, we provide the plot in Fig. 4. For each fuzzy linguistic quantifier, namely the identity function (arithmetic mean), $\hat{s}_{0.08}$, $\hat{s}_{0.15}$ and $Q_{0.2,0.8}$, the values of the entropy measure of the corresponding OWA operator have been computed for $m = 2, 3, \ldots, 1000$. The graph shows that the OWA operators constructed from these EVRs present higher entropy than the linear RIM quantifier $Q_{0.2,0.8}$.
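The entropy comparison can be reproduced numerically. We take $\hat{s}_a(x) = x - a\sin(2\pi x)$ with $a < \frac{1}{2\pi}$, which is our reading of the garbled definition of this family, so treat the exact form as an assumption:

```python
import math

def weights(Q, m):
    """Yager weights w_k = Q(k/m) - Q((k-1)/m)."""
    return [Q(k / m) - Q((k - 1) / m) for k in range(1, m + 1)]

def entropy(w):
    return -sum(x * math.log(x) for x in w if x > 0)

def s(a):
    # Sinusoidal EVR family (assumed form): strictly increasing for a < 1/(2*pi).
    return lambda x: x - a * math.sin(2 * math.pi * x)

def linear_rim(a, b):
    return lambda x: min(1.0, max(0.0, (x - a) / (b - a)))

m = 100
e_evr = entropy(weights(s(0.08), m))
e_rim = entropy(weights(linear_rim(0.2, 0.8), m))
print(e_evr, e_rim)  # the EVR-based weights retain more information
```

For $m = 100$, the linear RIM quantifier $Q_{0.2,0.8}$ concentrates all the mass on 60 equal weights (entropy $\ln 60 \approx 4.09$), while the EVR-based weights are all positive and stay close to the maximal entropy $\ln 100 \approx 4.61$.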

The EVR-OWA operator associated with $\hat{m}_a$
Let $a > 1$ and consider the function $\hat{m}_a : [0,1] \to [0,1]$. Note that $\hat{m}_a$ is not an EVR strictly speaking, since it is not differentiable at $x = \frac{1}{2}$. However, it satisfies the remaining conditions of the EVR definition, and therefore there is no problem in considering it as an EVR.
The following result summarizes the performance of the EVR-OWA operator associated with $\hat{m}_a$ regarding its main measures: the arithmetic mean of the weights is $\frac{1}{m}$, the orness measure is 0.5, and the standard deviation of the weights is bounded by a quantity $r_a \in \,]0,1[$ depending only on $a$.

Proof. The first two items are a consequence of the discussion made in the previous section. In order to show the third one, let us study the difference $|w_k - \bar{w}|$. To do that, consider the function $g(x) = \hat{m}_a(x) - x$ for $x \in [\frac{1}{2}, 1]$, which reaches its maximum value at a certain point $x_0 \in \,]\frac{1}{2}, 1[$ determined by $a$, so that $|\hat{m}_a(x) - x| \leq g(x_0)$ for every $x \in [\frac{1}{2}, 1]$. From this bound, the standard deviation of the weights can be bounded by a quantity $r_a$ which is increasing in $a$ and satisfies $r_a \in \,]0,1[$ for any value of $a$.
For $m = 100$, Table 2 shows the results of the calculations of the most standard measures for the weights (in this case, the weights associated with the arithmetic mean operator are given by $a = 1$). In Fig. 5 we show the comparison between the values of the weights obtained for different values of $a$. In order to analyze the behavior of the entropy measure for this EVR-OWA operator, we provide the plot in Fig. 6. For each fuzzy linguistic quantifier, namely the identity function (arithmetic mean), $\hat{m}_{1.5}$, $\hat{m}_2$, $\hat{m}_3$ and $Q_{0.3,0.7}$, the values of the entropy measure of the corresponding OWA operator have been computed for $m = 2, 3, \ldots, 1000$. The graph shows that the OWA operators constructed from these EVRs present higher entropy than the linear RIM quantifier $Q_{0.3,0.7}$.
The function $\hat{b}^{r,s}_a$ is an EVR if and only if a certain compatibility equality holds between the parameters, where $\lambda$, the derivative of the affine transformation $h_a$, indicates how far apart the intermediate values are moved. Some useful combinations of these parameters are shown in Table 3. Let us analyze the special case $r = s = 1/2$; note that this assumption implies that the affine transformation $h_a$ disappears. Then, for every $\lambda \in \,]1, \infty[$ we can find a unique $a \in \,]0,1[$ (namely, the one which satisfies the compatibility equality) such that the corresponding function $\hat{b}_a : [0,1] \to [0,1]$ is an EVR, and this construction recovers the $\hat{m}_a$ family of EVRs as a particular case. It should be highlighted that the previous reasoning allows us to deduce the existence and uniqueness of a value which guarantees that the EVR $\hat{b}^{r,s}_a$ is well defined for $\frac{1}{2} < r < s < 1$ and $a \in \,]0,1[$.
In the following, we use the notation $\hat{b}^{r,s}_a$ for the unique EVR which can be defined for the parameters $r < s$ and $a$.
The quality measures for the OWA weights generated by some of these EVRs in the order-100 case are summarised in Table 4, and the corresponding weights are sketched in Fig. 7. Note that when using the family $\hat{b}^{r,s}_a$, all the weights in the central interval determined by $r$ remain the same, and their value depends on the ratio determined by $r$ and $s$. On the other hand, the family $\hat{m}_a$ turned out to be a particular case of the family $\hat{b}^{r,s}_a$ and provides a simple way of generating OWA weights which give much more importance to the intermediate values than to the extreme ones. Finally, the family $\hat{b}^{r,s}_a$ allows the user to control the number of intermediate values which receive the highest relevance in the aggregations, as well as the exact weights for these values.

Symmetric weights and EVRs
In Section 4 it was proved that any EVR produces a family of symmetric weights which give more importance to the intermediate values in the OWA aggregations. This section is devoted to the reciprocal statement, i.e., to some extent, any family of symmetric weights which prioritizes intermediate values is obtained from an EVR function. First, a result is proposed which highlights an interesting property relating the weights associated with a fuzzy linguistic quantifier $Q$ and its derivative.
Proposition 8. Let $Q : [0,1] \to [0,1]$ be a fuzzy linguistic quantifier of class $C^1$ with derivative $q = Q'$, and let $w_1, \ldots, w_m$ be its associated OWA weights. Then

$$w_k = Q\left(\frac{k}{m}\right) - Q\left(\frac{k-1}{m}\right) = \frac{1}{m}\, q(\xi_k),$$

where $\xi_k \in \left[\frac{k-1}{m}, \frac{k}{m}\right]$. In addition,

$$\sum_{k=1}^{m} w_k = \int_0^1 q(t)\,dt = Q(1) - Q(0).$$

Proof. It is a consequence of the Mean Value Theorem and the Fundamental Theorem of Calculus.
The following theorem provides the reciprocal statement: given $w_1, \ldots, w_m$, under certain conditions Theorem 2 assures that we can find an EVR such that, when Yager's method is applied to compute the associated OWA weights, the obtained values are precisely $w_1, \ldots, w_m$.

Theorem 2. Let $w = (w_1, \ldots, w_m)$ be a family of positive symmetric weights which prioritize the intermediate values, and let $q : [0,1] \to \mathbb{R}^+_0$ be a continuous function satisfying:
1. $\int_{(k-1)/m}^{k/m} q(t)\,dt = w_k$ for every $k = 1, \ldots, m$,
2. $q(1-t) = q(t)$ for every $t \in [0,1]$,
3. $q(0) = q(1) = 0$,
4. $q(t) > 0$ for every $t \in \,]0,1[$, and $q$ is non-decreasing near 0 (and, by symmetry, non-increasing near 1).

Then the function $Q : [0,1] \to [0,1]$ given by $Q(x) := \int_0^x q(t)\,dt$ is an EVR whose associated EVR-OWA operator is determined by the weighting vector $w$.
Proof. Let us check the properties which characterize EVRs.
1. $Q$ is a function of class $C^1$: clear by the Fundamental Theorem of Calculus.
2. $Q$ is an increasing automorphism: since $q \geq 0$, $Q$ must be a non-decreasing function, and the 4th hypothesis guarantees that, in fact, $Q$ is strictly increasing. In addition, $Q(0) = 0$ and $Q(1) = \int_0^1 q(t)\,dt = \sum_{k=1}^{m} w_k = 1$.
3. Involution with respect to the standard negation: note that $Q(1-x) = \int_0^{1-x} q(t)\,dt = 1 - \int_{1-x}^{1} q(t)\,dt = 1 - \int_0^x q(1-u)\,du = 1 - Q(x)$.
Therefore, when using Yager's method to compute the weights for the RIM quantifier $Q$, the obtained family is the original one. Let us analyze the hypotheses of the previous result. The first one just relates the derivative of the EVR $Q$ with the given weights, so that when using Proposition 8 to compute the corresponding weights, the obtained values are $w_1, \ldots, w_m$. The second one is a symmetry condition. The third condition is easy to obtain because the minimum values for a family of weights which prioritizes intermediate values must be lower than the arithmetic mean of these weights, i.e. $w_1 < \frac{1}{m}$ and $w_m < \frac{1}{m}$. The last hypothesis is related to both the symmetry condition and the fact that the weights for the intermediate values should be higher than the weights for the more extreme values.
In order to construct the function $q : [0,1] \to \mathbb{R}^+_0$ required in the previous result, we suggest using a continuous linear spline, whose restriction to the interval $\left[\frac{k-1}{m}, \frac{k}{m}\right]$ is denoted $q_k := q|_{[(k-1)/m,\, k/m]}$. If $m$ is even, we can consider the linear functions $q_k(t) = a_k + b_k t$ for $k = 1, \ldots, \frac{m}{2}$, which provide a total of $m$ parameters. By considering the integral equations $\int_{(k-1)/m}^{k/m} q_k(t)\,dt = w_k$, the boundary condition $q_1(0) = a_1 = 0$, and the continuity conditions $q_k\left(\frac{k}{m}\right) = q_{k+1}\left(\frac{k}{m}\right)$, we obtain a total of $m$ linear constraints, whose matrix representation is an $m \times m$ linear system.
The determinant of this $m \times m$ matrix is non zero, and therefore the linear system of equations has a unique solution. By using this solution to determine the functions $q_1, q_2, \ldots, q_{m/2}$, a piecewise linear function $q : [0, \frac{1}{2}] \to \mathbb{R}^+_0$ is obtained. This piecewise function can be extended to the entire interval $[0,1]$ by defining $q(t) := q(1-t)$ for $t \in \,]\frac{1}{2}, 1]$. If $m$ is odd, a similar reasoning can be developed by considering the system of equations consisting of the integral conditions, the boundary condition $q_1(0) = a_1 = 0$, and the continuity conditions for $k = 1, \ldots, \frac{m-1}{2}$, which involves $m+1$ parameters and $m+1$ linear constraints. The determinant of the corresponding matrix is again non zero, so this system also has a unique solution.
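The spline construction above can be sketched with a simplified sequential solve instead of assembling the full matrix (the helper name and the trapezoid-rule shortcut are ours). Starting from $q(0) = 0$, each integral condition fixes the next node value; the sketch assumes the weights vary moderately, so that all node values stay non-negative:

```python
def spline_nodes(w):
    """Node values a_0, ..., a_m of a continuous piecewise-linear q with
    q(k/m) = a_k, q(0) = 0, and integral of q over [(k-1)/m, k/m] equal
    to w_k. On a linear piece that integral is the trapezoid
    (a_{k-1} + a_k) / (2m), so each condition fixes the next node."""
    m = len(w)
    nodes = [0.0]                            # boundary condition q(0) = 0
    for k in range(1, m + 1):
        nodes.append(2 * m * w[k - 1] - nodes[-1])
    return nodes

w = [1/12, 2/12, 3/12, 3/12, 2/12, 1/12]     # symmetric, intermediate-first
nodes = spline_nodes(w)
print(nodes)  # starts and ends at 0; Q(x) is then the integral of q up to x
```

For symmetric weights the recovered nodes are themselves symmetric and return to 0 at $t = 1$, so the resulting $Q(x) = \int_0^x q(t)\,dt$ satisfies $Q(1) = \sum_k w_k = 1$.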

An illustrative case of aggregation
In this section, an illustrative example is provided in order to show the theoretical contents of this work in a practical environment.
Suppose a group consisting of 10 experts who give their opinions on how much they prefer a certain alternative $x_1$ to the alternative $x_2$. These preferences are denoted as $p_k := p^k_{1,2} \in [0,1]$, $k = 1, 2, \ldots, 10$. Table 5 compiles the standard measures for the weighting vectors obtained for different fuzzy linguistic quantifiers, and Fig. 9 shows the distribution of the weighting vector of order 10 associated with the EVR $\hat{s}_{0.08}$ around its mean value. Table 5 shows that the OWA operator associated with the EVR $\hat{s}_a$ provides, in general, values for the entropy measure which are greater than those obtained for the OWA operator associated with the linear RIM quantifier $Q_{a,b}$. Consequently, the aggregations made by using the EVR-OWA operator take into account more information than the aggregations based on the linear RIM quantifiers $Q_{0.2,0.8}$ or $Q_{0.3,0.7}$, which are commonly used in the literature. In addition, the standard deviations obtained for the EVR-OWA operators are lower than those obtained for the linear RIM quantifiers $Q_{0.2,0.8}$ and $Q_{0.3,0.7}$. Furthermore, whereas the weights associated with linear RIM quantifiers just take the minimum value (0) or the maximum value (0.125, 0.17 or 0.25, depending on the case), the weights computed by using the EVR $\hat{s}_a$ take different values in the interval defined by the respective minimum and maximum.
Note that the median of the preferences is 0.3097119. When using the linear RIM quantifier $Q_{a,b}$, a reduction in the quantity $b - a$ translates into a value for the collective preference which is closer to the median preference and farther from the collective preference given by the arithmetic mean operator. However, when using the EVR-OWA operator associated with the EVR $\hat{s}_a$, changes in the parameter $a$ allow the importance given to the most extreme values to be controlled without losing too much information in the aggregation process (higher entropy measure), thus obtaining values for the collective preference similar to those obtained with the arithmetic mean operator.

Results and discussion
Currently, RIM quantifiers are widely used to compute the weights for OWA operators due to their simplicity and their applications in several contexts, such as poset environments [7]. Even though several families of RIM quantifiers have been proposed in the specialized literature [17], one of the most extended approaches consists of using linear RIM quantifiers $Q_{a,b} : [0,1] \to [0,1]$, where $0 \leq a < b \leq 1$, to define OWA operators.
First, this study has analyzed the limitations of using these linear RIM quantifiers to generate OWA weights (RQ1). To do so, the impact of the parameters $a$ and $b$ which define this quantifier has been measured by examining the corresponding standard quality measures. The obtained results are:
i) If $a$ is too high or $b$ is too low, the aggregations are non-realistic because too many extreme values are ignored, i.e. the entropy of the obtained OWA operator is too low.
ii) If $a + b \neq 1$, the aggregations are biased, i.e. the orness measure of the obtained operator is not 0.5, and therefore they are not suitable for those real-world applications which require symmetry around the median value.
iii) With just two possible exceptions, the value $w_k$ is either zero or the constant $\frac{1}{m(b-a)}$.
Therefore, the progressive behavior of the weighting vector calculated by using the EVR approach is more in line with the fuzzy environment we are dealing with: whereas the classical linear RIM quantifier $Q_{a,b}$ completely ignores the most extreme elements, the use of EVR-based RIM quantifiers proposed here provides a progressive reduction of the importance given to the most extreme values, which is closer to the fuzzy philosophy.
Second, the EVR-OWA operator, which is focused on overcoming these limitations, has been introduced. Several families of EVRs have been proposed, and it has been proved that their corresponding EVR-OWA operators satisfy the following properties:
- Symmetry: w_1 = w_m, w_2 = w_{m−1}, w_3 = w_{m−2}, ...
- No weight w_k is zero.
- The smallest weights are the first and, by symmetry, the last ones; the largest weights are the intermediate ones.
- Their arithmetic mean is 1/m.
- Their standard deviation is close to zero.
- Their orness measure is 0.5.
- Their entropy is similar to the entropy of the arithmetic mean operator, which is the maximum possible value.
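These properties can be spot-checked numerically. The quantifier Q(x) = 3x² − 2x³ used below is only an illustrative symmetric quantifier (it satisfies Q(x) + Q(1 − x) = 1 and its derivative peaks at x = 1/2); it is a stand-in with the same qualitative shape, not one of the EVR families proposed in the paper:

```python
# Numerical check of the listed properties using the symmetric smooth
# quantifier Q(x) = 3x^2 - 2x^3 (an assumed illustrative quantifier,
# NOT one of the paper's EVR families).
from math import log

def q_smooth(x):
    return 3 * x**2 - 2 * x**3

m = 10
w = [q_smooth(k / m) - q_smooth((k - 1) / m) for k in range(1, m + 1)]

# Symmetry w_k = w_{m+1-k}, strictly positive weights, largest in the middle.
assert all(abs(w[k] - w[m - 1 - k]) < 1e-12 for k in range(m))
assert all(wk > 0 for wk in w)
assert max(w) in (w[m // 2 - 1], w[m // 2])

orness = sum((m - k) * wk for k, wk in enumerate(w, start=1)) / (m - 1)
entropy = -sum(wk * log(wk) for wk in w)
print(w)
print(orness, entropy, log(m))  # orness 0.5; entropy close to the maximum ln m
```

For m = 10 this yields weights rising from 0.028 at the extremes to 0.148 in the middle, orness exactly 0.5, and entropy around 2.19, close to the maximum ln 10 ≈ 2.30 of the arithmetic mean operator.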
Therefore, OWA operators based on EVRs show a considerably higher performance than those based on linear RIM quantifiers, while keeping their simplicity and applicability (RQ2).
Finally, it has been proved (Theorem 2) that any family of positive symmetric weights which prioritizes the intermediate information is actually the family of OWA weights obtained from a certain EVR (RQ3).
To sum up, this study has highlighted the main disadvantages of using linear RIM quantifiers in OWA aggregations, concluding that these kinds of weights are not of high quality when non-biased aggregations which prioritize intermediate information without ignoring the most extreme values are required. The EVR-OWA operator, which is based on using EVRs as fuzzy linguistic quantifiers, overcomes these disadvantages by providing families of positive symmetric weights which prioritize the intermediate information. Furthermore, it has been proved that any other family of weights which satisfies these requirements can be obtained from a certain EVR, which guarantees that EVRs are a very complete proposal to generate weights with such properties.

Conclusions and future research
Linear RIM quantifiers are widely accepted in the specialized literature [7,17], but they present several shortcomings. The novel EVR-OWA operator proposes using EVRs as fuzzy linguistic quantifiers in order to overcome these limitations, obtaining an OWA operator which takes the most extreme values into account but gives more importance to the intermediate ones.
The aggregations made by EVR-OWA operators are not only more consistent with the fuzzy logic view than those obtained by using the linear RIM quantifier Q_{a,b}, but are also better suited for certain real-world applications such as consensus models for GDM [6,9], since they aggregate preferences in a non-biased way and allow more information to be taken into account in the aggregation process.
In addition, the abstract nature of this proposal not only provides a simple and general method to obtain OWA weights with such properties, but also gives a characterization relating EVRs to those families of symmetric positive OWA weights which prioritize intermediate values: for every weighting vector w with these properties, an EVR function can be found such that the weighting vector associated with this EVR is w.
Further studies should address the behavior of these weights in large-scale GDM problems and consensus models in which preferences tend to be polarized. In addition, it would be interesting to compute the optimal parameters for the proposed EVR families which maximize the entropy measure of the aggregations under certain constraints. Another research line could be to extend the proposed methods to other contexts such as Pythagorean fuzzy uncertain environments [16] or ELICIT information [8].

Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.