An Analysis of Winsorized Weighted Means

Abstract

The Winsorized mean is a well-known robust estimator of the population mean. It can also be seen as a symmetric aggregation function (in fact, it is an ordered weighted averaging operator), which means that the information sources (for instance, criteria or experts’ opinions) have the same importance. However, in many practical applications (for instance, in many multiattribute decision making problems) it is necessary to consider that the information sources have different importance. For this reason, in this paper we propose a natural generalization of the Winsorized means so that the sources of information can be weighted differently. The new functions, which we will call Winsorized weighted means, are a specific case of the Choquet integral and they are analyzed through several indices for which we give closed-form expressions: the orness degree, k-conjunctiveness and k-disjunctiveness indices, veto and favor indices, Shapley values and interaction indices. We also provide a closed-form expression for the Möbius transform and we show how we can aggregate data so that each information source has the desired weighting and outliers have no influence on the aggregated value.
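
As a rough illustration of the construction described above (the paper's formal definition, expressions (1) and (2), is not reproduced on this page), the sketch below computes a Winsorized weighted mean as a Choquet integral with respect to the capacity \(\mu _{\varvec{p}}^{(r,s)}\) in the form used throughout the proofs in the appendix: zero on coalitions of at most \(s\) sources, additive in between, and one on coalitions of at least \(n-r\) sources. The code and all names in it are ours, not the author's.

```python
# Minimal sketch (ours, not the paper's code) of a Winsorized weighted mean
# computed as a Choquet integral.

def capacity(T, p, r, s):
    """Capacity mu_p^{(r,s)}(T) in the form used throughout the appendix."""
    n, t = len(p), len(T)
    if t <= s:
        return 0.0
    if t >= n - r:
        return 1.0
    return sum(p[i] for i in T)

def winsorized_weighted_mean(x, p, r, s):
    """Choquet integral of x with respect to capacity(., p, r, s)."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])   # sources ordered by increasing value
    total = 0.0
    for k in range(n):
        upper = capacity(order[k:], p, r, s)       # sources holding the n-k largest values
        lower = capacity(order[k + 1:], p, r, s)
        total += x[order[k]] * (upper - lower)
    return total

# Four experts with importances p; the last one reports an outlying score.
p = [0.4, 0.3, 0.2, 0.1]
x = [7.0, 6.5, 6.8, 0.5]
print(winsorized_weighted_mean(x, p, r=1, s=1))    # about 6.68, vs. 6.16 for the plain weighted mean
```

With these data the extreme low and high scores are, in effect, replaced by their nearest retained order statistics before the weights are applied, so the outlying 0.5 does not drag the aggregated value down.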

Notes

  1. It is worth noting that the choice of the weight distribution has generated a large literature [in the case of OWA operators, see, for instance, Llamazares (2007), Liu (2011), Bai et al. (2017) and Lenormand (2018)].

  2. In this framework, outliers may arise because, for instance, the same subject has been taught by different teachers, students have been ill, or they have copied answers.

  3. There is an abundant literature on this subject; see, for instance, Iglewicz and Hoaglin (1993), Barnett and Lewis (1994), Wilcox and Keselman (2003), Seo (2006) and Aggarwal (2017).

  4. In general, the pair \((r,s)\) is obtained by taking \(r=\max r_i\) and \(s=\max s_i\), where \((r_i,s_i)\) is the pair used for removing the outlier of the \(i\)th alternative.

  5. Notice that we use the convention \(\sum _{t=p}^{q} f(t) = 0\) when \(p> q\).

References

  • Aggarwal CC (2017) Outlier analysis, 2nd edn. Springer, Cham

  • Bai C, Zhang R, Song C, Wu Y (2017) A new ordered weighted averaging operator to obtain the associated weights based on the principle of least mean square errors. Int J Intell Syst 32(3):213–226

  • Barnett V, Lewis T (1994) Outliers in statistical data, 3rd edn. Wiley, Chichester

  • Beliakov G (2018) Comparing apples and oranges: the weighted OWA function. Int J Intell Syst 33(5):1089–1108

  • Beliakov G, Dujmović J (2016) Extension of bivariate means to weighted means of several arguments by using binary trees. Inf Sci 331:137–147

  • Choquet G (1953) Theory of capacities. Ann Inst Fourier 5:131–295

  • Denneberg D (1994) Non-additive measures and integral. Kluwer Academic Publisher, Dordrecht

  • Dixon WJ (1960) Simplified estimation from censored normal samples. Ann Math Stat 31(2):385–391

  • Dubois D, Koning JL (1991) Social choice axioms for fuzzy set aggregation. Fuzzy Sets Syst 43(3):257–274

  • Grabisch M (1995) Fuzzy integral in multicriteria decision making. Fuzzy Sets Syst 69(3):279–298

  • Grabisch M (1997) \(k\)-order additive discrete fuzzy measures and their representation. Fuzzy Sets Syst 92(2):167–189

  • Grabisch M (2016) Set functions, games and capacities in decision making, theory and decision library, series C, vol 46. Springer, Berlin

  • Grabisch M, Labreuche C (2010) A decade of application of the Choquet and Sugeno integrals in multi-criteria decision aid. Ann Oper Res 175(1):247–286

  • Grabisch M, Labreuche C (2016) Fuzzy measures and integrals in MCDA. In: Greco S, Ehrgott M, Figueira RJ (eds) Multiple criteria decision analysis: state of the art surveys, international series in operations research and management science, vol 233, 2nd edn. Springer, New York, pp 553–603

  • Grabisch M, Marichal J, Mesiar R, Pap E (2009) Aggregation functions. Cambridge University Press, Cambridge

  • Harsanyi JC (1959) A bargaining model for cooperative \(n\)-person games. In: Tucker AW, Luce RD (eds) Contributions to the theory of games. Vol. 4, Annals of Mathematics Studies, vol 40. Princeton University Press, Princeton, pp 325–355

  • Heilpern S (2002) Using Choquet integral in economics. Stat Pap 43(1):53–73

  • Hoitash U, Hoitash R (2009) Conflicting objectives within the board: evidence from overlapping audit and compensation committee members. Group Decis Negot 18(1):57–73

  • Huber PJ, Ronchetti EM (2009) Robust statistics, 2nd edn. Wiley, Hoboken

  • Iglewicz B, Hoaglin D (1993) How to detect and handle outliers. ASQC Quality Press, Milwaukee

  • Komorníková M, Mesiar R (2011) Aggregation functions on bounded partially ordered sets and their classification. Fuzzy Sets Syst 175(1):48–56

  • Lenormand M (2018) Generating OWA weights using truncated distributions. Int J Intell Syst 33(4):791–801

  • Leys C, Ley C, Klein O, Bernard P, Licata L (2013) Detecting outliers: do not use standard deviation around the mean, use absolute deviation around the median. J Exp Soc Psychol 49(4):764–766

  • Liu X (2011) A review of the OWA determination methods: classification and some extensions. In: Yager RR, Kacprzyk J, Beliakov G (eds) Recent developments in the ordered weighted averaging operators: theory and practice. Springer, Berlin, pp 49–90

  • Llamazares B (2007) Choosing OWA operator weights in the field of social choice. Inf Sci 177(21):4745–4756

  • Llamazares B (2013) An analysis of some functions that generalizes weighted means and OWA operators. Int J Intell Syst 28(4):380–393

  • Llamazares B (2015a) Constructing Choquet integral-based operators that generalize weighted means and OWA operators. Inf Fus 23:131–138

  • Llamazares B (2015b) A study of SUOWA operators in two dimensions. Math Probl Eng 2015: Article ID 271491

  • Llamazares B (2016a) A behavioral analysis of WOWA and SUOWA operators. Int J Intell Syst 31(8):827–851

  • Llamazares B (2016b) SUOWA operators: constructing semi-uninorms and analyzing specific cases. Fuzzy Sets Syst 287:119–136

  • Llamazares B (2018a) Closed-form expressions for some indices of SUOWA operators. Inf Fus 41:80–90

  • Llamazares B (2018b) Construction of Choquet integrals through unimodal weighting vectors. Int J Intell Syst 33(4):771–790

  • Llamazares B (2019a) SUOWA operators: a review of the state of the art. Int J Intell Syst 34(5):790–818

  • Llamazares B (2019b) SUOWA operators: an analysis of their conjunctive/disjunctive character. Fuzzy Sets Syst 357:117–134

  • Marichal JL (1998) Aggregation operators for multicriteria decision aid. Ph.D. thesis, University of Liège

  • Marichal JL (2004) Tolerant or intolerant character of interacting criteria in aggregation by the Choquet integral. Eur J Oper Res 155(3):771–791

  • Marichal JL (2007) \(k\)-intolerant capacities and Choquet integrals. Eur J Oper Res 177(3):1453–1468

  • Murofushi T, Soneda S (1993) Techniques for reading fuzzy measures (iii): interaction index. In: Proceedings of the 9th fuzzy systems symposium, Sapporo (Japan), pp 693–696

  • Murofushi T, Sugeno M (1991) A theory of fuzzy measures. Representation, the Choquet integral and null sets. J Math Anal Appl 159(2):532–549

  • Murofushi T, Sugeno M (1993) Some quantities represented by the Choquet integral. Fuzzy Sets Syst 56(2):229–235

  • Owen G (1972) Multilinear extensions of games. Manag Sci 18(5–part–2):64–79

  • Riordan J (1968) Combinatorial identities. Wiley, New York

  • Rota GC (1964) On the foundations of combinatorial theory I. Theory of Möbius functions. Z Wahrscheinlichkeitstheorie Verwandte Geb 2(4):340–368

  • Seo S (2006) A review and comparison of methods for detecting outliers in univariate data sets. Master’s thesis, University of Pittsburgh

  • Shapley LS (1953) A value for \(n\)-person games. In: Kuhn H, Tucker AW (eds) Contributions to the theory of games, vol 2. Princeton University Press, Princeton, pp 307–317

  • Sugeno M (1974) Theory of fuzzy integrals and its applications. Ph.D. thesis, Tokyo Institute of Technology

  • Torra V (1997) The weighted OWA operator. Int J Intell Syst 12(2):153–166

  • Tukey JW (1977) Exploratory data analysis. Addison-Wesley, Reading

  • Wainer H (1976) Robust statistics: a survey and some prescriptions. J Educ Stat 1(4):285–312

  • Wilcox RR (2012) Modern statistics for the social and behavioral sciences: a practical introduction. CRC Press, Boca Raton

  • Wilcox RR, Keselman HJ (2003) Modern robust data analysis methods: measures of central tendency. Psychol Methods 8(3):254–274

  • Yager RR (1988) On ordered weighted averaging operators in multicriteria decision making. IEEE Trans Syst Man Cybern 18(1):183–190

  • Zhang Z, Xu Z (2014) Analysis on aggregation function spaces. Int J Uncertain Fuzziness Knowl Based Syst 22(05):737–747

Acknowledgements

The author is grateful to two anonymous referees for valuable suggestions and comments. This work is partially supported by the Spanish Ministry of Economy and Competitiveness (Project ECO2016-77900-P) and ERDF.

Author information

Corresponding author

Correspondence to Bonifacio Llamazares.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Proofs

We first recall the definition and some properties of binomial coefficients (see, for instance, Riordan (1968, pp. 1–3) and Grabisch (2016, p. 3)).

Remark 2

Let \(m\in \mathbb {N}\) and \(k \in \mathbb {Z}\). Then:

  1.

    \(\displaystyle \left( {\begin{array}{c}m\\ k\end{array}}\right) ={\left\{ \begin{array}{ll} \displaystyle \frac{m!}{k!(m-k)!} &{} \text {if } 0\le k \le m,\\ 0 &{} \text {otherwise}. \end{array}\right. }\)

  2.

    \(\displaystyle \left( {\begin{array}{c}m\\ k\end{array}}\right) = \left( {\begin{array}{c}m\\ m-k\end{array}}\right) \).

  3.

    If \(0 \le k \le m\), then \(\displaystyle \sum _{j=0}^{k} (-1)^{j} \left( {\begin{array}{c}m\\ j\end{array}}\right) = (-1)^{k} \left( {\begin{array}{c}m-1\\ k\end{array}}\right) \).
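
As a quick numerical check of the third identity above (the snippet is ours and purely illustrative), one can verify it for small values of \(m\) and \(k\):

```python
# Check: sum_{j=0}^{k} (-1)^j C(m, j) = (-1)^k C(m-1, k) for 0 <= k <= m.
from math import comb

for m in range(1, 9):
    for k in range(m + 1):
        lhs = sum((-1) ** j * comb(m, j) for j in range(k + 1))
        assert lhs == (-1) ** k * comb(m - 1, k)   # comb(m-1, k) is 0 when k = m
```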

The following simple remarks on summation will be useful in some of the proofs.

Remark 3

Let \(p,q\in \mathbb {N}\), with \(p\le q+1\).Footnote 5 Then:

$$\begin{aligned} \sum _{t=p}^{q} t = \sum _{t=1}^{q} t - \sum _{t=1}^{p-1} t = \frac{q(q+1) - p(p-1)}{2}. \end{aligned}$$

Remark 4

Let \(\varvec{p}\) be a weighting vector. If \(\varnothing \subsetneq A \subseteq N\) and \(t\ge 1\), then

$$\begin{aligned} \sum _{\begin{array}{c} T\subseteq A \\ |T|=t \end{array}} \sum _{i\in T} p_i = \left( {\begin{array}{c}|A|-1\\ t-1\end{array}}\right) \sum _{i\in A} p_i. \end{aligned}$$

In particular, when \(A = N\) we have

$$\begin{aligned} \sum _{\begin{array}{c} T\subseteq N \\ |T|=t \end{array}} \sum _{i\in T} p_i = \left( {\begin{array}{c}n-1\\ t-1\end{array}}\right) \sum _{i=1}^n p_i = \left( {\begin{array}{c}n-1\\ t-1\end{array}}\right) =\left( {\begin{array}{c}n\\ t\end{array}}\right) \frac{t}{n}. \end{aligned}$$

Remark 5

Let \(\varvec{p}\) be a weighting vector and \(j\in N\). If \(t\ge 1\), then

$$\begin{aligned} \sum _{\begin{array}{c} T\subseteq N{\setminus } \{j\} \\ |T|=t \end{array}} \sum _{i\in T} p_i = \left( {\begin{array}{c}n-2\\ t-1\end{array}}\right) \sum _{\begin{array}{c} i=1 \\ i\ne j \end{array}}^{n} p_i = \left( {\begin{array}{c}n-2\\ t-1\end{array}}\right) (1-p_{j}) = \left( {\begin{array}{c}n-1\\ t\end{array}}\right) \frac{t(1-p_{j})}{n-1}. \end{aligned}$$

Remark 6

Let \(\varvec{p}\) be a weighting vector and \(j,k\in N\). If \(t\ge 1\), then

$$\begin{aligned} \sum _{\begin{array}{c} T\subseteq N{\setminus } \{j,k\} \\ |T|=t \end{array}} \sum _{i\in T} p_i&= \left( {\begin{array}{c}n-3\\ t-1\end{array}}\right) \sum _{\begin{array}{c} i=1 \\ i\ne j,k \end{array}}^{n} p_i = \left( {\begin{array}{c}n-3\\ t-1\end{array}}\right) (1-p_{j}-p_{k})\\&= \left( {\begin{array}{c}n-2\\ t\end{array}}\right) \frac{t(1-p_{j}-p_{k})}{n-2}. \end{aligned}$$
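
Remarks 4–6 follow from counting how many of the size-\(t\) subsets \(T\) contain a fixed index \(i\). The snippet below (ours, purely illustrative) checks Remarks 3–6 numerically for a random weighting vector:

```python
# Numerical check of Remarks 3-6; variable names are ours.
from itertools import combinations
from math import comb, isclose
import random

# Remark 3: sum of consecutive integers (empty sum when hi = lo - 1).
for lo in range(0, 7):
    for hi in range(lo - 1, 7):
        assert sum(range(lo, hi + 1)) == (hi * (hi + 1) - lo * (lo - 1)) // 2

n = 6
raw = [random.random() for _ in range(n)]
p = [v / sum(raw) for v in raw]                    # weighting vector, sums to 1
N = list(range(n))

# Remark 4 with A = N: the double sum equals C(n-1, t-1).
for t in range(1, n + 1):
    assert isclose(sum(sum(p[i] for i in T) for T in combinations(N, t)),
                   comb(n - 1, t - 1))

# Remark 5: subsets avoiding a fixed index j.
j = 2
for t in range(1, n):
    assert isclose(sum(sum(p[i] for i in T)
                       for T in combinations([i for i in N if i != j], t)),
                   comb(n - 2, t - 1) * (1 - p[j]))

# Remark 6: subsets avoiding two fixed indices j and k.
j, k = 1, 4
rest = [i for i in N if i not in (j, k)]
for t in range(1, n - 1):
    assert isclose(sum(sum(p[i] for i in T) for T in combinations(rest, t)),
                   comb(n - 3, t - 1) * (1 - p[j] - p[k]))
```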

Proof of Proposition 1

Let \(\varvec{p}\) be a weighting vector and \((r,s)\in \mathcal {R}\). Since

$$\begin{aligned} {{\,\mathrm{orness}\,}}\!\big (M_{\varvec{p}}^{(r,s)}\big ) = \frac{1}{n-1}\sum _{t=1}^{n-1}\frac{1}{\left( {\begin{array}{c}n\\ t\end{array}}\right) } \sum _{\begin{array}{c} T\subseteq N \\ |T|=t \end{array}} \mu _{\varvec{p}}^{(r,s)}(T), \end{aligned}$$

and \(\mu _{\varvec{p}}^{(r,s)}\) is given by expression (1) (or expression (2) when \(r+s=n-1\)), we distinguish two cases:

  1.

    If \(r+s=n-1\), then

    $$\begin{aligned} {{\,\mathrm{orness}\,}}\big (M_{\varvec{p}}^{(r,s)}\big ) = \frac{1}{n-1}\sum _{t=s+1}^{n-1} 1 = \frac{n-s-1}{n-1} = \frac{r}{n-1}. \end{aligned}$$
  2.

    If \(r+s<n-1\), then, by Remarks 4 and 3, we have

    $$\begin{aligned}&{{\,\mathrm{orness}\,}}\big (M_{\varvec{p}}^{(r,s)}\big )\\&\quad = \frac{1}{n-1} \left( \sum _{t=s+1}^{n-r-1}\frac{1}{\left( {\begin{array}{c}n\\ t\end{array}}\right) } \sum _{\begin{array}{c} T\subseteq N \\ |T|=t \end{array}} \sum _{i\in T} p_i + r \right) \\&\quad = \frac{1}{n-1}\left( \frac{1}{n}\sum _{t=s+1}^{n-r-1} t + r \right) \\&\quad = \frac{1}{n-1}\left( \frac{(n-r)(n-r-1) - s(s+1)}{2n} + r \right) \\&\quad = \frac{1}{n-1} \frac{n(n-1) + r(r+1) - s(s+1)}{2n}\\&\quad = \frac{1}{2} + \frac{r(r+1) - s(s+1)}{2n(n-1)}. \end{aligned}$$

    Notice that, when \(r+s=n-1\), the previous expression returns \(r/(n-1)\). So, it is also valid in the case \(r+s=n-1\).

\(\square \)
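
For a concrete illustration of the closed form just obtained (the numbers are ours, not taken from the paper): with \(n=5\), \(r=1\) and \(s=2\),

$$\begin{aligned} {{\,\mathrm{orness}\,}}\big (M_{\varvec{p}}^{(1,2)}\big ) = \frac{1}{2} + \frac{1\cdot 2 - 2\cdot 3}{2\cdot 5\cdot 4} = \frac{1}{2} - \frac{1}{10} = 0.4, \end{aligned}$$

whatever the weighting vector \(\varvec{p}\); and with \(r=3\), \(s=1\) (so that \(r+s=n-1\)) the same expression gives \(\frac{1}{2} + \frac{12-2}{40} = \frac{3}{4} = r/(n-1)\), in agreement with the first case of the proof.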

Proof of Proposition 2

Let \(\varvec{p}\) be a weighting vector, \((r,s)\in \mathcal {R}\), and \(k\in N{\setminus } \{n\}\). Since

$$\begin{aligned} {{\,\mathrm{conj}\,}}_{k}\!\big (M_{\varvec{p}}^{(r,s)}\big ) = 1 - \frac{1}{n-k}\sum _{t=1}^{n-k}\frac{1}{\left( {\begin{array}{c}n\\ t\end{array}}\right) } \sum _{\begin{array}{c} T\subseteq N \\ |T|=t \end{array}}\mu _{\varvec{p}}^{(r,s)}(T), \end{aligned}$$

and \(\mu _{\varvec{p}}^{(r,s)}\) is given by expression (1) (or expression (2) when \(r+s=n-1\)), we distinguish the following cases:

  1.

    If \(n-k \le s\) (or, equivalently, \(k\ge n-s\)), then

    $$\begin{aligned} {{\,\mathrm{conj}\,}}_{k}\!\big (M_{\varvec{p}}^{(r,s)}\big ) = 1. \end{aligned}$$
  2.

    If \(s< n-k < n-r\) (or, equivalently, \(r< k < n-s\)), by Remarks 4 and 3 we have

    $$\begin{aligned} {{\,\mathrm{conj}\,}}_{k}\!\big (M_{\varvec{p}}^{(r,s)}\big )&= 1 - \frac{1}{n-k} \sum _{t=s+1}^{n-k}\frac{1}{\left( {\begin{array}{c}n\\ t\end{array}}\right) } \sum _{\begin{array}{c} T\subseteq N \\ |T|=t \end{array}} \sum _{i\in T} p_{i} = 1 - \frac{1}{n-k} \sum _{t=s+1}^{n-k} \frac{t}{n} \\&= 1 - \frac{1}{n-k} \frac{(n-k)(n-k+1) - s(s+1)}{2n}\\&= 1 - \frac{n-k+1}{2n} + \frac{s(s+1)}{2n(n-k)} \\&= \frac{n+k-1}{2n} + \frac{s(s+1)}{2n(n-k)}\\&= \frac{n(n-1) + s(s+1) - k(k-1)}{2n(n-k)}. \end{aligned}$$
  3.

    If \(n-k \ge n-r\) (or, equivalently, \(k \le r\)), we distinguish two cases:

    (a)

      If \(r+s=n-1\), then

      $$\begin{aligned} {{\,\mathrm{conj}\,}}_{k}\!\big (M_{\varvec{p}}^{(r,s)}\big ) = 1 - \frac{1}{n-k} \sum _{t=n-r}^{n-k} 1 = 1 - \frac{r-k+1}{n-k} = \frac{n-(r+1)}{n-k}. \end{aligned}$$
    (b)

      If \(r+s<n-1\), then

      $$\begin{aligned}&{{\,\mathrm{conj}\,}}_{k}\!\big (M_{\varvec{p}}^{(r,s)}\big )\\&= 1 - \frac{1}{n-k} \left( \sum _{t=s+1}^{n-r-1}\frac{1}{\left( {\begin{array}{c}n\\ t\end{array}}\right) } \sum _{\begin{array}{c} T\subseteq N \\ |T|=t \end{array}} \sum _{i\in T} p_{i} + r-k+1 \right) \\&= 1 - \frac{1}{n-k} \left( \sum _{t=s+1}^{n-r-1} \frac{t}{n} + r-k+1 \right) \\&= 1 - \frac{1}{n-k} \left( \frac{(n-r)(n-r-1)-s(s+1)}{2n} + r-k+1 \right) \\&= 1 - \frac{1}{n-k} \left( \frac{n+1-2k}{2} + \frac{r(r+1)-s(s+1)}{2n} \right) \\&= \frac{n(n-1) + s(s+1) - r(r+1)}{2n(n-k)} . \end{aligned}$$

      Notice also that, when \(r+s=n-1\), the previous expression returns \((n-r-1)/(n-k)\). So, it is also valid in the case \(r+s=n-1\).

\(\square \)
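
By way of illustration (the numbers are ours): with \(n=5\), \(r=1\) and \(s=2\), the cases of the proof give

$$\begin{aligned} {{\,\mathrm{conj}\,}}_{1}\!\big (M_{\varvec{p}}^{(1,2)}\big ) = \frac{20+6-2}{2\cdot 5\cdot 4} = \frac{3}{5}, \qquad {{\,\mathrm{conj}\,}}_{2}\!\big (M_{\varvec{p}}^{(1,2)}\big ) = \frac{20+6-2}{2\cdot 5\cdot 3} = \frac{4}{5}, \qquad {{\,\mathrm{conj}\,}}_{k}\!\big (M_{\varvec{p}}^{(1,2)}\big ) = 1 \text{ for } k\ge 3. \end{aligned}$$

Note that \({{\,\mathrm{conj}\,}}_{1}\big (M_{\varvec{p}}^{(1,2)}\big ) = 3/5 = 1 - {{\,\mathrm{orness}\,}}\big (M_{\varvec{p}}^{(1,2)}\big )\), as is immediate from the two defining expressions at \(k=1\).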

Proof of Proposition 3

Let \(\varvec{p}\) be a weighting vector, \((r,s)\in \mathcal {R}\), and \(k\in N{\setminus } \{n\}\). Since

$$\begin{aligned} {{\,\mathrm{disj}\,}}_{k}\!\big (M_{\varvec{p}}^{(r,s)}\big ) = \frac{1}{n-k}\sum _{t=k}^{n-1}\frac{1}{\left( {\begin{array}{c}n\\ t\end{array}}\right) } \sum _{\begin{array}{c} T\subseteq N \\ |T|=t \end{array}} \mu _{\varvec{p}}^{(r,s)}(T), \end{aligned}$$

and \(\mu _{\varvec{p}}^{(r,s)}\) is given by expression (1) (or expression (2) when \(r+s=n-1\)), we distinguish the following cases:

  1.

    If \(k \ge n-r\), then

    $$\begin{aligned} {{\,\mathrm{disj}\,}}_{k}\!\big (M_{\varvec{p}}^{(r,s)}\big ) = \frac{1}{n-k}\sum _{t=k}^{n-1} 1 = 1. \end{aligned}$$
  2.

    If \(s< k < n-r\), by Remarks 4 and 3 we have

    $$\begin{aligned} {{\,\mathrm{disj}\,}}_{k}\!\big (M_{\varvec{p}}^{(r,s)}\big )&= \frac{1}{n-k} \left( \sum _{t=k}^{n-r-1}\frac{1}{\left( {\begin{array}{c}n\\ t\end{array}}\right) } \sum _{\begin{array}{c} T\subseteq N \\ |T|=t \end{array}} \sum _{i\in T} p_{i} + r \right) \\&= \frac{1}{n-k} \left( \sum _{t=k}^{n-r-1} \frac{t}{n} + r \right) \\&= \frac{1}{n-k} \left( \frac{(n-r)(n-r-1) - k(k-1)}{2n} + r \right) \\&= \frac{n(n-1)+r(r+1)-k(k-1)}{2n(n-k)}. \end{aligned}$$
  3.

    If \(k\le s\), we distinguish two cases:

    (a)

      If \(r+s=n-1\), then

      $$\begin{aligned} {{\,\mathrm{disj}\,}}_{k}\!\big (M_{\varvec{p}}^{(r,s)}\big ) = \frac{1}{n-k}\sum _{t=n-r}^{n-1} 1 = \frac{r}{n-k}. \end{aligned}$$
    (b)

      If \(r+s<n-1\), then

      $$\begin{aligned} {{\,\mathrm{disj}\,}}_{k}\!\big (M_{\varvec{p}}^{(r,s)}\big ) = \frac{1}{n-k} \left( \sum _{t=s+1}^{n-r-1}\frac{1}{\left( {\begin{array}{c}n\\ t\end{array}}\right) } \sum _{\begin{array}{c} T\subseteq N \\ |T|=t \end{array}} \sum _{i\in T} p_{i} + r \right) . \end{aligned}$$

      Notice that the expression in parentheses coincides with that of the second item once the lower summation limit \(k\) is replaced by \(s+1\), the factor \(1/(n-k)\) being unchanged. Therefore,

      $$\begin{aligned} {{\,\mathrm{disj}\,}}_{k}\!\big (M_{\varvec{p}}^{(r,s)}\big ) = \frac{n(n-1) + r(r+1) - s(s+1)}{2n(n-k)}. \end{aligned}$$

      Notice also that, when \(r+s=n-1\), the previous expression returns \(r/(n-k)\). So, it is also valid in the case \(r+s=n-1\).

\(\square \)
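
With the same illustrative parameters (\(n=5\), \(r=1\), \(s=2\); numbers ours), Proposition 3 gives

$$\begin{aligned} {{\,\mathrm{disj}\,}}_{1}\!\big (M_{\varvec{p}}^{(1,2)}\big ) = \frac{20+2-6}{2\cdot 5\cdot 4} = \frac{2}{5}, \qquad {{\,\mathrm{disj}\,}}_{3}\!\big (M_{\varvec{p}}^{(1,2)}\big ) = \frac{20+2-6}{2\cdot 5\cdot 2} = \frac{4}{5}, \qquad {{\,\mathrm{disj}\,}}_{k}\!\big (M_{\varvec{p}}^{(1,2)}\big ) = 1 \text{ for } k\ge 4, \end{aligned}$$

and \({{\,\mathrm{disj}\,}}_{1}\big (M_{\varvec{p}}^{(1,2)}\big ) = 2/5\) coincides with the orness computed after Proposition 1, since both definitions agree at \(k=1\).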

Proof of Proposition 4

Let \(\varvec{p}\) be a weighting vector, \((r,s)\in \mathcal {R}\) and \(j\in N\). Since

$$\begin{aligned} {{\,\mathrm{veto}\,}}\!\big (M_{\varvec{p}}^{(r,s)},j\big ) = 1 - \frac{1}{n-1}\sum _{t=1}^{n-1}\frac{1}{\left( {\begin{array}{c}n-1\\ t\end{array}}\right) } \sum _{\begin{array}{c} T\subseteq N{\setminus } \{j\} \\ |T|=t \end{array}} \mu _{\varvec{p}}^{(r,s)}(T), \end{aligned}$$

and \(\mu _{\varvec{p}}^{(r,s)}\) is given by expression (1) (or expression (2) when \(r+s=n-1\)), we distinguish two cases:

  1.

    If \(r+s=n-1\), then

    $$\begin{aligned} {{\,\mathrm{veto}\,}}\!\big (M_{\varvec{p}}^{(r,s)},j\big ) = 1 - \frac{1}{n-1}\sum _{t=s+1}^{n-1} 1 = 1 - \frac{r}{n-1} = \frac{s}{n-1}. \end{aligned}$$
  2.

    If \(r+s<n-1\), then, by Remarks 5 and 3, we get

    $$\begin{aligned} {{\,\mathrm{veto}\,}}\!\big (M_{\varvec{p}}^{(r,s)},j\big )&= 1 - \frac{1}{n-1}\left( \sum _{t=s+1}^{n-r-1} \frac{1}{\left( {\begin{array}{c}n-1\\ t\end{array}}\right) } \sum _{\begin{array}{c} T\subseteq N{\setminus } \{j\} \\ |T|=t \end{array}} \sum _{i\in T} p_{i} + r \right) \\&= 1 - \frac{1}{n-1}\left( \frac{1-p_{j}}{n-1} \sum _{t=s+1}^{n-r-1} t + r \right) \\&= 1 - \frac{r}{n-1} - \frac{(1-p_{j})\big ((n-r)(n-r-1)-s(s+1)\big )}{2(n-1)^2}. \end{aligned}$$

    Notice that, when \(r+s=n-1\), the previous expression returns \(s/(n-1)\). So, it is also valid in the case \(r+s=n-1\).

\(\square \)
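
For instance (numbers ours), with \(n=5\), \(r=1\), \(s=2\) and \(p_{j}=0.4\),

$$\begin{aligned} {{\,\mathrm{veto}\,}}\!\big (M_{\varvec{p}}^{(1,2)},j\big ) = 1 - \frac{1}{4} - 0.6\,\frac{4\cdot 3 - 2\cdot 3}{2\cdot 4^2} = \frac{3}{4} - \frac{9}{80} = \frac{51}{80}, \end{aligned}$$

so, for these parameters, the veto index grows with the weight \(p_{j}\): heavily weighted sources behave more like vetoers. For \(p_{j}=1/5\) the expression reduces to \(0.6 = 1 - {{\,\mathrm{orness}\,}}\big (M_{\varvec{p}}^{(1,2)}\big )\).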

Proof of Proposition 5

Let \(\varvec{p}\) be a weighting vector, \((r,s)\in \mathcal {R}\) and \(j\in N\). Since

$$\begin{aligned} {{\,\mathrm{favor}\,}}\!\big (M_{\varvec{p}}^{(r,s)},j\big ) = \frac{1}{n-1}\sum _{t=0}^{n-1} \frac{1}{\left( {\begin{array}{c}n-1\\ t\end{array}}\right) } \sum _{\begin{array}{c} T\subseteq N{\setminus } \{j\} \\ |T|=t \end{array}} \mu _{\varvec{p}}^{(r,s)}(T\cup \{j\}) - \frac{1}{n-1}, \end{aligned}$$

and \(\mu _{\varvec{p}}^{(r,s)}\) is given by expression (1) (or expression (2) when \(r+s=n-1\)), we distinguish two cases:

  1.

    If \(r+s=n-1\), then

    $$\begin{aligned} {{\,\mathrm{favor}\,}}\!\big (M_{\varvec{p}}^{(r,s)},j\big ) = \frac{1}{n-1}\sum _{t=s}^{n-1} 1 - \frac{1}{n-1} = \frac{n-s-1}{n-1}= \frac{r}{n-1}. \end{aligned}$$
  2.

    If \(r+s<n-1\), then, by Remarks 5 and 3, we get

    $$\begin{aligned}&{{\,\mathrm{favor}\,}}\!\big (M_{\varvec{p}}^{(r,s)},j\big )\\&\quad = \frac{1}{n-1} \left( \sum _{t=s}^{n-r-2} \frac{1}{\left( {\begin{array}{c}n-1\\ t\end{array}}\right) } \sum _{\begin{array}{c} T\subseteq N{\setminus } \{j\} \\ |T|=t \end{array}} \left( \sum _{i\in T} p_{i} + p_{j}\right) + r + 1 \right) - \frac{1}{n-1}\\&\quad = \frac{1}{n-1} \left( \frac{1-p_{j}}{n-1} \sum _{t=s}^{n-r-2} t + p_{j}(n-r-s-1) + r \right) \\&\quad = \frac{(1-p_{j})\big ((n-r-2)(n-r-1)-s(s-1) + 2r(n-1) \big )}{2(n-1)^2}\\&\qquad + \frac{p_{j}(n-s-1)}{n-1}\\&\quad = (1-p_{j}) \frac{(n-1)(n-2) + r(r+1) - s(s-1)}{2(n-1)^2} + p_{j}\left( 1 - \frac{s}{n-1}\right) . \end{aligned}$$

    Notice that, when \(r+s=n-1\), the previous expression returns \(r/(n-1)\). So, it is also valid in the case \(r+s=n-1\).

\(\square \)
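
With the same illustrative parameters (\(n=5\), \(r=1\), \(s=2\), \(p_{j}=0.4\); numbers ours),

$$\begin{aligned} {{\,\mathrm{favor}\,}}\!\big (M_{\varvec{p}}^{(1,2)},j\big ) = 0.6\,\frac{4\cdot 3 + 2 - 2}{2\cdot 4^2} + 0.4\left( 1-\frac{2}{4}\right) = \frac{9}{40} + \frac{1}{5} = \frac{17}{40}, \end{aligned}$$

and for \(p_{j}=1/5\) the expression reduces to \(0.4 = {{\,\mathrm{orness}\,}}\big (M_{\varvec{p}}^{(1,2)}\big )\).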

Proof of Proposition 6

Let \(\varvec{p}\) be a weighting vector, \((r,s)\in \mathcal {R}\) and \(j\in N\). Since

$$\begin{aligned} \phi _{j}\big (\mu _{\varvec{p}}^{(r,s)}\big )&= \frac{1}{n}\sum _{t=0}^{n-1}\frac{1}{\left( {\begin{array}{c}n-1\\ t\end{array}}\right) } \sum _{\begin{array}{c} T\subseteq N{\setminus } \{j\} \\ |T|=t \end{array}} \mu _{\varvec{p}}^{(r,s)}(T\cup \{j\})\\&\quad - \frac{1}{n}\sum _{t=0}^{n-1}\frac{1}{\left( {\begin{array}{c}n-1\\ t\end{array}}\right) } \sum _{\begin{array}{c} T\subseteq N{\setminus } \{j\} \\ |T|=t \end{array}} \mu _{\varvec{p}}^{(r,s)}(T). \end{aligned}$$

and \(\mu _{\varvec{p}}^{(r,s)}\) is given by expression (1) (or expression (2) when \(r+s=n-1\)), we distinguish two cases:

  1.

    If \(r+s=n-1\), then

    $$\begin{aligned} \phi _{j}\big (\mu _{\varvec{p}}^{(r,s)}\big ) = \frac{1}{n}\sum _{t=s}^{n-1} 1 - \frac{1}{n}\sum _{t=s+1}^{n-1} 1 = \frac{1}{n}. \end{aligned}$$
  2.

    If \(r+s<n-1\), then, by Remark 5, we have

    $$\begin{aligned} \phi _{j}\big (\mu _{\varvec{p}}^{(r,s)}\big )&= \frac{1}{n} \left( \sum _{t=s}^{n-r-2}\frac{1}{\left( {\begin{array}{c}n-1\\ t\end{array}}\right) } \sum _{\begin{array}{c} T\subseteq N{\setminus } \{j\} \\ |T|=t \end{array}} \left( p_{j} + \sum _{i\in T} p_i\right) + r + 1 \right) \\&\quad - \frac{1}{n} \left( \sum _{t=s+1}^{n-r-1}\frac{1}{\left( {\begin{array}{c}n-1\\ t\end{array}}\right) } \sum _{\begin{array}{c} T\subseteq N{\setminus } \{j\} \\ |T|=t \end{array}} \sum _{i\in T} p_i + r \right) \\&= \frac{1}{n} \left( (n-r-s-1)p_{j} + \frac{1-p_{j}}{n-1} (s-(n-r-1)) + 1 \right) \\&= \frac{1}{n}\left( \frac{s+r + n(n-r-s-1)p_j}{n-1}\right) \\&= \frac{r+s}{n-1}\,\frac{1}{n} + \frac{n-1-r-s}{n-1} p_{j}\\&= \frac{r+s}{n-1}\,\frac{1}{n} + \left( 1 - \frac{r+s}{n-1}\right) p_{j}. \end{aligned}$$

    Notice that, when \(r+s=n-1\), the previous expression returns \(1/n\). So, it is also valid in the case \(r+s=n-1\).

\(\square \)
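
The closed form just obtained can be read as a convex combination: writing \(\lambda = (r+s)/(n-1)\), the Shapley value is \(\lambda \cdot \frac{1}{n} + (1-\lambda )\, p_{j}\), so the more values are Winsorized the closer the Shapley values get to the uniform value \(1/n\), while with little Winsorization they stay close to the original weights. For instance (numbers ours), with \(n=5\), \(r=1\), \(s=2\) and \(p_{j}=0.4\),

$$\begin{aligned} \phi _{j}\big (\mu _{\varvec{p}}^{(1,2)}\big ) = \frac{3}{4}\cdot \frac{1}{5} + \frac{1}{4}\cdot 0.4 = 0.25. \end{aligned}$$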

Proof of Proposition 7

Let \(\varvec{p}\) be a weighting vector, \((r,s)\in \mathcal {R}\) and \(j,k\in N\). Notice that \(I_{jk}\big (\mu _{\varvec{p}}^{(r,s)}\big )\) can be written as

$$\begin{aligned} I_{jk}\big (\mu _{\varvec{p}}^{(r,s)}\big ) = \frac{1}{n-1}\left( I_{\{j,k\}} - I_{\{j\}} - I_{\{k\}} + I_{\varnothing }\right) , \end{aligned}$$

where \(I_{K}\), \(K \subseteq N\), is defined by

$$\begin{aligned} I_{K} = \sum _{t=0}^{n-2}\frac{1}{\left( {\begin{array}{c}n-2\\ t\end{array}}\right) } \sum _{\begin{array}{c} T\subseteq N{\setminus } \{j,k\} \\ |T|=t \end{array}} \mu _{\varvec{p}}^{(r,s)}(T\cup K). \end{aligned}$$

Since \(\mu _{\varvec{p}}^{(r,s)}\) is given by expression (1) (or expression (2) when \(r+s=n-1\)), we distinguish two cases:

  1.

    If \(r+s=n-1\), then it is easy to check that \(I_{\{j,k\}}, I_{\{j\}}, I_{\{k\}}\), and \(I_{\varnothing }\) take the following values:

    $$\begin{aligned} I_{\{j,k\}}= & {} {\left\{ \begin{array}{ll} \sum \nolimits _{t=0}^{n-2} 1 = n-1 &{} \text {if } s=0, \\ \sum \nolimits _{t=s-1}^{n-2} 1 = n - s&{} \text {otherwise}, \end{array}\right. } \\ I_{\{j\}}= & {} I_{\{k\}} = \sum \limits _{t=s}^{n-2} 1 = n - 1 - s, \\ I_{\varnothing }= & {} {\left\{ \begin{array}{ll} \sum \nolimits _{t=s+1}^{n-2} 1 = n-2-s &{} \text {if } s< n-1, \\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

    We now calculate \(I_{jk}\big (\mu _{\varvec{p}}^{(r,s)}\big )\) taking into account the different values of \(I_{\{j,k\}}\), \(I_{\{j\}}\), \(I_{\{k\}}\), and \(I_{\varnothing }\). We distinguish three cases:

    (a)

      If \(s=0\), which is equivalent to \(r=n-1\), we get

      $$\begin{aligned} I_{jk}\big (\mu _{\varvec{p}}^{(r,s)}\big ) = \frac{1}{n-1} (n-1 -2(n-1)+n-2) = - \frac{1}{n-1}. \end{aligned}$$
    (b)

      If \(0< s < n-1\) we have

      $$\begin{aligned} I_{jk}\big (\mu _{\varvec{p}}^{(r,s)}\big ) = \frac{1}{n-1} (n-s -2(n-1-s) + n-2-s) = 0. \end{aligned}$$
    (c)

      If \(s=n-1\), which is equivalent to \(r=0\), we obtain

      $$\begin{aligned} I_{jk}\big (\mu _{\varvec{p}}^{(r,s)}\big ) = \frac{1}{n-1}. \end{aligned}$$

      Therefore,

      $$\begin{aligned} I_{jk}\big (\mu _{\varvec{p}}^{(r,s)}\big ) = {\left\{ \begin{array}{ll} \frac{1}{n-1} &{} \text {if } r=0, \\ -\frac{1}{n-1} &{} \text {if } r=n-1, \\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
      (4)
  2.

      If \(r+s<n-1\), then, by Remark 6, we can see that \(I_{\{j,k\}}\), \(I_{\{j\}}\), \(I_{\{k\}}\), and \(I_{\varnothing }\) take the following values:

      $$\begin{aligned} I_{\{j,k\}}= & {} {\left\{ \begin{array}{ll} \sum \nolimits _{t=0}^{n-2} 1 = n-1 &{} \text {if } s=0 \text { and } r=n-2,\\ \sum \nolimits _{t=0}^{n-r-3} \left( p_{j} + p_{k} + \frac{t(1-p_{j}-p_{k})}{n-2}\right) + r + 1 &{} \text {if } s=0 \text { and } r< n-2,\\ \sum \nolimits _{t=s-1}^{n-r-3} \left( p_{j} + p_{k} + \frac{t(1-p_{j}-p_{k})}{n-2}\right) + r + 1&{} \text {otherwise}, \end{array}\right. } \\ I_{\{j\}}= & {} \sum _{t=s}^{n-r-2} \left( p_{j} + \frac{t(1-p_{j}-p_{k})}{n-2}\right) + r, \\ I_{\{k\}}= & {} \sum _{t=s}^{n-r-2} \left( p_{k} + \frac{t(1-p_{j}-p_{k})}{n-2}\right) + r, \\ I_{\varnothing }= & {} {\left\{ \begin{array}{ll} 0 &{} \text {if } r=0 \text { and } s=n-2,\\ \sum \nolimits _{t=s+1}^{n-2} \frac{t(1-p_{j}-p_{k})}{n-2} &{} \text {if } r=0 \text { and } s < n-2,\\ \sum \nolimits _{t=s+1}^{n-r-1} \frac{t(1-p_{j}-p_{k})}{n-2} + r - 1&{} \text {otherwise}, \end{array}\right. } \end{aligned}$$

      We now calculate \(I_{jk}\big (\mu _{\varvec{p}}^{(r,s)}\big )\) taking into account the different values of \(I_{\{j,k\}}\), \(I_{\{j\}}\), \(I_{\{k\}}\), and \(I_{\varnothing }\). For instance, when \(s=0\) and \(r=n-2\) we get

      $$\begin{aligned}&I_{jk}\big (\mu _{\varvec{p}}^{(r,s)}\big )\\&\quad = \frac{1}{n-1}\left( n-1 - p_{j} - (n-2) - p_{k} - (n-2) + \frac{1-p_{j}-p_{k}}{n-2} + n-3\right) \\&\quad = \frac{1}{n-1}\frac{1-(n-1)(p_{j}+p_{k})}{n-2} = -\frac{1}{n-2}\left( p_{j}+p_{k} -\frac{1}{n-1}\right) . \end{aligned}$$

      Once all the cases have been analyzed (to avoid tedious calculations, the remaining cases are left to the reader), we have

      $$\begin{aligned} I_{jk}\big (\mu _{\varvec{p}}^{(r,s)}\big ) = {\left\{ \begin{array}{ll} \frac{1}{n-2}\left( p_{j}+p_{k} -\frac{1}{n-1}\right) &{} \text {if } r = 0 \text { and } 0< s< n-1,\\ -\frac{1}{n-2}\left( p_{j}+p_{k} -\frac{1}{n-1}\right) &{} \text {if } s = 0 \text { and } 0< r < n-1,\\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
      (5)

Expressions (4) and (5) together establish the truth of Proposition 7. \(\square \)
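
As an illustration of Proposition 7 (numbers ours): with \(n=5\), \(r=0\), \(s=2\), \(p_{j}=0.3\) and \(p_{k}=0.2\),

$$\begin{aligned} I_{jk}\big (\mu _{\varvec{p}}^{(0,2)}\big ) = \frac{1}{3}\left( 0.3+0.2-\frac{1}{4}\right) = \frac{1}{12}, \end{aligned}$$

whereas for the same weights with \(s=0\) and \(r=2\) the interaction index is \(-1/12\), and it vanishes whenever \(r\) and \(s\) are both positive.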

Proof of Proposition 8

Let \(\varvec{p}\) be a weighting vector and \((r,s)\in \mathcal {R}\). Given \(A\subseteq N\), since

$$\begin{aligned} m^{\mu _{\varvec{p}}^{(r,s)}}(A) = \sum _{t=1}^{|A|} (-1)^{|A|-t} \sum _{\begin{array}{c} T\subseteq A \\ |T|=t \end{array}} \mu _{\varvec{p}}^{(r,s)}(T), \end{aligned}$$

and \(\mu _{\varvec{p}}^{(r,s)}\) is given by expression (1) (or expression (2) when \(r+s=n-1\)), we distinguish the following cases:

  1.

    If \(|A| \le s\), then

    $$\begin{aligned} m^{\mu _{\varvec{p}}^{(r,s)}}(A) = 0. \end{aligned}$$
  2.

    If \(s< |A| < n-r\), by Remark 4 and the third item of Remark 2, we get

    $$\begin{aligned}&m^{\mu _{\varvec{p}}^{(r,s)}}(A) \\&\quad = \sum _{t=s+1}^{|A|} (-1)^{|A|-t} \sum _{\begin{array}{c} T\subseteq A \\ |T|=t \end{array}} \sum _{i\in T} p_{i} = \sum _{t=s+1}^{|A|} (-1)^{|A|-t} \left( {\begin{array}{c}|A|-1\\ t-1\end{array}}\right) \sum _{i\in A} p_{i}\\&\quad =\! \left( \sum _{i\in A} p_{i} \!\!\right) \!\sum _{t=s+1}^{|A|} \!(-1)^{|A|-t} \left( {\begin{array}{c}|A|-1\\ |A|-t\end{array}}\right) = \left( \sum _{i\in A} p_{i} \!\right) \!\sum _{j=0}^{|A|-s-1} \!(-1)^{j} \left( {\begin{array}{c}|A|-1\\ j\end{array}}\right) \\&\quad =\! \left( \sum _{i\in A} p_{i}\!\! \right) \! (-1)^{|A|-s-1} \!\left( {\begin{array}{c}|A|-2\\ |A|-s-1\end{array}}\right) = (-1)^{|A|-s-1} \!\left( {\begin{array}{c}|A|-2\\ s-1\end{array}}\right) \!\! \left( \sum _{i\in A} p_{i} \!\!\right) \!\!. \end{aligned}$$

    Notice that when \(s=0\) we have \(m^{\mu _{\varvec{p}}^{(r,s)}}(A) = 0\).

  3.

    If \(|A| \ge n-r\), we distinguish two cases:

    (a)

      If \(r+s=n-1\), then, by the third item of Remark 2 we have

      $$\begin{aligned} m^{\mu _{\varvec{p}}^{(r,s)}}(A)&= \sum _{t=s+1}^{|A|} (-1)^{|A|-t} \left( {\begin{array}{c}|A|\\ t\end{array}}\right) = \sum _{t=s+1}^{|A|} (-1)^{|A|-t} \left( {\begin{array}{c}|A|\\ |A|-t\end{array}}\right) \\&= \sum _{j=0}^{|A|-s-1} (-1)^{j} \left( {\begin{array}{c}|A|\\ j\end{array}}\right) = (-1)^{|A|-s-1} \left( {\begin{array}{c}|A|-1\\ |A|-s-1\end{array}}\right) \\&= (-1)^{|A|-s-1} \left( {\begin{array}{c}|A|-1\\ s\end{array}}\right) . \end{aligned}$$
    (b)

      If \(r+s<n-1\), by Remark 4 and the third item of Remark 2, we have

      $$\begin{aligned}&m^{\mu _{\varvec{p}}^{(r,s)}}(A)\\&\quad = \sum _{t=s+1}^{n-r-1} (-1)^{|A|-t} \left( {\begin{array}{c}|A|-1\\ t-1\end{array}}\right) \sum _{i\in A} p_{i} + \sum _{t=n-r}^{|A|} (-1)^{|A|-t} \left( {\begin{array}{c}|A|\\ t\end{array}}\right) \\&\quad = \left( \sum _{i\in A} p_{i} \right) \sum _{t=s+1}^{n-r-1} (-1)^{|A|-t} \left( {\begin{array}{c}|A|-1\\ |A|-t\end{array}}\right) + \sum _{t=n-r}^{|A|} (-1)^{|A|-t} \left( {\begin{array}{c}|A|\\ |A|-t\end{array}}\right) \\&\quad = \left( \sum _{i\in A} p_{i} \right) \sum _{j=|A|-n+r+1}^{|A|-s-1} (-1)^{j} \left( {\begin{array}{c}|A|-1\\ j\end{array}}\right) + \sum _{j=0}^{|A|-n+r} (-1)^{j} \left( {\begin{array}{c}|A|\\ j\end{array}}\right) \\&\quad = \left( \sum _{i\in A} p_{i} \right) \left( (-1)^{|A|-s-1} \left( {\begin{array}{c}|A|-2\\ |A|-s-1\end{array}}\right) \right. \\&\qquad \left. - (-1)^{|A|-n+r} \left( {\begin{array}{c}|A|-2\\ |A|-n+r\end{array}}\right) \right) + (-1)^{|A|-n+r} \left( {\begin{array}{c}|A|-1\\ |A|-n+r\end{array}}\right) \\&\quad = \left( \sum _{i\in A} p_{i} \right) \left( (-1)^{|A|-s-1} \left( {\begin{array}{c}|A|-2\\ s-1\end{array}}\right) - (-1)^{|A|-n+r} \left( {\begin{array}{c}|A|-2\\ n-r-2\end{array}}\right) \right) \\&\qquad + (-1)^{|A|-n+r} \left( {\begin{array}{c}|A|-1\\ n-r-1\end{array}}\right) . \end{aligned}$$

      Notice also that, when \(r+s=n-1\), the previous expression returns \((-1)^{|A|-s-1} \left( {\begin{array}{c}|A|-1\\ s\end{array}}\right) \). So, it is also valid in the case \(r+s=n-1\).

\(\square \)
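
The closed-form Möbius transform can be checked numerically against its defining relation \(\mu _{\varvec{p}}^{(r,s)}(A) = \sum _{T\subseteq A} m^{\mu _{\varvec{p}}^{(r,s)}}(T)\). The snippet below (ours, purely illustrative) uses the capacity in the form employed throughout these proofs and a case with \(s\ge 1\), so that no degenerate binomial coefficients appear, and performs the check for every \(A\subseteq N\):

```python
# Sanity check of Proposition 8: the closed-form Moebius transform must sum
# back to the capacity over every subset.  All names are ours.
from itertools import combinations
from math import comb, isclose
import random

def binom(a, b):
    return comb(a, b) if 0 <= b <= a else 0        # 0 outside the usual range

def capacity(T, p, r, s):
    n, t = len(p), len(T)
    if t <= s:
        return 0.0
    if t >= n - r:
        return 1.0
    return sum(p[i] for i in T)

def mobius(A, p, r, s):
    n, a, w = len(p), len(A), sum(p[i] for i in A)
    if a <= s:
        return 0.0
    if a < n - r:
        return (-1) ** (a - s - 1) * binom(a - 2, s - 1) * w
    return (w * ((-1) ** (a - s - 1) * binom(a - 2, s - 1)
                 - (-1) ** (a - n + r) * binom(a - 2, n - r - 2))
            + (-1) ** (a - n + r) * binom(a - 1, n - r - 1))

n, r, s = 5, 1, 2
raw = [random.random() for _ in range(n)]
p = [v / sum(raw) for v in raw]
for size in range(n + 1):
    for A in combinations(range(n), size):
        total = sum(mobius(T, p, r, s)
                    for t in range(1, size + 1) for T in combinations(A, t))
        assert isclose(total, capacity(A, p, r, s), abs_tol=1e-12)
```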

Cite this article

Llamazares, B. An Analysis of Winsorized Weighted Means. Group Decis Negot 28, 907–933 (2019). https://doi.org/10.1007/s10726-019-09623-8
