
Inequality measurement with coarse data

  • Original Paper
  • Published in Social Choice and Welfare

Abstract

Measuring inequality is a challenging task, particularly when data are collected in a coarse manner. This paper proposes a new approach to measuring inequality that considers all possible income values and avoids arbitrary statistical assumptions. Specifically, it suggests that two income distributions be considered when measuring inequality: one assigning each individual the highest possible income and the other assigning each individual the lowest possible income. An inequality index is applied to each of these distributions, and a weighted average of the two resulting indices gives the final inequality index. This approach provides more accurate measures of inequality while avoiding arbitrary statistical assumptions. The paper focuses on two special cases based on social welfare functions, the Atkinson index and the Gini index, which are widely used in the literature on inequality.

Fig. 1
Fig. 2


Notes

  1. We refer to Cowell (2011) for a survey of various inequality indices.

  2. Income bands group individuals or households into broad categories based on their reported income levels. I am grateful to a referee for suggesting this example.

  3. Consult the website of the United States Census Bureau for further details.

  4. I am grateful to an editor for suggesting this example.

  5. An \(n\times n\) matrix M with nonnegative entries is called a bistochastic matrix of order n if each of its rows and columns sums to unity.

  6. For every pair of deterministic allocation profiles f, g, the distance between f and g, written d(f, g), is induced by a natural topology on \({\mathbb {R}}^n\). Therefore, the set of allocation profiles \({\mathcal {F}}\) can be equipped with the Hausdorff distance in the following way: for \(F,G\in {\mathcal {F}}\),

    $$\begin{aligned} \text {dist}(F,G)=\max \big \{\max _{f\in F}\min _{g\in G} d(f,g),\max _{g\in G}\min _{f\in F}d(f,g)\big \}. \end{aligned}$$
  7. The classic definitions of Lorenz domination, such as those of Atkinson (1970) and Dasgupta et al. (1973), assume that the compared profiles have the same mean, which does not fit our setting. Therefore, our definition is an extension of their concept, which Shorrocks (1980) refers to as generalized Lorenz dominance.

  8. We refer to chapter 2 of Moulin (1991) for a discussion of two classic indices developed from the AKS approach.

  9. We refer to Blackorby and Donaldson (1980) for a detailed discussion of relative indices.
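The Hausdorff distance in footnote 6 is straightforward to compute for finite sets of deterministic profiles. A minimal Python sketch, taking the Euclidean metric for d (the two example profile sets are hypothetical):

```python
from math import dist

def hausdorff(F, G):
    """Hausdorff distance between two finite sets of allocation profiles,
    with d(f, g) the Euclidean distance on R^n."""
    # Directed distances in both directions, then the larger of the two.
    forward = max(min(dist(f, g) for g in G) for f in F)
    backward = max(min(dist(f, g) for f in F) for g in G)
    return max(forward, backward)

F = [(1.0, 2.0), (2.0, 3.0)]  # two possible deterministic profiles
G = [(1.0, 2.0)]
print(hausdorff(F, G))  # 1.4142..., driven by the profile (2, 3)
```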


Author information


Corresponding author

Correspondence to Xiangyu Qu.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix: Proofs

A Proof of Section 2

1.1 A.1 Proof of Proposition 1

To prove Proposition 1, suppose that function \(I:{\mathcal {F}}\rightarrow {\mathbb {R}}\) is defined as in eq. (1) and the associated SWF W is regular.

Proof of (i)

By definition, for \(F\in {\mathcal {F}}\),

$$\begin{aligned}I(F)=\alpha _F I(\overline{F})+(1-\alpha _F)I(\underline{F}).\end{aligned}$$

Since \(\alpha _F\in [0,1]\), it is immediate to see that \(\min \{I(\overline{F}),I(\underline{F})\}\le I(F)\le \max \{I(\overline{F}),I(\underline{F})\}\).

Proof of (ii)

For \(F\in {\mathcal {F}}\), \(W(F)=W(\xi (F)\cdot \mathbbm {1})\). By continuity of W, \(\xi \) is also a continuous function. By monotonicity of W, for any \(F,G\in {\mathcal {F}}\),

$$\begin{aligned}W(F)\ge W(G)\Longleftrightarrow \xi (F)\ge \xi (G).\end{aligned}$$

Since W is Schur concave with respect to deterministic profiles, for any bistochastic matrix M of order n,

$$\begin{aligned}W(fM)\ge W(f).\end{aligned}$$

For any \(c>0\), \(W(c\cdot \mathbbm {1})=c\). So, we have

$$\begin{aligned}W(fM)=\xi (fM)\ge W(f)=\xi (f).\end{aligned}$$

Note that \(\mu (fM)=\mu (f)\). Therefore,

$$\begin{aligned} I(fM)&=1-\frac{\xi (fM)}{\mu (fM)}\\&=1-\frac{\xi (fM)}{\mu (f)}\\&\le 1-\frac{\xi (f)}{\mu (f)}\\&=I(f) \end{aligned}$$
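The inequality just established, that post-multiplying a deterministic profile by a bistochastic matrix never increases the index, can be checked numerically. A minimal sketch using the Gini index as a concrete index of this form; the profile f and the Birkhoff-style random-matrix construction are illustrative choices:

```python
import random

def gini(f):
    # Gini index via the mean-absolute-difference formula,
    # I(f) = sum_i sum_j |f_i - f_j| / (2 n^2 mu(f)).
    n = len(f)
    mu = sum(f) / n
    return sum(abs(x - y) for x in f for y in f) / (2 * n * n * mu)

def random_bistochastic(n, steps=200, seed=0):
    # Birkhoff: a convex combination of permutation matrices is bistochastic.
    rng = random.Random(seed)
    M = [[0.0] * n for _ in range(n)]
    weights = [rng.random() for _ in range(steps)]
    total = sum(weights)
    for w in weights:
        perm = rng.sample(range(n), n)  # a random permutation
        for i, j in enumerate(perm):
            M[i][j] += w / total
    return M

f = [1.0, 4.0, 10.0, 25.0]
M = random_bistochastic(len(f))
# Row vector times matrix: (fM)_j = sum_i f_i M_ij; the mean is preserved.
fM = [sum(f[i] * M[i][j] for i in range(len(f))) for j in range(len(f))]
print(gini(fM) <= gini(f) + 1e-12)  # True: smoothing reduces the index
```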

Proof of (iii)

Consider bistochastic matrix \({\hat{M}}=(m_{ij})\) with \(m_{ij}=1/n\) for all \(i,j\in {\mathcal {N}}\). Then for any \(f\in {\mathcal {F}}\), Schur concavity implies that

$$\begin{aligned}W(f{\hat{M}})=\mu (f)\ge W(f).\end{aligned}$$

Therefore, for \(F\in {\mathcal {F}}\), we have

$$\begin{aligned}0\le \frac{\xi (\overline{F})}{\mu (\overline{F})},\frac{\xi (\underline{F})}{\mu (\underline{F})}\le 1.\end{aligned}$$

Therefore, according to the definition of I,

$$\begin{aligned}0\le I(\overline{F}),I(\underline{F})\le 1.\end{aligned}$$

So, for any \(\alpha _F\in [0,1]\),

$$\begin{aligned}I(F)=\alpha _F I(\overline{F})+(1-\alpha _F)I(\underline{F})\in [0,1].\end{aligned}$$

Furthermore, if \(I(F)=0\), then \(I(\overline{F})=I(\underline{F})=0\). This means that \(\xi (F)=\mu (F)\), which leads to \(F=c\cdot \mathbbm {1}\) for some \(c>0\). Conversely, if \(F=c\cdot \mathbbm {1}\), we have \(\xi (F)=\mu (F)\), which leads to \(I(F)=0\).
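The robust index of Proposition 1, \(I(F)=\alpha _F I(\overline{F})+(1-\alpha _F)I(\underline{F})\), can be made concrete. A minimal Python sketch using the Gini index as the underlying index I; the income bands and the weight \(\alpha =0.5\) are hypothetical:

```python
def gini(f):
    # Gini index via the mean-absolute-difference formula.
    n = len(f)
    mu = sum(f) / n
    return sum(abs(x - y) for x in f for y in f) / (2 * n * n * mu)

def robust_index(F, alpha=0.5, index=gini):
    # F is a coarse profile: one set of possible incomes per individual.
    upper = [max(Y) for Y in F]   # best-case profile,  F-bar
    lower = [min(Y) for Y in F]   # worst-case profile, F-underline
    return alpha * index(upper) + (1 - alpha) * index(lower)

# Hypothetical income bands for three individuals.
F = [{18_000, 25_000}, {25_000, 35_000}, {60_000, 75_000}]
print(robust_index(F, alpha=0.5))
```

By property (i), the resulting value always lies between the indices of the best- and worst-case profiles.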

1.2 A.2 Proof of Proposition 2

Since the proof of the necessity part is straightforward, we only prove the sufficiency part. Suppose \(F\succsim _L G\). By definition, for all \(f\in F\) and \(g\in G\), \(f\succsim _L g\). Lorenz dominance requires that for all \(k=1,2,\ldots ,n\),

$$\begin{aligned}\frac{1}{n\mu (f)}\sum ^k_{i=1}{\tilde{f}}_i\ge \frac{1}{n\mu (g)}\sum ^k_{i=1}{\tilde{g}}_i.\end{aligned}$$

Since \(\mu (\underline{F})=\min _{f\in F}\mu (f)\ge \max _{g\in G}\mu (g)=\mu (\overline{G})\), we have

$$\begin{aligned}\frac{\mu (\underline{F})}{n\mu (f)}\sum ^k_{i=1}{\tilde{f}}_i\ge \frac{\mu (\overline{G})}{n\mu (g)}\sum ^k_{i=1}{\tilde{g}}_i,\end{aligned}$$

which implies

$$\begin{aligned}\frac{1}{n}\sum ^k_{i=1}{\tilde{f}}_i\ge \frac{1}{n}\sum ^k_{i=1}{\tilde{g}}_i.\end{aligned}$$

Now, according to Marshall et al. (1979, p. 64), there must exist a bistochastic matrix M such that \({\tilde{f}}\ge {\tilde{g}}M\). Then, monotonicity of W implies \(W({\tilde{f}})\ge W({\tilde{g}}M)\). Furthermore, Schur concavity implies that \(W({\tilde{g}}M)\ge W({\tilde{g}})\). Notice that Schur concavity implies symmetry; hence, \(W({\tilde{f}})=W(f)\) and \(W({\tilde{g}})=W(g)\). As a result, \(W(f)\ge W(g)\). Since this inequality holds for any \(f\in F\) and \(g\in G\),

$$\begin{aligned}\min _{f\in F}W(f)\ge \max _{g\in G}W(g).\end{aligned}$$

Monotonicity requires that \(W(F)\ge \min _{f\in F}W(f)\) and \(\max _{g\in G}W(g)\ge W(G)\), which implies \(W(F)\ge W(G)\).
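The dominance test behind Proposition 2 amounts to a direct computation: check the mean condition \(\mu (\underline{F})\ge \mu (\overline{G})\) and then Lorenz dominance for every pair of selections. A minimal Python sketch; the example profiles and the 1e-12 tolerance are illustrative choices:

```python
def lorenz_dominates(f, g):
    # Classic (relative) Lorenz dominance between deterministic profiles:
    # cumulative income shares of f lie weakly above those of g.
    n = len(f)
    fs, gs = sorted(f), sorted(g)
    mf, mg = sum(f), sum(g)
    cf = cg = 0.0
    for k in range(n):
        cf += fs[k] / mf
        cg += gs[k] / mg
        if cf < cg - 1e-12:
            return False
    return True

def coarse_dominates(F, G):
    # The coarse comparison used in Proposition 2: every selection from F
    # Lorenz-dominates every selection from G, and mu(F_underline) >= mu(G_bar).
    mean = lambda f: sum(f) / len(f)
    if min(mean(f) for f in F) < max(mean(g) for g in G):
        return False
    return all(lorenz_dominates(f, g) for f in F for g in G)

F = [(4.0, 5.0, 6.0), (5.0, 5.0, 5.0)]
G = [(1.0, 3.0, 8.0), (2.0, 3.0, 7.0)]
print(coarse_dominates(F, G))  # True
```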

1.3 A.3 Proof of Proposition 3

We first show the necessity part: suppose that I is a relative index. By definition, we have

$$\begin{aligned} \xi (F)&=\lambda _F\xi (\overline{F})+(1-\lambda _F)\xi (\underline{F})\\&=\lambda _F\mu (\overline{F})(1-I(\overline{F}))+(1-\lambda _F)\mu (\underline{F})(1-I(\underline{F})) \end{aligned}$$

Since the index I is homogeneous of degree zero, linear homogeneity of the mean \(\mu \) implies linear homogeneity of \(\xi \). Moreover,

$$\begin{aligned} W(F)&=W(\xi (F)\cdot \mathbbm {1})\\&=\Phi (\xi (F)), \end{aligned}$$

where \(\Phi \) is increasing in its argument. Hence, W is homothetic.

Now we show the sufficiency part: suppose that W is homothetic. Then, there exist an increasing function \(\Phi \) and a linearly homogeneous function \({\hat{W}}\) such that for \(F\in {\mathcal {F}}\),

$$\begin{aligned}W(F)=\Phi ({\hat{W}}(F)).\end{aligned}$$

Since \({\hat{W}}\) is linearly homogeneous, we have

$$\begin{aligned}\xi (F)=\frac{{\hat{W}}(F)}{{\hat{W}}(\mathbbm {1})}.\end{aligned}$$

Therefore, \(\xi \) is also linearly homogeneous. Since \(\mu \) is also linearly homogeneous, the robust index I defined above becomes homogeneous of degree zero. Thus, I is a relative index.

B Proof of Section 3

1.1 B.1 Proof of Proposition 4

The necessity part is straightforward. We only prove the sufficiency part. Suppose \(\succsim \) satisfies A1-4 and A6.

First, restrict \(\succsim \) to the set of deterministic profiles \( X^n\). Since X is connected and separable and \(\succsim \) satisfies the conditions of Debreu's (1960) theorem on additively separable representations, there exist continuous functions \(u_i: X\rightarrow {\mathbb {R}}\) such that the sum of the \(u_i\) represents \(\succsim \). Symmetry further requires that the \(u_i\) be identical. Therefore, there is a continuous function \(u: X\rightarrow {\mathbb {R}}\) such that \(f\succsim g\Leftrightarrow \sum _{i=1}^nu(f_i)\ge \sum _{i=1}^nu(g_i)\). Furthermore, A4 unanimity implies that u is increasing on X.

Now, we extend u from the domain X to \({\mathcal {X}}\) in the following way. For \(Y\in {\mathcal {X}}\) and \(c\in X\), we define \(u(Y)=u(c)\) if \(F\sim f\) whenever \(F_i= Y\) and \(f_i=c\) for all i. Since Y is compact, there exist \(a,b\in X\) such that \(a\ge Y\ge b\). Unanimity implies that the equally distributed profiles are ranked accordingly: \((a,\ldots , a)\succsim (Y,\ldots ,Y)\succsim (b,\ldots ,b)\). Therefore, by continuity, there exists a unique c such that \((c,\ldots ,c)\sim (Y,\ldots ,Y)\). Hence, u on \({\mathcal {X}}\) is well defined.

Pick any \(F=(Y_1,\ldots ,Y_n)\in {\mathcal {F}}\). Let \(c_1,\ldots ,c_n\) in X be such that \(u(Y_i)=u(c_i)\) for all i. To prove the additive separability, it suffices to show that \(F\sim (c_1,\ldots ,c_n)\). We prove it by induction.

Claim 1

For any \(i\in {\mathcal {N}}\), \((c_1,\ldots ,c_{i-1},Y_i,c_{i+1},\ldots ,c_n)\sim (c_1,c_2,\ldots ,c_n)\).

Proof of Claim

By A3 symmetry, it suffices to prove that \((Y_1,c_2,\ldots ,c_n)\sim (c_1,\ldots ,c_n)\). Furthermore, by separability, we only need to show that \((Y_1,c_1,\ldots ,c_1)\sim (c_1,c_1,\ldots ,c_1)\). Suppose this indifference does not hold. Assume first that

$$\begin{aligned}(Y_1,c_1,\ldots ,c_1)\succ (c_1,\ldots ,c_1).\end{aligned}$$

Then, separability implies that \((Y_1,\ldots ,Y_1)\succ (c_1,Y_1,\ldots ,Y_1)\). According to the definition, \((c_1,\ldots ,c_1)\sim (Y_1,\ldots ,Y_1)\), which implies that

$$\begin{aligned}(Y_1,c_1,\ldots ,c_1)\succ (c_1,Y_1,\ldots ,Y_1).\end{aligned}$$

By symmetry, it is equivalent to \((c_1,Y_1,c_1,\ldots ,c_1)\succ (c_1,Y_1,\ldots ,Y_1)\). Applying separability again, we have

$$\begin{aligned}(Y_1,Y_1,c_1,\ldots ,c_1)\succ (Y_1,\ldots ,Y_1,Y_1)\succ (c_1,Y_1,\ldots ,Y_1).\end{aligned}$$

Similarly, we can use separability and symmetry again to get

$$\begin{aligned}(Y_1,Y_1,Y_1,c_1\ldots ,c_1)\succ (Y_1,\ldots ,Y_1,Y_1)\succ (c_1,Y_1,\ldots ,Y_1).\end{aligned}$$

Repeating this process, we finally obtain \((Y_1,\ldots ,Y_1,c_1)\succ (Y_1,\ldots ,Y_1,Y_1)\), which contradicts our assumption.

Now, if we assume the other possibility, that \((c_1,\ldots ,c_1)\succ (Y_1,c_1,\ldots ,c_1)\), a similar argument yields a contradiction.\(\square \)

Claim 2

If \((Y_1,\ldots ,Y_t,c_{t+1},\ldots ,c_n)\sim (c_1,\ldots ,c_n)\), then \((Y_1,\ldots ,Y_{t+1},c_{t+2},\ldots ,c_n)\sim (c_1,\ldots ,c_n)\).

Proof of Claim

By separability, it suffices to prove that if \((Y_1,\ldots ,Y_t,c,\ldots ,c)\sim (c_1,\ldots ,c_t,c,\ldots ,c)\) for some t, then it holds for \(t+1\). Since \((Y_1,\ldots ,Y_t,c,\ldots ,c)\sim (c_1,\ldots ,c_t,c,\ldots ,c)\), separability implies that

$$\begin{aligned}(Y_1,\ldots ,Y_{t+1},c,\ldots ,c)\sim (c_1,\ldots ,c_t,Y_{t+1},c,\ldots ,c).\end{aligned}$$

By Claim 1, \((c_1,\ldots ,c_t,Y_{t+1},c,\ldots ,c)\sim (c_1,\ldots ,c_{t+1},c,\ldots ,c)\). Hence, this claim holds.\(\square \)

By Claims 1 and 2, for any \(F\in {\mathcal {F}}\), we define \(W:{\mathcal {F}}\rightarrow {\mathbb {R}}\) by \(W(F)=\sum _{i=1}^nu(F_i)\), which clearly represents \(\succsim \).

1.2 B.2 Proof of Theorem 1

Sufficiency Part:

Suppose that \(\succsim \) on \({\mathcal {F}}\) satisfies A1-9. Our strategy for proving that a robust Atkinson SWF represents \(\succsim \) is as follows. First, we consider only the profiles in which every individual has identical, binary values. We show that there exists a unique \(\alpha \in (0,1)\) such that for any \(x>y\) in X, \(u(\{x,y\})=\alpha u(x)+(1-\alpha )u(y)\). Second, we consider the profiles in which every individual has identical, but arbitrarily many, outcomes. We show that for any \(Y\in {\mathcal {X}}\), \(\displaystyle u(Y)=\alpha u(\max _{x\in Y} x)+(1-\alpha )u(\min _{y\in Y} y)\). Third, we show that A8 scale invariance and the A9 Pigou-Dalton principle imply that u on X has either a power or a logarithmic form. Finally, combined with Proposition 4, A5 dominance implies that for any \(F\in {\mathcal {F}}\),

$$\begin{aligned}W(F)=\alpha \sum _i u(\overline{F}_i)+(1-\alpha )\sum _i u(\underline{F}_i)\end{aligned}$$

represents \(\succsim \).
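The target representation can be sketched numerically. A minimal Python illustration of the robust Atkinson SWF, with u taken from Lemma B5 under the normalization \(a=0\), \(b=1\); the coarse profile and the parameters \(\alpha =0.6\), \(r=0.5\) are hypothetical:

```python
from math import log

def u(x, r):
    # Atkinson utility from Lemma B5, normalized with a = 0, b = 1.
    return log(x) if r == 0 else x ** r / r

def robust_atkinson_swf(F, alpha, r):
    # W(F) = alpha * sum_i u(F_bar_i) + (1 - alpha) * sum_i u(F_underline_i)
    upper = sum(u(max(Y), r) for Y in F)
    lower = sum(u(min(Y), r) for Y in F)
    return alpha * upper + (1 - alpha) * lower

# Hypothetical coarse profile: each individual's set of possible incomes.
F = [{1.0, 2.0}, {3.0}, {4.0, 6.0}]
print(robust_atkinson_swf(F, alpha=0.6, r=0.5))
```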

To start, notice that Proposition 4 implies the existence of u on \({\mathcal {X}}\). Define \(\succsim ^*\) on \(X^2\) by

$$\begin{aligned}(a,b)\succsim ^*(c,d)\Leftrightarrow u(\{a,b\})\ge u(\{c,d\}).\end{aligned}$$

Lemma B1

For all \(a,b,c\in X\), if \(a\ge b\), then \((a,c)\succsim ^*(b,c)\).

Proof

Take \(a,b,c\in X\) with \(a\ge b\). Let \(Y=\{a,c\}\) and \(Z=\{b,c\}\). Then the profile \((Y,\ldots ,Y)\) dominates the profile \((Z,\ldots , Z)\). By A5, \((Y,\ldots , Y)\succsim (Z,\ldots , Z)\). Proposition 4 implies \(u(Y)\ge u(Z)\). Hence, by definition, \((a,c)\succsim ^*(b,c)\). \(\square \)

Let \(0<\ell \le \ell '<\infty \). Consider \(\succsim ^*\) restricted to \([0,\ell ]\times [\ell ',+\infty )\). We show that this restricted preference has an additive conjoint structure and hence an additively separable utility representation.

Lemma B2

\(\succsim ^*\) restricted to \([0,\ell ]\times [\ell ',+\infty )\) satisfies the following conditions:

\(A1^*\):

(weak order): \(\succsim ^*\) is complete and transitive.

\(A2^*\):

(Independence): \((x,b')\succsim ^*(y,b')\) implies \((x,x')\succsim ^*(y,x')\); also, \((b,x')\succsim ^*(b,y')\) implies \((x,x')\succsim ^*(x,y')\).

\(A3^*\):

(Thomsen): \((x,z')\sim ^*(z,y')\) and \((z,x')\sim ^*(y,z')\) imply \((x,x')\sim ^*(y,y')\).

\(A4^*\):

(Essential): There exist \(b,c\in [0,\ell ]\) and \(a\in [\ell ',+\infty )\) such that \((b,a)\not \sim ^*(c,a)\), and \(b'\in [0,\ell ]\) and \(a',c'\in [\ell ',+\infty )\) such that \((b',a')\not \sim ^*(b',c')\).

\(A5^*\):

(Solvability): If \((x,x')\succsim ^*(y,y')\succsim ^*(z,x')\), then there exists \(a\in [0,\ell ]\) such that \((a,x')\sim ^*(y,y')\); if \((x,x')\succsim ^*(y,y')\succsim ^*(x,z')\), then there exists \(a'\in [\ell ',+\infty )\) such that \((x,a')\sim ^*(y,y')\).

\(A6^*\):

(Archimedean): For all \(x,x'\in [0,\ell ]\) and \(y,z\in [\ell ',+\infty )\), if \((x,y)\succsim ^*(x',z)\), then there exist a, b in \([0,\ell ]\) satisfying \((x,y)\succsim ^*(a,y)\sim ^*(b,z)\succ ^*(b,y)\succsim ^*(x',z)\). A similar statement holds with the roles of \([0,\ell ]\) and \([\ell ',+\infty )\) reversed.

Proof

By definition, \(\succsim ^*\) is a weak order. All the axioms except the Thomsen condition are easy to verify; we show the Thomsen condition below.

Suppose that \((x,z')\sim ^*(z,y')\) and \((z,x')\sim ^*(y,z')\). By definition, this is equivalent to \(u(x,z')=u(z,y')\) and \(u(z,x')=u(y,z')\). To show that \(u(x,x')=u(y,y')\), there are three cases to consider: \(z'\ge \{x',y'\}\), \(y'\ge \{x',z'\}\) and \(x'\ge \{y',z'\}\).

Suppose first that \(z'\ge \{x',y'\}\). Since \(\{x',y'\}\ge \{x,y,z\}\), Lemma B1 implies that \((x',z')\succsim ^*(x,z')\) and \((y',z')\succsim ^*(y,z')\). Thus, A7 commutativity implies

$$\begin{aligned}u(e(x,x'),e(z',z'))= u(e(x,z'), e(x',z')).\end{aligned}$$

Note that \((x,z')\sim ^*(z,y')\) implies \(e(x,z')=e(z,y')\). Therefore,

$$\begin{aligned}u(e(x,z'), e(x',z'))=u(e(z,y'), e(x',z')).\end{aligned}$$

Applying commutativity again, we have

$$\begin{aligned}u(e(z,y'), e(x',z'))=u(e(z,x'),e(y',z')).\end{aligned}$$

Note again that \((z,x')\sim ^*(y,z')\) implies \(e(z,x')=e(y,z')\). Therefore,

$$\begin{aligned}u(e(z,x'), e(y',z'))=u(e(y,z'),e(y',z')).\end{aligned}$$

Commutativity implies that

$$\begin{aligned}u(e(y,z'),e(y',z'))=u(e(y,y'),e(z',z')).\end{aligned}$$

Therefore, we have \(u(e(x,x'),e(z',z'))=u(e(y,y'),e(z',z'))\), which implies, by Lemma B1, \(e(x,x')=e(y,y')\). That is, \(u(x,x')=u(y,y')\).

For the other two cases, similar arguments lead to the same conclusion. \(\square \)

Lemma B3

There exist two real-valued functions \(\phi \) and \(\varphi \) on X such that for all \(x,x',y,y'\in X\) with \(x\le y\) and \(x'\le y'\),

$$\begin{aligned}(x,y)\succsim ^*(x',y')\Longleftrightarrow \phi (x)+\varphi (y)\ge \phi (x')+\varphi (y').\end{aligned}$$

Furthermore, if \(\phi ',\varphi '\) represent \(\succsim ^*\) in place of \(\phi ,\varphi \), respectively, then there exist \(\gamma >0\) and \(\beta _1,\beta _2\) such that \(\phi '=\gamma \phi +\beta _1\) and \(\varphi '=\gamma \varphi +\beta _2\).

Proof

Let \(a>0\). Lemma B2 implies that \(\succsim ^*\) restricted to \([0,a]\times [a,+\infty )\) is an additive conjoint structure. Thus, by Theorem 2 of Chapter 6 in Krantz et al. (2006), there exist two functions, \(\phi _a\) on [0, a] and \(\varphi _a\) on \([a,+\infty )\), that represent \(\succsim ^*\) restricted to \([0,a]\times [a,+\infty )\), i.e. for all \(x,x'\in [0,a]\) and \(y,y'\in [a,+\infty )\),

$$\begin{aligned}(x,y)\succsim ^*(x',y')\Longleftrightarrow \phi _a(x)+\varphi _a(y)\ge \phi _a(x')+\varphi _a(y').\end{aligned}$$

By uniqueness of representation, we can normalize \(\phi _a\) and \(\varphi _a\) such that

$$\begin{aligned}u(a)=\phi _a(a)+\varphi _a(a).\end{aligned}$$

If \(b>a\), since \(\succsim ^*\) restricted to \([0,b]\times [b,+\infty )\) is also an additive conjoint structure, there exist functions \(\phi _b\) on [0, b] and \(\varphi _b\) on \([b,+\infty )\) that represent this restricted preference. Due to the uniqueness of the representation, we can normalize \(\phi _b\) so that \(\phi _b(a)=\phi _a(a)\). By a similar argument, if \(c\in (0,a)\), since \(\succsim ^*\) restricted to \([0,c]\times [c,+\infty )\) is also an additive conjoint structure, there exist functions \(\phi _c\) on [0, c] and \(\varphi _c\) on \([c,+\infty )\) that represent this restricted preference. Again, \(\varphi _c\) is normalized so that \(\varphi _c(a)=\varphi _a(a)\).

Now, define \(\phi :X\rightarrow {\mathbb {R}}\) by

$$\begin{aligned}\phi (x) = {\left\{ \begin{array}{ll} \phi _x(x) &{} \quad \text {if } x>0;\\ \phi _a(0) &{} \quad \text {if } x=0. \end{array}\right. } \end{aligned}$$

Similarly, define \(\varphi :X\rightarrow {\mathbb {R}}\) by

$$\begin{aligned}\varphi (y) = {\left\{ \begin{array}{ll} \varphi _y(y) &{} \quad \text {if } y>0;\\ u(0)-\phi _a(0) &{} \quad \text {if } y=0. \end{array}\right. } \end{aligned}$$

Therefore, \(\phi \) and \(\varphi \) on X are uniquely specified. By continuity and unanimity, \(\varphi (0)<\varphi (y)\) for all \(y>0\). Take arbitrary \(0<y\le x\). There always exist a, b such that \(x<a\) and \(0<b<y\). Therefore,

$$\begin{aligned} x\ge y&\Leftrightarrow (x,a)\succsim ^*(y,a) \\&\Leftrightarrow \phi _a(x)+\varphi _a(a)\ge \phi _a(y)+\varphi _a(a)\\&\Leftrightarrow \phi _x(x)\ge \phi _y(y)\\&\Leftrightarrow \phi (x)\ge \phi (y) \end{aligned}$$

Similarly, we have

$$\begin{aligned} x\ge y&\Leftrightarrow (b,x)\succsim ^*(b,y) \\&\Leftrightarrow \phi _b(b)+\varphi _b(x)\ge \phi _b(b)+\varphi _b(y)\\&\Leftrightarrow \varphi _b(x)\ge \varphi _b(y)\\&\Leftrightarrow \varphi (x)\ge \varphi (y) \end{aligned}$$

Therefore \(x\ge y\Leftrightarrow \phi (x)+\varphi (x)\ge \phi (y)+\varphi (y)\). We now show that \(\phi \) and \(\varphi \) have the representation property stated above. Let \(x\le y\) and \(x'\le y'\). Suppose that \((x,y)\succsim ^*(x',y')\). There are two cases: either \(x\ge y'\) or \(x<y'\). First, assume that \(x\ge y'\). Then continuity and unanimity imply that there exist a and b such that \((x,y)\sim ^*(a,a)\) and \((b,b)\sim ^*(x',y')\).

$$\begin{aligned} (x,y)\sim ^*(a,a)&\Leftrightarrow \phi (x)+\varphi (y)= \phi (a)+\varphi (a),\\ (x',y')\sim ^*(b,b)&\Leftrightarrow \phi (x')+\varphi (y')= \phi (b)+\varphi (b). \end{aligned}$$

Note that \(a\ge b\), which is \(\phi (a)+\varphi (a)\ge \phi (b)+\varphi (b)\). Therefore,

$$\begin{aligned}(x,y)\succsim ^*(x',y')\Leftrightarrow \phi (x)+\varphi (y)\ge \phi (x')+\varphi (y').\end{aligned}$$

The uniqueness of representation follows immediately from the definition of \(\phi \) and \(\varphi \). \(\square \)

Lemma B4

There exists \(0\le \alpha \le 1\) such that for all \(x\ge y\),

$$\begin{aligned}u(\{x,y\})=\alpha u(x)+(1-\alpha )u(y).\end{aligned}$$

Proof

It suffices to show that there is a constant \(\beta >0\) such that \(\varphi (x)=\beta \phi (x)\). For \(a>0\), define

$$\begin{aligned} \phi _{1a}(x)&=\phi (e(a,x))\quad \text { and }\quad \varphi _{1a}(x)=\varphi (e(a,x)),\text { for }x\ge a;\\ \phi _{2a}(x)&=\phi (e(x,a))\quad \text { and }\quad \varphi _{2a}(x)=\varphi (e(x,a)),\text { for }x\le a. \end{aligned}$$

For \(\{x,y,z,w\}\ge a\), if \(x\le y\) and \(z\le w\), then \((a,y)\succsim ^*(a,x)\) and \((a,w)\succsim ^*(a,z)\). Therefore,

$$\begin{aligned} (z,w)\succsim ^*(x,y)&\Leftrightarrow \phi (z)+\varphi (w)\ge \phi (x)+\varphi (y)\\&\Leftrightarrow e(z,w)\ge e(x,y)\\&\Leftrightarrow (a,e(z,w))\succsim ^*(a,e(x,y)) \end{aligned}$$

The last equivalence is implied by Lemma B1. Commutativity implies that \((a,e(z,w))\sim ^*(e(a,z),e(a,w))\) and \((a,e(x,y))\sim ^*(e(a,x),e(a,y))\). Therefore,

$$\begin{aligned} (z,w)\succsim ^*(x,y)&\Leftrightarrow \phi (z)+\varphi (w)\ge \phi (x)+\varphi (y)\\&\Leftrightarrow (e(a,z),e(a,w))\succsim ^*(e(a,x),e(a,y))\\&\Leftrightarrow \phi (e(a,z))+\varphi (e(a,w))\ge \phi (e(a,x))+\varphi (e(a,y))\\&\Leftrightarrow \phi _{1a}(z)+\varphi _{1a}(w)\ge \phi _{1a}(x)+\varphi _{1a}(y). \end{aligned}$$

If \(x\le y\le a\) and \(z\le w\le a\), then similarly we have

$$\begin{aligned}(z,w)\succsim ^*(x,y)\Leftrightarrow \phi _{2a}(z)+\varphi _{2a}(w)\ge \phi _{2a}(x)+\varphi _{2a}(y).\end{aligned}$$

Thus \(\phi _{1a}\) and \(\varphi _{2a}\) represent \(\succsim ^*\) on \([0,a]\times [a,+\infty )\). By uniqueness of the representation, there exist functions \(k_1,k_2>0\) and \(k_{11},k_{12}\) such that for \(a,b>0\),

$$\begin{aligned}\phi _{1a}(x)=k_1(a)\phi (x)+k_{11}(a)\quad \text {and}\quad \varphi _{2b}(y)=k_2(b)\varphi (y)+k_{12}(b).\end{aligned}$$

Notice that if \(\varphi \) is constant, then trivially \(u(\{x,y\})=\phi (x)=u(x)\), which corresponds to \(\alpha =1\). Similarly, if \(\phi \) is constant, then \(u(\{x,y\})=u(y)\), which corresponds to \(\alpha =0\). Now, suppose both \(\phi \) and \(\varphi \) are non-constant. Take \(w\ge \{y,z\}\ge x\). Lemma B1 implies that \((z,w)\succsim ^*(x,y)\) and \((y,w)\succsim ^*(z,x)\). According to commutativity,

$$\begin{aligned}&(e(x,y),e(z,w))\sim ^*(e(x,z),e(y,w))\\&\Leftrightarrow \phi (e(x,y))+\varphi (e(z,w))=\phi (e(x,z))+\varphi (e(y,w))\\&\Leftrightarrow k_1(x)\phi (y)+k_{11}(x)+k_2(w)\varphi (z)+k_{12}(w)=k_1(x)\phi (z)+k_{11}(x)+k_2(w)\varphi (y)+k_{12}(w)\\&\Leftrightarrow k_1(x)(\phi (y)-\phi (z))=k_2(w)(\varphi (y)-\varphi (z)). \end{aligned}$$

Since the above equations hold for all x, y, z, w with \(w\ge \{y,z\}\ge x\), there exist positive constants \(\lambda ,\delta \) such that

$$\begin{aligned}k_1(x)=\lambda \quad \text { and }\quad k_2(y)=\delta .\end{aligned}$$

Thus, for all \(y,z>0\),

$$\begin{aligned}\lambda (\phi (y)-\phi (z))=\delta (\varphi (y)-\varphi (z)).\end{aligned}$$

Hence, there is \(\beta >0\) such that \(\varphi (x)=\beta \phi (x)\) for all x. Let \(\alpha =\frac{1}{1+\beta }\). Clearly \(0< \alpha < 1\). By the uniqueness of the representation, we can normalize \(u(x)=\frac{\phi (x)}{\alpha }\). Therefore, for \(x\le y\),

$$\begin{aligned}\phi (x)+\varphi (y)=\alpha u(x)+(1-\alpha )u(y).\end{aligned}$$

\(\square \)

Lemma B5

There exist \(a\in {\mathbb {R}}\) and \(b>0\) such that for every \(x\in X\),

$$\begin{aligned}u(x) = {\left\{ \begin{array}{ll} a+\displaystyle b\cdot \frac{x^r}{r} &{} \text {for } 0<r <1 \\ a+\displaystyle b\cdot \log x &{} \text {for } r= 0. \end{array}\right. } \end{aligned}$$

Proof

Restrict \(\succsim \) to deterministic profiles. Since \(\succsim \) is continuous and separable on \( X^n\), Roberts (1980) demonstrates that scale invariance implies that u takes one of the following forms: there exist a constant a and a positive b such that

$$\begin{aligned}u(x) = {\left\{ \begin{array}{ll} a+\displaystyle b\cdot \frac{x^r}{r} &{} \text {for } r> 0 \\ a-\displaystyle b\cdot \frac{x^r}{r} &{} \text {for } r< 0 \\ a+\displaystyle b\cdot \log x &{} \text {for } r= 0. \end{array}\right. } \end{aligned}$$

Note that the Pigou-Dalton principle means that for \(x,y,z,w\in X\), if \(x+y=z+w\) and \(|x-y|<|z-w|\), then \(u(x)+u(y)\ge u(z)+u(w)\). This is equivalent to requiring that for all \(x<y\) and all \(c>0\),

$$\begin{aligned}u(x+c)-u(x)\ge u(y+c)-u(y),\end{aligned}$$

which implies that u is concave on X. Thus, concavity of u requires that \(r\le 1\). Furthermore, unanimity requires that \(r\ge 0\). Therefore, u must have the expression stated in this lemma. \(\square \)
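The two branches of Lemma B5 are connected: under the affine renormalization \(a=-b/r\), which represents the same preferences, \(u(x)=b(x^r-1)/r\), and this expression converges to \(b\log x\) as \(r\rightarrow 0\). A quick numerical check of the limit:

```python
from math import log

def u_normalized(x, r, b=1.0):
    # Affine renormalization of Lemma B5's u (take a = -b/r): the same
    # preferences, but the r -> 0 limit is now visible explicitly.
    return b * log(x) if r == 0 else b * (x ** r - 1) / r

x = 5.0
for r in (0.5, 0.1, 0.01, 0.001):
    print(r, u_normalized(x, r))   # values approach log(5) = 1.6094...
```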

For \(Y\in {\mathcal {X}}\), denote \(y^*=\displaystyle \max _{y\in Y}y\) and \(y_*=\displaystyle \min _{y\in Y}y\).

Lemma B6

For \(Y\in {\mathcal {X}}\), \( u(Y)=u(y^*,y_*)\).

Proof

Take \(Y\in {\mathcal {X}}\). Since \(\{y^*,y_*\}\subseteq Y\), we know that \((Y,\ldots ,Y)\) dominates \((\{y^*,y_*\},\ldots ,\{y^*,y_*\})\). By the definition of \(y^*\) and \(y_*\), it is immediate that \((\{y^*,y_*\},\ldots ,\{y^*,y_*\})\) also dominates \((Y,\ldots ,Y)\). Therefore, according to the dominance axiom, \((\{y^*,y_*\},\ldots ,\{y^*,y_*\})\sim (Y,\ldots ,Y)\). This is equivalent to \(u(y^*,y_*)=u(Y)\). \(\square \)

Necessity Part:

Suppose that \(\succsim \) is represented by a robust Atkinson SWF W. We want to prove that this preference satisfies A1-9. We only demonstrate the commutativity axiom, since the remaining axioms are straightforward.

Consider \(x_1,x_2,y_1,y_2 \in X\) where \(x_1\ge \{x_2,y_1\}\ge y_2\). Let \(F\in {\mathcal {F}}\) be such that \(F_i=\{e(x_1,x_2),e(y_1,y_2)\}\) for all i. Also, let \(G\in {\mathcal {F}}\) be such that \(G_i=\{e(x_1,y_1),e(x_2,y_2)\}\) for all i. According to the representation function, we have

$$\begin{aligned} u(e(x_1,x_2),e(y_1,y_2))&=\alpha u(x_1,x_2)+(1-\alpha )u(y_1,y_2)\\&=\alpha [\alpha u(x_1)+(1-\alpha )u(x_2)]+(1-\alpha )[\alpha u(y_1)+(1-\alpha )u(y_2)]\\&=\alpha [\alpha u(x_1)+(1-\alpha )u(y_1)]+(1-\alpha )[\alpha u(x_2)+(1-\alpha )u(y_2)]\\&=\alpha u(x_1,y_1)+(1-\alpha ) u(x_2,y_2)\\&=u(e(x_1,y_1),e(x_2,y_2)). \end{aligned}$$

1.3 B.3 Proof of Theorem 2

Since the necessity part is straightforward, we only show the sufficiency part. Suppose that \(\succsim \) satisfies A1-5 and A6'-9'. Our strategy is first to show that \(\succsim \) restricted to deterministic profiles has a Gini SWF representation. Then we show that \(\succsim \) restricted to the profiles in which individual 1 has binary values and all other individuals have singleton values has a robust Gini SWF representation. Finally, we extend this result to the whole set of profiles.

Lemma C1

Let \(\succsim \) be restricted to \(X^n\). Then there exists \(\phi \) on \(X^n\) such that

$$\begin{aligned}\phi (f)=\mu (f)-\frac{\sum _i\sum _j|f_i-f_j|}{2n^2},\end{aligned}$$

represents \(\succsim \) on \(X^n\).

Proof

It is easy to see that \(\succsim \) on \(X^n\) also satisfies A1-4 and A6'-8'. Therefore, according to Theorem D of Ben-Porath and Gilboa (1994), there exist \(0<\delta <\frac{1}{n(n-1)}\) and \(\phi \) on \(X^n\) such that for \(f\in X^n\),

$$\begin{aligned}\phi (f)= \mu (f)-\delta \cdot \sum _i\sum _j|f_i-f_j|,\end{aligned}$$

represents \(\succsim \) on \(X^n\). Pick \(c>0\) and \(k\in {\mathcal {N}}\). By A9’ tradeoff, we know \((kc,0,\ldots ,0)\sim (c/k,\ldots ,c/k,0,\ldots ,0)\). The above \(\phi \) function implies that

$$\begin{aligned}\frac{kc}{n}-\delta \cdot 2(n-1)kc=\frac{c}{n}-\delta \cdot 2k(n-k)\frac{c}{k}.\end{aligned}$$

Therefore, the only solution is

$$\begin{aligned}\delta =\frac{1}{2n^2}.\end{aligned}$$

\(\square \)
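Both the pinning-down of \(\delta \) in Lemma C1 and the rank-weighted rewriting of \(\phi \) used later in the proof of Lemma C3 can be checked numerically. A small Python sketch; the values n = 6, c = 10, k = 3 are arbitrary:

```python
def phi(f, delta):
    # phi(f) = mu(f) - delta * sum_i sum_j |f_i - f_j|  (Lemma C1)
    n = len(f)
    return sum(f) / n - delta * sum(abs(x - y) for x in f for y in f)

def phi_rank(f):
    # Equivalent rank-weighted form: (1/n^2) * sum_i [2(n - i) + 1] * f~_i,
    # where f~ is the increasing rearrangement of f.
    n = len(f)
    fs = sorted(f)
    return sum((2 * (n - i) + 1) * fs[i - 1] for i in range(1, n + 1)) / n ** 2

n, c, k = 6, 10.0, 3
delta = 1 / (2 * n * n)
f = [k * c] + [0.0] * (n - 1)          # (kc, 0, ..., 0)
g = [c / k] * k + [0.0] * (n - k)      # (c/k, ..., c/k, 0, ..., 0)
print(abs(phi(f, delta) - phi(g, delta)) < 1e-12)  # True: the A9' tradeoff holds
print(abs(phi(f, delta) - phi_rank(f)) < 1e-12)    # True: the two forms agree
```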

We denote

$$\begin{aligned} {\mathcal {F}}^1=\{F\in {\mathcal {F}}: F_1=\{a,b\}\text { and }F_i=\{c\}, \forall i\ne 1\text { and }a,b,c\in X \text { with }a,b\ge c\}. \end{aligned}$$

the set of all profiles in which individual 1, the richest member of society, has two possible allocations, while everyone else has a deterministic and equal allocation. Note that for \(F\in {\mathcal {F}}^1\), if \(a=b\), then F is a deterministic profile; and if \(a=b=c\), then F is a deterministic, equally distributed profile.

Lemma C2

If \(F,G\in {\mathcal {F}}^1\), then F and G are order-preserving.

Proof

This follows immediately from the definition of order-preserving. \(\square \)

Lemma C3

For \(F,f\in {\mathcal {F}}^1\), if \(F\sim f\), then \(\alpha F+(1-\alpha )f\sim f\) for all \(\alpha \in (0,1)\).

Proof

Pick \(F\in {\mathcal {F}}^1\) such that \(F_1=\{a,b\}\), \(F_i=\{c\}\) for \(i\ne 1\) and \(a\ge b\ge c\). If there is a deterministic profile \(f\in {\mathcal {F}}^1\) such that \(F\sim f\), then \(\frac{F}{2}\sim \frac{f}{2}\). To see this, suppose not. Assume first that \(\frac{F}{2}\succ \frac{f}{2}\). Since \(\frac{F}{2},\frac{f}{2}\in {\mathcal {F}}^1\), A6' order-preserving independence implies that

$$\begin{aligned} \frac{F}{2}+\frac{F}{2}\succ \frac{F}{2}+\frac{f}{2}\succ \frac{f}{2}+\frac{f}{2}=f. \end{aligned}$$

Notice that

$$\begin{aligned}\Big (\frac{F}{2}+\frac{F}{2}\Big )_1=\Big \{a,\frac{a+b}{2},b\Big \}\quad \text {and}\quad \Big (\frac{F}{2}+\frac{F}{2}\Big )_i=\{c\}.\end{aligned}$$

Recall that the representation of \(\succsim \) restricted to deterministic profiles can also be written as

$$\begin{aligned}\phi (f)=\frac{1}{n^2}\sum _{i=1}^n \big [2(n-i)+1\big ]{\tilde{f}}_i.\end{aligned}$$

Therefore, \(\overline{F}=(a, c,\ldots ,c)\) is the most preferred deterministic profile in both F and \(\frac{F}{2}+\frac{F}{2}\), i.e.

$$\begin{aligned} \overline{F}=(a,c,\ldots ,c)\in \arg \max _{f\in F}\phi (f)\qquad \text { and }\qquad \overline{F}=(a,c,\ldots ,c)\in \arg \max _{f\in \frac{F}{2}+\frac{F}{2}}\phi (f);\end{aligned}$$

and \(\underline{F}=(b,c,\ldots ,c)\) is the least preferred deterministic profile in both F and \(\frac{F}{2}+\frac{F}{2}\). Hence, F and \(\frac{F}{2}+\frac{F}{2}\) dominate each other. According to A5, \(F\sim \frac{F}{2}+\frac{F}{2}\), which contradicts the assumption that \(F\sim f\). Now assume the other possibility, that \(\frac{f}{2}\succ \frac{F}{2}\); repeating a similar process also leads to a contradiction. Hence, \(F\sim f\) implies \(\frac{F}{2}\sim \frac{f}{2}\).

Proceeding by induction, for every integer \(k=1,2,\ldots \),

$$\begin{aligned} \frac{F}{k}\sim \frac{f}{k}. \end{aligned}$$

Also, by A6’,

$$\begin{aligned} F\sim f\Rightarrow F+F\sim F+f\sim f+f=2f. \end{aligned}$$

Observe that \((2F)_1=\{2a,2b\}\), \((F+F)_1=\{2a,a+b,2b\}\) and \((2F)_i=(F+F)_i=\{2c\}\) for \(i\ne 1\). Since \(a\ge b\), it is immediate that 2F and \(F+F\) dominate each other; therefore, \(2F\sim F+F\). Hence \(F\sim f\) implies \(2F\sim 2f\). By induction, for every integer k,

$$\begin{aligned} kF\sim kf. \end{aligned}$$

Combining the results above, for every positive rational number \(\alpha \), we have

$$\begin{aligned} \alpha F\sim \alpha f. \end{aligned}$$

Continuity implies that the above result holds for every positive real number \(\alpha \). Now, take any \(\alpha \in (0,1)\) and apply A6’,

$$\begin{aligned} \alpha F\sim \alpha f\Leftrightarrow \alpha F+(1-\alpha )f\sim f. \end{aligned}$$

\(\square \)

Recall that for \(F\in {\mathcal {F}}\), \(\overline{F}\) and \(\underline{F}\) denote the upper-limit and lower-limit distributions in F, respectively.

Lemma C4

There exists \(\alpha \in [0,1]\) such that for all \(F\in {\mathcal {F}}^1\),

$$\begin{aligned} F\sim \alpha \overline{F}+(1-\alpha )\underline{F}. \end{aligned}$$

Proof

For \(x\in X\), let \({\mathcal {F}}^1(x)=\{F\in {\mathcal {F}}^1: F_i=\{x\}\text { for all }i\ne 1\}\) denote the collection of profiles in \({\mathcal {F}}^1\) in which every individual except individual 1 has the same allocation x. Therefore, \({\mathcal {F}}^1=\cup _{x\in {\mathbb {R}}} {\mathcal {F}}^1(x)\). We first show that the result holds on the restricted domain \({\mathcal {F}}^1(0)\).

Refer to Fig. 3. For \(F\in {\mathcal {F}}^1(0)\) with \(F_1=\{a,b\}\) and \(a>b\), F can be identified with the point (a, b). Similarly, for \(F\in {\mathcal {F}}^1(0)\) with \(F_1=\{c\}\), F can be identified with the point (c, c). Therefore, there is a one-to-one correspondence between the set \({\mathcal {F}}^1(0)\) and the points between the horizontal axis and the diagonal. For every \(F,G\in {\mathcal {F}}^1(0)\), where \(F_1=\{a,b\}\) and \(G_1=\{c,d\}\), we define

$$\begin{aligned} (a,b)\succsim (c,d)\Leftrightarrow F\succsim G. \end{aligned}$$

Take \(a>b\). We have

$$\begin{aligned} (a,a)\succ (b,b). \end{aligned}$$

By definition, we know that profile (aa) dominates (ab) and (ab) dominates (bb). Therefore, A6’ implies

$$\begin{aligned} (a,a)\succsim (a,b)\succsim (b,b). \end{aligned}$$

Continuity implies that there exists \(\alpha \in [0,1]\) such that

$$\begin{aligned} (\alpha a+(1-\alpha )b,\alpha a+(1-\alpha )b)\sim (a,b). \end{aligned}$$

Let \(c=\alpha a+(1-\alpha )b\). Lemma C3 implies that all points on the straight line segment between (c, c) and (a, b) are indifferent to one another. Therefore, every indifference curve must be a straight line.

Now, we show that all indifference lines are parallel to one another. Take any point \((a',b')\) and connect \((a',b')\) and (0, 0) by a straight line. Without loss of generality, suppose this line intersects the indifference curve through (c, c) and (a, b) at the point (a, b). Then there exists a unique \(\beta >0\) such that

$$\begin{aligned} (a',b')=(\beta a,\beta b). \end{aligned}$$

Since \((a,b)\sim (c,c)\), Lemma C3 implies that

$$\begin{aligned} (\beta a,\beta b)\sim (\beta c,\beta c). \end{aligned}$$

Therefore, \((a',b')\sim (\beta c,\beta c)\), which means that the two indifference curves \(\ell _1,\ell _2\) are parallel to each other.

To finish the proof, we now extend the result from the domain \({\mathcal {F}}^1(0)\) to \({\mathcal {F}}^1\). Pick any a, b, c such that \(a\ge b\ge c>0\), and let \(F\in {\mathcal {F}}^1\) be such that \(F_1=\{a,b\}\) and \(F_i=\{c\}\) for \(i\ne 1\). The shifted profile with first coordinate \(\{a-c,b-c\}\) and all other coordinates \(\{0\}\) belongs to \({\mathcal {F}}^1(0)\) and, therefore,

$$\begin{aligned} (a-c,b-c)\sim (\alpha (a-c)+(1-\alpha )(b-c),\alpha (a-c)+(1-\alpha )(b-c)). \end{aligned}$$

Now, adding the constant deterministic profile \((c,\ldots ,c)\) to both profiles, A6’ implies that

$$\begin{aligned} F\sim (\alpha a+(1-\alpha )b,c,\ldots ,c). \end{aligned}$$

Since \(\overline{F}=(a,c,\ldots ,c)\) and \(\underline{F}=(b,c\ldots ,c)\), we have \(F\sim \alpha \overline{F}+(1-\alpha )\underline{F}\). \(\square \)

Fig. 3: Indifference curves on \({\mathcal {F}}^1(0)\)

We now define a real-valued function W on \({\mathcal {F}}^1\) by, for \(F\in {\mathcal {F}}^1\),

$$\begin{aligned}W(F)=\phi (\alpha \overline{F}+(1-\alpha )\underline{F}).\end{aligned}$$

It is immediate that W represents \(\succsim \) restricted to \({\mathcal {F}}^1\). Notice that for each \(F\in {\mathcal {F}}^1\), \(\overline{F}\) and \(\underline{F}\) are order-preserving. By the order-preserving additivity and homogeneity of \(\phi \),

$$\begin{aligned}W(F)=\alpha \phi (\overline{F})+(1-\alpha )\phi (\underline{F})\end{aligned}$$

represents \(\succsim \) on \({\mathcal {F}}^1\).
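To see this representation at work numerically, here is a hypothetical Python sketch (the names `phi` and `W` are illustrative); it assumes each \(F_i\) is a finite set of possible incomes and uses the Gini-type \(\phi \) from the text, with incomes sorted increasingly and \(\alpha \) left as a free parameter:

```python
def phi(f):
    """Gini-type evaluation of a deterministic profile:
    (1/n^2) * sum_i [2(n-i)+1] * f~_i, with f~ sorted increasingly."""
    n = len(f)
    return sum((2 * (n - i) + 1) * x
               for i, x in enumerate(sorted(f), start=1)) / n**2

def W(F, alpha):
    """Alpha-weighted average of the upper- and lower-limit distributions
    of a coarse profile F (a list of finite income sets)."""
    upper = [max(Fi) for Fi in F]  # F-bar: each individual's highest income
    lower = [min(Fi) for Fi in F]  # F-underbar: each individual's lowest income
    return alpha * phi(upper) + (1 - alpha) * phi(lower)

# Individual 1's income is only known to lie in {1, 3}; the rest earn 2.
F = [{1.0, 3.0}, {2.0}, {2.0}]
print(W(F, alpha=0.5))  # 0.5 * phi([3,2,2]) + 0.5 * phi([1,2,2]) = 16/9
```

When every \(F_i\) is a singleton, upper and lower coincide and W reduces to \(\phi \), as the representation requires on deterministic profiles.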

Lemma C5

For \(F\in {\mathcal {F}}\) and \(G\in {\mathcal {F}}^1\), if \(\overline{F}\sim \overline{G}\) and \(\underline{F}\sim \underline{G}\), then \(F\sim G\).

Proof

Since F and G dominate each other, it is immediate from A5 that \(F\sim G\). \(\square \)

Now, we can extend the real-valued function W to the whole set \({\mathcal {F}}\): for \(F\in {\mathcal {F}}\), if there is \(G\in {\mathcal {F}}^1\) such that \(\overline{F}\sim \overline{G}\) and \(\underline{F}\sim \underline{G}\), then

$$\begin{aligned} W(F)=\alpha \phi (\overline{F})+(1-\alpha )\phi (\underline{F}). \end{aligned}$$

We claim that W represents \(\succsim \) on \({\mathcal {F}}\). To see this, note that by continuity, for every \(F\in {\mathcal {F}}\), there must exist \(F^1\in {\mathcal {F}}^1\) such that \(\overline{F}\sim \overline{F^1}\) and \(\underline{F}\sim \underline{F^1}\). Take any \(F,G\in {\mathcal {F}}\). According to Lemma C5, we have

$$\begin{aligned} \begin{aligned} F\succsim G&\Longleftrightarrow F^1\succsim G^1\\&\Longleftrightarrow W(F^1)\ge W(G^1)\\&\Longleftrightarrow \alpha \phi (\overline{F^1})+(1-\alpha )\phi (\underline{F^1})\ge \alpha \phi (\overline{G^1})+(1-\alpha )\phi (\underline{G^1})\\&\Longleftrightarrow \alpha \phi (\overline{F})+(1-\alpha )\phi (\underline{F})\ge \alpha \phi (\overline{G})+(1-\alpha )\phi (\underline{G})\\&\Longleftrightarrow W(F)\ge W(G). \end{aligned} \end{aligned}$$


Cite this article

Qu, X. Inequality measurement with coarse data. Soc Choice Welf 62, 367–396 (2024). https://doi.org/10.1007/s00355-023-01492-0
