A proof of the Shepp-Olkin entropy monotonicity conjecture

Consider tossing a collection of coins, each fair or biased towards heads, and take the distribution of the total number of heads that result. It is natural to conjecture that this distribution should be 'more random' when each coin is fairer. Indeed, Shepp and Olkin conjectured that the Shannon entropy of this distribution is monotonically increasing in this case. We resolve this conjecture, by proving that this intuition is correct. Our proof uses a construction which was previously developed by the authors to prove a related conjecture of Shepp and Olkin concerning concavity of entropy. We discuss whether this result can be generalized to $q$-R\'{e}nyi and $q$-Tsallis entropies, for a range of values of $q$.


Introduction and notation
In this paper, we consider the entropy of Poisson-binomial random variables (sums of independent Bernoulli random variables). Given parameters $p = (p_1, \ldots, p_n)$ (where $0 \leq p_i \leq 1$) we will write $f_p$ for the probability mass function of the random variable $B_1 + \ldots + B_n$, where the $B_i$ are independent with $B_i \sim \mathrm{Bernoulli}(p_i)$. We can write the Shannon entropy as a function of the parameters:
$$ H(p) = H(p_1, \ldots, p_n) = - \sum_{k=0}^{n} f_p(k) \log f_p(k). \qquad (1) $$
Shepp and Olkin [16] made the following conjecture "on the basis of numerical calculations and verification in the special cases n = 2, 3":

Conjecture 1.1. If all $p_i \leq 1/2$, then $H(p)$ is a non-decreasing function of each $p_i$.

By symmetry it is enough to consider paths along which only the final parameter increases, so we consider $p(t) = (p_1, \ldots, p_{n-1}, p_n(t))$ where $p_n(t) = p_n + t$, omit the subscript on $f_{p(t)}$ for brevity, and write
$$ \frac{\partial f}{\partial t}(k) = g(k-1) - g(k), \qquad \mbox{for } k = 0, 1, \ldots, n, \qquad (2) $$
where $g$ is the probability mass function of $B_1 + \ldots + B_{n-1}$, which is supported on the set $\{0, \ldots, n-1\}$ and does not depend on $t$. Here and throughout we take $g(-1) = g(n) = 0$ if necessary. As in [16, Theorem 2] we can use (2) to evaluate the first two derivatives of $H(p(t))$ as a function of $t$. Direct substitution gives
$$ \frac{\partial H}{\partial t} = - \sum_{k=0}^{n} \big( g(k-1) - g(k) \big) \log f(k) \qquad (3) $$
and
$$ \frac{\partial^2 H}{\partial t^2} = - \sum_{k=0}^{n} \frac{\big( g(k-1) - g(k) \big)^2}{f(k)}. \qquad (4) $$
The negativity of each term in (4) tells us directly that (as proved in [16, Theorem 2]) the entropy $H(p(t))$ is concave in $t$ (of course, we also know this from the full Shepp-Olkin theorem proved in [7,8]), so it is sufficient to prove that the derivative $\partial H/\partial t$ is non-negative in the case $p_n = 1/2$, since the derivative is therefore larger for any smaller value of $p_n$.
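As a quick numerical sanity check (not part of the argument, and with parameter values chosen purely for illustration), the following sketch computes $f_p$ by repeated convolution and compares the exact formula (3) for $\partial H/\partial t$ with a finite-difference estimate along the path $p_n(t) = p_n + t$.

```python
import numpy as np

def pmf(ps):
    # mass function of B_1 + ... + B_n, B_i ~ Bernoulli(p_i), via repeated convolution
    f = np.array([1.0])
    for p in ps:
        f = np.convolve(f, [1.0 - p, p])
    return f

def shannon(f):
    return -np.sum(f * np.log(f))

ps = [0.1, 0.25, 0.4, 0.35]                      # illustrative values of p_1, ..., p_n
f = pmf(ps)
g = pmf(ps[:-1])                                 # mass function of B_1 + ... + B_{n-1}

# equation (2): df/dt(k) = g(k-1) - g(k), with g(-1) = g(n) = 0
df_dt = np.concatenate(([0.0], g)) - np.concatenate((g, [0.0]))
exact = -np.sum(df_dt * np.log(f))               # equation (3)

h = 1e-6                                         # finite-difference step in p_n
plus = shannon(pmf(ps[:-1] + [ps[-1] + h]))
minus = shannon(pmf(ps[:-1] + [ps[-1] - h]))
print(exact, (plus - minus) / (2.0 * h))         # the two values agree closely
```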
However, at this stage further progress is elusive. Since $p_n = 1/2$, convolution with $B_n$ means that we can express $f(k) = (g(k) + g(k-1))/2$, but substituting this in (3) does not suggest an obvious way forward in general, though it is possible to use the resulting formula to resolve certain special cases. For example, careful cancellation in the case where $p_1 = p_2 = \ldots = p_{n-1} = 1/2$, and hence $g$ is binomial, allows us to deduce that the entropy derivative (3) equals zero in this case (see Example 2.7 below for an alternative view of this). However, this calculation does not give any particular insight into why the binomial case might be extreme in the sense of the conjecture.
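The binomial special case can likewise be checked directly; in the following sketch (our own, purely illustrative) all $p_i = 1/2$, the identity $f(k) = (g(k) + g(k-1))/2$ holds as stated, and the derivative (3) evaluates to zero.

```python
import numpy as np

def pmf(ps):
    f = np.array([1.0])
    for p in ps:
        f = np.convolve(f, [1.0 - p, p])
    return f

n = 6
g = pmf([0.5] * (n - 1))          # Binomial(n-1, 1/2)
f = pmf([0.5] * n)                # Binomial(n, 1/2)

# convolving with a fair coin: f(k) = (g(k) + g(k-1)) / 2
assert np.allclose(f, 0.5 * (np.concatenate((g, [0.0])) + np.concatenate(([0.0], g))))

df_dt = np.concatenate(([0.0], g)) - np.concatenate((g, [0.0]))
print(-np.sum(df_dt * np.log(f)))  # the entropy derivative (3): zero up to rounding
```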
Instead of expressing f as a linear combination of g, our key observation is that we can express g as a weighted linear combination of f , as described in the following section.

Entropy derivative and mixing coefficients
The following construction and notation were introduced in [7], based on the 'hypergeometric thinning' construction of Yu [18, Definition 2.2]. The key is to observe that in general we can write $g$ as a weighted linear combination of terms in $f$, for certain 'mixing coefficients' $(\alpha_k)_{k=0,1,\ldots,n}$. The general construction of $(\alpha_k)_{k=0,1,\ldots,n}$ in the case of Shepp-Olkin paths is given in [7, Proposition 5.1], but in the specific case where only $p_n$ varies, in the case $p_n = 1/2$, we can simply define the following values:

Definition 2.1. For $k = 0, \ldots, n$, define
$$ \alpha_k = \frac{g(k-1)}{2 f(k)}. \qquad (6) $$
These coefficients satisfy
$$ 0 = \alpha_0 < \alpha_1 < \ldots < \alpha_n = 1. \qquad (7) $$
In [7, Proposition 5.2] this result was stated in the form $\alpha_{k-1} \leq \alpha_k$, but the strict inequalities will help us to resolve the case of equality in Conjecture 1.1. It will often be useful for us to observe that Definition 2.1 implies that
$$ g(k-1) = 2 \alpha_k f(k) \quad \mbox{and} \quad g(k) = 2 (1 - \alpha_k) f(k), \qquad \mbox{for } k = 0, 1, \ldots, n, \qquad (8) $$
and that
$$ 2 (1 - \alpha_k) f(k) = 2 \alpha_{k+1} f(k+1), \qquad \mbox{for } k = 0, 1, \ldots, n-1. \qquad (9) $$

Remark 2.2. Summing (8), we can directly calculate that
$$ \sum_{k=0}^{n} f(k) \alpha_k = \frac{1}{2}, \qquad (10) $$
which will play an important role in our proof of Conjecture 1.1. Further, it is interesting to note by rearranging (6) that $\alpha_k \leq 1/2$ if and only if $g(k-1) \leq g(k)$, which by the unimodality of $g$ (see for example [12]) means that $k \leq \mathrm{mode}(g)$. This may suggest that the Shepp-Olkin conjecture can be understood as relating to the skewness of the random variable $B_1 + \ldots + B_{n-1}$ with mass function $g$. Direct calculation shows that the centred third moment of $B_1 + \ldots + B_{n-1}$ is $\sum_{i=1}^{n-1} p_i (1 - p_i)(1 - 2 p_i)$, which is non-negative when all $p_i \leq 1/2$, but it is not immediately clear how this positive skew will affect the entropy of $f$.

Remark 2.3. In [7] we used these mixing coefficients $(\alpha_k)_{k=0,1,\ldots,n}$ to formulate a discrete analogue of the Benamou-Brenier formula [2] from optimal transport theory, which gave an understanding of certain interpolation paths of discrete probability measures (including Shepp-Olkin paths) as geodesics in a metric space. We do not require this interpretation here, but simply study the properties of $\alpha_k$ in their own right.
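The identities in this section are easy to verify numerically. In the sketch below (our own check, with illustrative parameters), $\alpha_k$ is computed from (8) as $g(k-1)/(2f(k))$ with $p_n = 1/2$; the coefficients are strictly increasing, satisfy $\sum_k f(k)\alpha_k = 1/2$, and cross $1/2$ at the mode of $g$.

```python
import numpy as np

def pmf(ps):
    f = np.array([1.0])
    for p in ps:
        f = np.convolve(f, [1.0 - p, p])
    return f

ps = [0.15, 0.3, 0.45, 0.2]                 # p_1, ..., p_{n-1}, illustrative values
g = pmf(ps)                                 # mass function of B_1 + ... + B_{n-1}
f = pmf(ps + [0.5])                         # include p_n = 1/2

alpha = np.concatenate(([0.0], g)) / (2.0 * f)   # alpha_k = g(k-1) / (2 f(k))

assert np.all(np.diff(alpha) > 0)           # strictly increasing, as in (7)
assert np.isclose(np.sum(f * alpha), 0.5)   # identity (10)
# alpha_k <= 1/2 exactly when g(k-1) <= g(k), i.e. up to the mode of g
print(np.max(np.nonzero(alpha <= 0.5)[0]), np.argmax(g))
```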
We now define a function which will form the basis of our proof of Conjecture 1.1:
$$ \psi(x) = x \log x - (1-x) \log(1-x), $$
where we take $0 \log 0 = 0$ to ensure that $\psi$ is continuous at $0$ and at $1$. Note that $\psi$ is antisymmetric about $x = 1/2$, so its Taylor expansion about $1/2$ contains only odd powers:
$$ \psi\Big( \frac{1}{2} + u \Big) = (2 - 2 \log 2)\, u - \sum_{r \geq 1} \frac{2^{2r+1}}{2r(2r+1)} u^{2r+1}, \qquad (11) $$
a power series which converges absolutely for $|u| \leq 1/2$ and whose coefficients are negative for every $r \geq 1$.
We can express the derivative of entropy in terms of these functions and the mixing coefficients, as follows:

Proposition 2.6. With $\alpha_k$ as in Definition 2.1, the entropy derivative satisfies $\frac{\partial H}{\partial t} = 2 \sum_{k=0}^{n} f(k) \psi(\alpha_k)$. Hence by (11), the entropy derivative (3) is positive if each of the odd centred moments satisfies $\sum_{k=0}^{n} f(k) (\alpha_k - 1/2)^{2r+1} \leq 0$, for $r = 1, 2, \ldots$.

Proof. Using the fact that $g(-1) = g(n) = 0$ and adding cancelling terms into the sum (3), we can use $g(k)/(2 f(k)) = 1 - \alpha_k$ and $g(k-1)/(2 f(k)) = \alpha_k$ (see (8)) to obtain that
$$ \frac{\partial H}{\partial t} = 2 \sum_{k=0}^{n} f(k) \Big( \alpha_k \log (2 \alpha_k) - (1 - \alpha_k) \log \big( 2 (1 - \alpha_k) \big) \Big), \qquad (13) $$
since $\alpha_0 = 0$ and $\alpha_n = 1$, so that the boundary terms $\alpha_0 \log(2 \alpha_0)$ and $(1 - \alpha_n) \log(2(1 - \alpha_n))$ contribute zero. Since (by (10) above) $\sum_{k=0}^{n} f(k) \alpha_k = 1/2$, subtracting off the linear term makes no difference to the sum and we can rewrite (13) as $2 \sum_{k=0}^{n} f(k) \psi(\alpha_k)$, as required. We can exchange the order of summation over $k$ and over the powers in the expansion (11) because of Fubini's theorem, since as mentioned above the power series for $\psi$ converges absolutely. Hence if each odd centred moment is negative then the entropy derivative (3) is positive.
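A direct numerical check of Proposition 2.6 (our own sketch, with illustrative parameters): taking $\psi(x) = x\log x - (1-x)\log(1-x)$ as above, the sum $2\sum_k f(k)\psi(\alpha_k)$ reproduces the entropy derivative (3) computed directly.

```python
import numpy as np

def pmf(ps):
    f = np.array([1.0])
    for p in ps:
        f = np.convolve(f, [1.0 - p, p])
    return f

def xlogx(t):
    # t log t with the convention 0 log 0 = 0
    return np.where(t > 0, t * np.log(np.where(t > 0, t, 1.0)), 0.0)

def psi(x):
    return xlogx(x) - xlogx(1.0 - x)

ps = [0.2, 0.35, 0.45, 0.1]                  # illustrative p_1, ..., p_{n-1}
g = pmf(ps)
f = pmf(ps + [0.5])                          # p_n = 1/2
alpha = np.concatenate(([0.0], g)) / (2.0 * f)

df_dt = np.concatenate(([0.0], g)) - np.concatenate((g, [0.0]))
deriv = -np.sum(df_dt * np.log(f))           # entropy derivative (3)

print(deriv, 2.0 * np.sum(f * psi(alpha)))   # the two values coincide
```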
We shall argue that the binomial example, Example 2.7, represents the extreme case using the following property, which will be key for us:

Proposition 2.8. If all $p_i \leq 1/2$ then the increments of the mixing coefficients are non-increasing; that is, $\alpha_{k+2} - 2 \alpha_{k+1} + \alpha_k \leq 0$ for each $k$.

Proof. See Appendix A.
Note that, comparing averages of these increments and taking the values $\alpha_0 = 0$ and $\alpha_n = 1$ from (7), Proposition 2.8 implies that if all the $p_i \leq 1/2$ then $\alpha_k \geq k/n$ for $0 \leq k \leq n$, showing that the binomial distribution of Example 2.7 is the extreme case in this sense.
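Numerically, this extremal role of the binomial case is easy to observe (our own sketch, with randomly sampled parameters): when $p_1 = \ldots = p_{n-1} = 1/2$ we find $\alpha_k = k/n$ exactly, while random choices with $p_i \leq 1/2$ always give $\alpha_k \geq k/n$.

```python
import numpy as np

def pmf(ps):
    f = np.array([1.0])
    for p in ps:
        f = np.convolve(f, [1.0 - p, p])
    return f

def mixing(ps_first):
    # alpha_k = g(k-1) / (2 f(k)) with p_n = 1/2, as in (8)
    g = pmf(ps_first)
    f = pmf(list(ps_first) + [0.5])
    return np.concatenate(([0.0], g)) / (2.0 * f)

n = 6
k_over_n = np.arange(n + 1) / n

assert np.allclose(mixing([0.5] * (n - 1)), k_over_n)   # binomial case: alpha_k = k/n

rng = np.random.default_rng(0)
for _ in range(1000):
    ps = rng.uniform(0.0, 0.5, size=n - 1)               # random p_i <= 1/2
    assert np.all(mixing(ps) >= k_over_n - 1e-12)        # alpha_k >= k/n
print("alpha_k >= k/n held in every trial")
```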

Proof of Shepp-Olkin monotonicity conjecture
We are now in a position to complete our proof of Conjecture 1.1.
Definition 3.1. First we introduce some further notation: As proven in (7) and Proposition 2.8 respectively, the sequence $(\beta_k)_k$ is non-decreasing and $(\beta_{k+1} - \beta_k)_k$ is non-increasing.
3. We define the family $(B_p(k))_k$ by $B_0(k) = 1$ and, for $p \geq 1$ and $k \geq p$, as a product of differences of the $\beta$'s; for other values of $k$, $B_p(k)$ is not defined.

4. The notation $\nabla$ stands for the left-derivative operator: $\nabla v(k) = v(k) - v(k-1)$. This operator satisfies a product rule of the form:
$$ \nabla (v w)(k) = v(k) \, \nabla w(k) + w(k-1) \, \nabla v(k). \qquad (15) $$

5. For $n \geq 1$ and $p \geq 1$ we define the polynomial (symmetric in its inputs)
$$ Q_{n,p}(X_1, \ldots, X_p) = \sum X_1^{i_1} X_2^{i_2} \cdots X_p^{i_p}, $$
where the sum is taken over all the $p$-tuples $0 \leq i_1, \ldots, i_p \leq n$ such that $i_1 + \ldots + i_p = n$. We also set $Q_{0,p} = 1$ and $Q_{-1,p} = 0$ for all $p \geq 1$. Clearly, $Q_{n,p}(X_1, \ldots, X_p)$ is non-negative if $X_1, \ldots, X_p$ are non-negative.
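For concreteness, the following sketch (our own, purely illustrative) evaluates $Q_{n,p}(X_1, \ldots, X_p)$ directly from the definition, by summing $X_1^{i_1}\cdots X_p^{i_p}$ over the tuples with $i_1 + \ldots + i_p = n$, together with the conventions $Q_{0,p} = 1$ and $Q_{-1,p} = 0$.

```python
from itertools import product

def Q(n, p, xs):
    # Q_{n,p}(X_1, ..., X_p): sum of X_1^{i_1} * ... * X_p^{i_p} over
    # non-negative integer tuples (i_1, ..., i_p) with i_1 + ... + i_p = n
    assert len(xs) == p
    if n == -1:
        return 0.0
    if n == 0:
        return 1.0
    total = 0.0
    for idx in product(range(n + 1), repeat=p):
        if sum(idx) == n:
            term = 1.0
            for x, i in zip(xs, idx):
                term *= x ** i
            total += term
    return total

print(Q(2, 2, [1.5, 2.0]))   # a^2 + a*b + b^2 at (a, b) = (1.5, 2.0), i.e. 9.25
```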
We now state three technical lemmas that we will require in the proof; each of these is proved in Appendix B. First, the fact that $B_p(k)$ is decreasing in $k$:

Lemma 3.2. For $p \geq 1$ we have $\nabla B_p(k) \leq 0$ for $k \geq p$.

We next give an integration by parts formula. Note that although we restrict the range of summation for technical reasons, the values $A_p(k)$ and $A_{p+1}(k)$ are zero outside the respective ranges:

Lemma 3.3. For any function $v(k)$ that is well-defined on $p \leq k \leq n-1$ and any $p \geq 0$ we have

Finally, a result concerning differences of the $Q$ polynomials:

Lemma 3.4. For $n \geq 0$ and $p \geq 1$, we have

We now state and prove the key proposition (Proposition 3.5), namely that the quantity $S_{r,p}$ is increasing in $p$ for $1 \leq p \leq r+1$. Here note that $S_{r,r+1} = 0$ since $Q_{-1,p+1} = 0$.
(Note that by restricting to this range of summation, $B_p(k)$ is well-defined.)
Proof. We first take $v(k) = B_p(k) Q_{r-p,p+1}(\beta^2_{k+1}, \ldots, \beta^2_{k-p+1})$ in the integration by parts formula (17) from Lemma 3.3, where here and throughout the proof the $\nabla$ refers to a difference in the $k$ parameter. Now, using the product rule (15) with $v(k) = B_p(k)$ and $w(k) = Q_{r-p,p+1}(\beta^2_{k+1}, \ldots, \beta^2_{k-p+1})$, we can transform the first sum using equation (18) from Lemma 3.4. Further, Lemma 3.2 gives that $\nabla B_p(k) \leq 0$ for $k \geq p$ and $Q_{r-p,p+1}(\beta^2_k, \ldots, \beta^2_{k-p}) \geq 0$, so the second sum is $\leq 0$. We finally conclude that $S_{r,p} \leq S_{r,p+1}$, as required.

We are now able to prove the following theorem, which confirms Conjecture 1.1:

Theorem 3.6. If all $p_i \leq 1/2$ then $H(p)$ is a non-decreasing function of $p$. Equality holds if and only if each $p_i$ equals $0$ or $1/2$.

Proof. As described in Proposition 2.6, it is sufficient for us to prove that for every $r \geq 1$ we have
$$ \sum_{k=0}^{n} f(k) \Big( \alpha_k - \frac{1}{2} \Big)^{2r+1} \leq 0. \qquad (19) $$
Using (8) we know that $\alpha_k = g(k-1)/(2 f(k))$ and $1 - \alpha_k = g(k)/(2 f(k))$, so that (subtracting these two expressions) $\alpha_k - (1 - \alpha_k) = (g(k-1) - g(k))/(2 f(k))$. This means that, using a standard factorization of $\beta^{2r}_{k+1} - \beta^{2r}_k$, and since $g(-1) = g(n) = 0$, we can rewrite the left-hand side of (19) as $S_{r,1}$ (equation (20)).
However, Proposition 3.5 gives $S_{r,1} \leq S_{r,r+1} = 0$, and we are done. Note that an examination of (20) allows us to deduce conditions under which equality holds in the cubic case ($r = 1$). In this case we can rewrite (20) using the integration by parts formula (17); here $g(k)$ and $\alpha_k$ are positive for $k \geq 1$, and Proposition 2.8 tells us that the second bracket is negative, and so the centred third moment equals zero if and only if $\beta_{k+1} - \beta_k$ is constant in $k$, which means that $\alpha_k = k/n$. However, (10) tells us that this implies that $p_1 + \ldots + p_n = n/2$, so that equality can hold if and only if $p_i \equiv 1/2$.
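The moment inequalities established above are also easy to probe numerically. The sketch below (ours, with randomly sampled parameters) checks that $\sum_k f(k)(\alpha_k - 1/2)^{2r+1} \leq 0$ for small values of $r$ whenever all $p_i \leq 1/2$, and that the moment vanishes in the binomial case.

```python
import numpy as np

def pmf(ps):
    f = np.array([1.0])
    for p in ps:
        f = np.convolve(f, [1.0 - p, p])
    return f

def odd_moment(ps_first, r):
    # sum_k f(k) (alpha_k - 1/2)^(2r+1), with p_n = 1/2
    g = pmf(ps_first)
    f = pmf(list(ps_first) + [0.5])
    alpha = np.concatenate(([0.0], g)) / (2.0 * f)
    return np.sum(f * (alpha - 0.5) ** (2 * r + 1))

rng = np.random.default_rng(1)
for _ in range(500):
    ps = rng.uniform(0.0, 0.5, size=5)
    assert all(odd_moment(ps, r) <= 1e-12 for r in (1, 2, 3))

print(odd_moment([0.5] * 5, 1))   # binomial case: the centred third moment is 0
```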

Monotonicity of Rényi and Tsallis entropies
As in [8, Section 4], where a similar discussion considered the question of concavity of entropies, we briefly discuss whether Theorem 3.6 may extend to prove that $q$-Rényi and $q$-Tsallis entropies are always increasing functions of $p$ for $p_i \leq 1/2$. We make the following definitions, each of which reduces to the Shannon entropy (1) as $q \to 1$.
Definition 4.1. For $f_p$ as defined above, and for $0 \leq q \leq \infty$, define:

1. the $q$-Rényi entropy [14]
$$ H_{R,q}(p) = \frac{1}{1-q} \log \left( \sum_{x=0}^{n} f_p(x)^q \right); $$

2. the $q$-Tsallis entropy [17]
$$ H_{T,q}(p) = \frac{1}{q-1} \left( 1 - \sum_{x=0}^{n} f_p(x)^q \right). $$

Note that, unlike the concavity case of [8, Section 4], since they are both monotone functions of $\sum_{x=0}^{n} f_p(x)^q$, both $H_{R,q}(p)$ and $H_{T,q}(p)$ will be increasing in $p$ in the same cases. We can provide analogues of (3) and (4), for $q \neq 1$, by
$$ \frac{\partial H_{T,q}}{\partial t} = - \frac{q}{q-1} \sum_{k=0}^{n} \big( g(k-1) - g(k) \big) f(k)^{q-1} \qquad (24) $$
and
$$ \frac{\partial^2 H_{T,q}}{\partial t^2} = - q \sum_{k=0}^{n} \big( g(k-1) - g(k) \big)^2 f(k)^{q-2}. $$
Again, the second expression is negative, and therefore $H_{T,q}(p)$ will be increasing for all $p_n \leq 1/2$ if it is increasing in the case $p_n = 1/2$. Clearly for $q = 0$, (24) shows that the entropy is constant (indeed we know that in this case $H_{R,q} = \log(n+1)$ and $H_{T,q} = n$). Curiously, we can simplify (24) in the case of collision entropy ($q = 2$) by substituting for $f$ as a linear combination of $g$ (which is the argument that did not work for $q = 1$).
Proof. Substituting in (24), we obtain an expression which is equal to the term stated in (26) by relabelling. Note that (curiously) this property will hold for any $g$, including the mass function of any $B_1 + \ldots + B_{n-1}$ (not necessarily with $p_i < 1/2$).
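As a quick check of Definition 4.1 (our own sketch, with illustrative parameters), the code below evaluates the two entropies for a Poisson-binomial mass function, recovering the values $\log(n+1)$ and $n$ at $q = 0$ and approximating the Shannon entropy as $q \to 1$.

```python
import numpy as np

def pmf(ps):
    f = np.array([1.0])
    for p in ps:
        f = np.convolve(f, [1.0 - p, p])
    return f

def renyi(f, q):
    return np.log(np.sum(f ** q)) / (1.0 - q)      # q-Renyi entropy, q != 1

def tsallis(f, q):
    return (1.0 - np.sum(f ** q)) / (q - 1.0)      # q-Tsallis entropy, q != 1

ps = [0.2, 0.3, 0.45, 0.5]
f = pmf(ps)
n = len(ps)

print(renyi(f, 0.0), np.log(n + 1))                # both equal log(n+1)
print(tsallis(f, 0.0), float(n))                   # both equal n
print(renyi(f, 1.001), -np.sum(f * np.log(f)))     # q near 1: close to Shannon entropy
print(renyi(f, 2.0), tsallis(f, 2.0))              # the two collision entropies
```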
It may be natural to conjecture that Tsallis (and hence Rényi) entropy is increasing for all $q$. However, the following example shows that this property can in fact fail for $q > 2$ (note that Rényi entropy is not concave in the same range; see [8, Lemma 4.3]). A direct calculation in this example gives an exact expression for the entropy derivative, and we note that $2^q - 2q \geq 0$ for $q > 2$, so the leading coefficient is negative and so the derivative will be negative for $\epsilon$ sufficiently small.
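The failure for $q > 2$ can also be observed directly; the parameters below are our own choice for illustration (not necessarily those of the example above). With $n = 2$, $p_1 = 0.01$ and $q = 3$, the Tsallis entropy decreases as $p_2$ increases from $0.499$ to $0.5$, so monotonicity fails even though both parameters stay at most $1/2$.

```python
import numpy as np

def pmf(ps):
    f = np.array([1.0])
    for p in ps:
        f = np.convolve(f, [1.0 - p, p])
    return f

def tsallis(f, q):
    return (1.0 - np.sum(f ** q)) / (q - 1.0)

q, eps = 3.0, 0.01
h_before = tsallis(pmf([eps, 0.499]), q)
h_after = tsallis(pmf([eps, 0.5]), q)
print(h_before, h_after, h_before > h_after)   # True: the entropy has decreased
```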
However, we conjecture that these entropies are increasing for $0 \leq q \leq 2$, since we know that the result holds for $q = 0, 1, 2$:

Conjecture 4.4. If all $p_i \leq 1/2$ then the Tsallis entropy $H_{T,q}(p)$ and the Rényi entropy $H_{R,q}(p)$ are non-decreasing functions of $p$ for $0 \leq q \leq 2$.
We use an argument similar to that which gave Proposition 2.6 to give a moment-based condition related to this conjecture.

Proposition 4.5. Let us fix $0 < q < 2$. If, for all $r \geq 1$,
$$ \sum_{k=0}^{n} f(k)^q \Big( \alpha_k - \frac{1}{2} \Big)^{2r+1} \leq 0, \qquad (28) $$
then $\frac{\partial H_{T,q}}{\partial t} \geq 0$ holds.
Proof. We first add a telescoping sum to equation (24), which allows us to write the derivative as $\sum_{k=0}^{n} f(k)^q \psi_q(\alpha_k)$, where, using the binomial theorem, the function $\psi_q$ can be expanded in odd powers of $(x - 1/2)$. From the assumption $0 < q < 2$, it follows that $q \prod_{i=2}^{2r} (q - i) < 0$, so each of the coefficients with $r \geq 1$ is negative. The proof is completed as in Proposition 2.6.

B Proof of technical lemmas
Proof of Lemma 3.2. Given $j \geq 0$, each sequence $(\beta_{k+1} - \beta_{k-j})_k$ is non-negative and non-increasing (non-negative since $(\beta_k)_k$ is non-decreasing, and non-increasing since the increments $(\beta_{k+1} - \beta_k)_k$ are non-increasing), so any product of such sequences is also non-increasing. In more detail, $B_p(k)$ is a product of terms of this form, and each term in the product is positive by (7). Further, each of these terms is well-defined since $k - j - 1 \geq k - p \geq 0$.
To prove Lemma 3.4, we observe that, equivalently, the family of polynomials $(Q_{n,p}(X_1, \ldots, X_p))_{n \geq 0}$ can be defined using the generating function
$$ \sum_{n \geq 0} Q_{n,p}(X_1, \ldots, X_p) \, t^n = \prod_{j=1}^{p} \frac{1}{1 - X_j t}. $$

Proof of Lemma 3.4. We first notice by direct calculation that:

C Heuristics in continuous case
We now explain some calculations in the continuous case that helped us to find a rigorous proof of Theorem 3.6, and that help suggest our conjecture about Rényi and Tsallis entropies. We remark that Ordentlich [13] used the original paper of Shepp and Olkin [16] to motivate conjectures concerning continuous random variables. Let us consider a density function $f(x)$, defined for $x \in \mathbb{R}$, which is assumed to be everywhere positive, smooth and with all derivatives well-behaved at $\pm \infty$. This density will serve as a continuous analogue of both the mass functions $(f(k))_k$ and $(g(k))_k$. As a consequence, one could also see the functions $\frac{1}{2} - \frac{(\log f)'(x)}{4}$ (resp. $-\frac{(\log f)'(x)}{4}$) as continuous analogues of the family $(\alpha_k)_k$ (resp. $(\beta_k)_k$). We will make the assumptions that $(\log f)'' \leq 0$ and $(\log f)''' \geq 0$, which correspond to the properties $\alpha_k \leq \alpha_{k+1}$ and $\alpha_k - 2 \alpha_{k+1} + \alpha_{k+2} \leq 0$. We now prove the continuous version of equation (19) (in the case where $q = 1$) and of (28) (for general $q$):

Proposition C.1. Suppose that $(\log f)'' \leq 0$ and $(\log f)''' \geq 0$. Then for every real parameter $q > 0$ and every integer $r \geq 1$ we have
$$ \int_{\mathbb{R}} f(x)^q \big( (\log f)'(x) \big)^{2r+1} \, dx \geq 0. $$

Proof. For any $0 \leq p \leq r$ we define an integral $I_{r,p}$, with $A_0 = 1$ and $A_p = \prod_{k=0}^{p-1} (2r - 2k)$ for $p \geq 1$. In particular $A_r = 0$, so $I_{r,r} = 0$. We now prove that the sequence $(I_{r,p})_p$ is non-increasing in $p$: for every $0 \leq p \leq r-1$, using the fact that $f^q (\log f)' = f^q f'/f = f^{q-1} f' = (f^q)'/q$, we apply integration by parts, followed by the product rule. The first integral is exactly $I_{r,p+1}$, and the second one is non-negative because of the assumptions on $(\log f)''$ and $(\log f)'''$. We thus have $I_{r,p} \geq I_{r,p+1}$, and hence
$$ \int_{\mathbb{R}} f(x)^q \big( (\log f)'(x) \big)^{2r+1} \, dx = I_{r,0} \geq I_{r,r} = 0, $$
which proves the result.
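As a quick numerical illustration of Proposition C.1 (our own sketch, not part of the proof), take the standard Gumbel density $f(x) = \exp(-x - e^{-x})$, for which $(\log f)''(x) = -e^{-x} \leq 0$ and $(\log f)'''(x) = e^{-x} \geq 0$; the integrals $\int f(x)^q ((\log f)'(x))^{2r+1}\,dx$ then come out non-negative for a range of $q$ and $r$.

```python
import numpy as np
from scipy.integrate import quad

# standard Gumbel density: log f(x) = -x - exp(-x), so
# (log f)'(x) = -1 + exp(-x), (log f)''(x) = -exp(-x) <= 0, (log f)'''(x) = exp(-x) >= 0
def f(x):
    return np.exp(-x - np.exp(-x))

def dlogf(x):
    return -1.0 + np.exp(-x)

for q in (0.5, 1.0, 2.0):
    for r in (1, 2):
        val, _ = quad(lambda x: f(x) ** q * dlogf(x) ** (2 * r + 1), -10.0, 40.0)
        print(q, r, val)    # each value is non-negative, consistent with Proposition C.1
```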