Conditional Rényi Entropy and the Relationships between Rényi Capacities

The analogues of Arimoto’s definition of conditional Rényi entropy and Rényi mutual information are explored for abstract alphabets. These quantities, although dependent on the reference measure, have some useful properties similar to those known in the discrete setting. In addition to laying out some such basic properties and the relations to Rényi divergences, the relationships between the families of mutual informations defined by Sibson, Augustin-Csiszár, and Lapidoth-Pfister, as well as the corresponding capacities, are explored.


Introduction
Shannon's information measures (entropy, conditional entropy, the Kullback divergence or relative entropy, and mutual information) are ubiquitous both because they arise as operational fundamental limits of various communication or statistical inference problems, and because they are functionals that have become fundamental in the development and advancement of probability theory itself. Over a half century ago, Rényi [1] introduced a family of information measures extending those of Shannon [2], parametrized by an order parameter α ∈ [0, ∞]. Rényi's information measures are also fundamental: indeed, they are (for α > 1) just monotone functions of L_p-norms, whose relevance to any field that relies on analysis needs no justification. Furthermore, they show up in probability theory, PDE, functional analysis, additive combinatorics, and convex geometry (see, e.g., [3][4][5][6][7][8][9]), in ways where understanding them as information measures instead of simply as monotone functions of L_p-norms is fruitful. For example, there is an intricate story of parallels between entropy power inequalities (see, e.g., [10][11][12]), Brunn-Minkowski-type volume inequalities (see, e.g., [13][14][15]) and sumset cardinality inequalities (see, e.g., [16][17][18][19][20]), which is clarified by considering logarithms of volumes and Shannon entropies as members of the larger class of Rényi entropies. It is also recognized now that Rényi's information measures show up as fundamental operational limits in a range of information-theoretic or statistical problems (see, e.g., [21][22][23][24][25][26][27]). Therefore, there has been considerable interest in developing the theory surrounding Rényi's information measures (which is far less well developed than in the Shannon case), and there has been a steady stream of recent papers [27][28][29][30][31][32][33][34][35][36] elucidating their properties beyond the early work of [37][38][39][40].
This paper, part of which was presented at ISIT 2019 [41], is a further contribution along these lines.
More specifically, three notions of Rényi mutual information have been considered in the literature (usually named after Sibson, Arimoto and Csiszár) for discrete alphabets. Sibson's definition has also been considered for abstract alphabets, but Arimoto's definition has not. Indeed, Verdú [31] asserts: "One shortcoming of Arimoto's proposal is that its generalization to non-discrete alphabets is not self-evident." The reason it is not self-evident is that, although there is an obvious generalized definition, the mutual information arising from this notion depends on the choice of reference measure on the abstract alphabet, which is not a desirable property. Nonetheless, the perspective taken in this note is that it is still interesting to develop the properties of the abstract Arimoto conditional Rényi entropy, keeping in mind the dependence on the reference measure. The Sibson definition is then just a special case of the Arimoto definition in which we choose a particular, special reference measure.
Our main motivation comes from considering various notions of Rényi capacity. While certain equivalences have been shown between various such notions by Csiszár [21] for finite alphabets and Nakiboglu [36,42] for abstract alphabets, the equivalences and relationships are further extended in this note.
This paper is organized in the following manner. In Section 2 below, we begin by defining conditional Rényi entropy for random variables taking values in a Polish space. Section 3 presents a variational formula for the conditional Rényi entropy in terms of Rényi divergence, which will be a key ingredient in several later results. Basic properties that the abstract conditional Rényi entropy satisfies, akin to its discrete version, are proved in Section 4, including descriptions of the special orders 0, 1 and ∞, monotonicity in the order, reduction of entropy upon conditioning, and a version of the chain rule. Section 5 discusses and compares several notions of α-mutual information. The various notions of channel capacity arising out of the different notions of α-mutual information are studied in Section 6, where they are compared using results from the preceding section.

Definition of Conditional Rényi Entropies
Let S be a Polish space and B_S its Borel σ-algebra. We fix a σ-finite reference measure γ on (S, B_S). Our study of entropy, and in particular all L_p spaces we talk about, will be with respect to this measure space, unless stated otherwise.

Definition 1. Let X be an S-valued random variable with density f with respect to γ. We define the Rényi entropy of X of order α ∈ (0, 1) ∪ (1, ∞) by

h^γ_α(X) = (1/(1−α)) log ∫_S f^α dγ = (α/(1−α)) log ‖f‖_α.

It will be convenient to write the Rényi entropy as h^γ_α(X) = −log Ren^γ_α(X), where Ren^γ_α(X) = ‖f‖_α^{α/(α−1)} will be called the Rényi probability of order α of X.

Let T be another Polish space with a fixed measure η on its Borel σ-algebra B_T. Now suppose X, Y are, respectively, S- and T-valued random variables with a joint density F : S × T → R w.r.t. the reference measure γ ⊗ η. We will denote the marginals of F on S and T by f and g respectively. This in particular means that X has density f w.r.t. γ and Y has density g w.r.t. η. Just as for the Rényi probability of X, one can define the Rényi probability of the conditional X given Y = y by the expression

Ren^γ_α(X|Y = y) = ‖F(·, y)/g(y)‖_α^{α/(α−1)}.

The following generalizes ([30], Definition 2).
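For intuition, Definition 1 can be instantiated on a finite alphabet with the counting measure as the reference measure γ. The following is a minimal numerical sketch (all function names and the example distributions are our own, not from the text):

```python
import math

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha for a finite pmf `p`, taking the
    counting measure as the reference measure gamma (toy version of
    Definition 1)."""
    if alpha == 1.0:  # Shannon limit
        return -sum(x * math.log(x) for x in p if x > 0)
    s = sum(x ** alpha for x in p if x > 0)
    return math.log(s) / (1.0 - alpha)

def renyi_probability(p, alpha):
    """Ren_alpha(X) = exp(-h_alpha(X)) = ||f||_alpha^(alpha/(alpha-1))."""
    return math.exp(-renyi_entropy(p, alpha))

# For the uniform distribution on n points, h_alpha = log n at every order.
uniform = [0.25] * 4
h2 = renyi_entropy(uniform, 2.0)
```

For a fixed distribution, this quantity is nonincreasing in the order, consistent with the monotonicity results proved later.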

Definition 2.
Let α ∈ (0, 1) ∪ (1, ∞). We define the conditional Rényi entropy h^γ_α(X|Y) in terms of a weighted mean of the conditional Rényi probabilities Ren^γ_α(X|Y = y):

h^γ_α(X|Y) = −log Ren^γ_α(X|Y), where Ren^γ_α(X|Y) = ( E_{y∼P_Y} [ Ren^γ_α(X|Y = y)^{(α−1)/α} ] )^{α/(α−1)}.

We can re-write Ren^γ_α(X|Y) as

Ren^γ_α(X|Y) = ( E_{y∼P_Y} ‖F(·, y)/g(y)‖_α )^{α/(α−1)},

which is the expected L_α(S, γ) norm of the conditional density under the measure P_Y, raised to a power which is the Hölder conjugate of α. Using Fubini's theorem, the formula for Ren^γ_α(X|Y) can be further written down only in terms of the joint density:

Ren^γ_α(X|Y) = ( ∫_T ( ∫_S F(x, y)^α dγ(x) )^{1/α} dη(y) )^{α/(α−1)}.

Remark 1. If P_{X|Y=y}, for each y ∈ supp(g), denotes the conditional distribution of X given Y = y, i.e., the probability measure on S with density F(x, y)/g(y) with respect to γ, then the conditional Rényi entropy can be written as

h^γ_α(X|Y) = (α/(1−α)) log E_{y∼P_Y} [ exp( ((α−1)/α) D_α(P_{X|Y=y} ‖ γ) ) ],

where D_α(· ‖ ·) denotes Rényi divergence (see Definition 3).
When X and Y are independent random variables, one can easily check that Ren^γ_α(X|Y) = Ren^γ_α(X), and hence h^γ_α(X|Y) = h^γ_α(X). Since the independence of X and Y means that all the conditionals P_{X|Y=y} are equal to P_X, this can also be verified from the expression in Remark 1. The converse is also true, i.e., h^γ_α(X|Y) = h^γ_α(X) implies the independence of X and Y, provided α ≠ 0, ∞. This is noted later in Corollary 2.
Clearly, unlike conditional Shannon entropy, the conditional Rényi entropy is not the average Rényi entropy of the conditional distributions. The average Rényi entropy of the conditional distribution, h̄^γ_α(X|Y) := E_Y h^γ_α(X|Y = y), has been proposed as a candidate for conditional Rényi entropy; however, it does not satisfy some properties one would expect such a notion to satisfy, like monotonicity (see [30]). When α ≥ 1, it follows from Jensen's inequality that h^γ_α(X|Y) ≤ h̄^γ_α(X|Y), while the inequality is reversed when 0 < α < 1.
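The relation between h^γ_α(X|Y) and the averaged quantity h̄^γ_α(X|Y) can be checked numerically on finite alphabets with counting reference measures. The following sketch (our own helper functions and an arbitrary toy joint distribution) illustrates both Jensen directions:

```python
import math

def renyi_entropy(p, a):
    return math.log(sum(v ** a for v in p if v > 0)) / (1.0 - a)

def arimoto_cond(joint, a):
    # h_a(X|Y) = (a/(1-a)) log sum_y ( sum_x F(x,y)^a )^(1/a), counting references
    ys = {y for (_, y) in joint}
    t = sum(sum(p ** a for (x, yy), p in joint.items() if yy == y and p > 0) ** (1.0 / a)
            for y in ys)
    return (a / (1.0 - a)) * math.log(t)

def averaged_cond(joint, a):
    # E_Y[ h_a(X | Y=y) ], the "average of conditional entropies" notion
    ys = {y for (_, y) in joint}
    out = 0.0
    for y in ys:
        g = sum(p for (x, yy), p in joint.items() if yy == y)
        out += g * renyi_entropy([p / g for (x, yy), p in joint.items() if yy == y], a)
    return out

joint = {(0, 0): 0.5, (1, 0): 0.1, (0, 1): 0.1, (1, 1): 0.3}
```

For α ≥ 1 the Arimoto quantity lies below the averaged one, and the inequality reverses for 0 < α < 1.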

Relation to Rényi Divergence
We continue to consider an S-valued random variable X and a T-valued random variable Y with a given joint distribution P_{X,Y} with density F with respect to γ ⊗ η. Densities, etc., are with respect to the fixed reference measures on the state spaces, unless mentioned otherwise.
Definition 3. Let µ be a Borel probability measure with density p on a Polish space Ω and let ν be a Borel measure with density q on the same space, both with respect to a common reference measure γ. Then, for α ∈ (0, 1) ∪ (1, ∞), the Rényi divergence of order α between the measures µ and ν is defined as

D_α(µ ‖ ν) = (1/(α−1)) log ∫_Ω p^α q^{1−α} dγ.

For the orders 0, 1, ∞, the Rényi divergence is defined by the respective limits.
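On a finite alphabet with the counting reference measure, the definition reads D_α(p ‖ q) = (1/(α−1)) log Σ_i p_i^α q_i^{1−α}. A toy sketch (our own helper, assuming supp(p) ⊆ supp(q) and α ≠ 1):

```python
import math

def renyi_divergence(p, q, alpha):
    """D_alpha(p || q) for finite pmfs with the counting reference measure
    (toy version of Definition 3; assumes supp(p) is contained in supp(q))."""
    s = sum((pi ** alpha) * (qi ** (1.0 - alpha)) for pi, qi in zip(p, q) if pi > 0)
    return math.log(s) / (alpha - 1.0)

p = [0.6, 0.3, 0.1]
q = [1 / 3] * 3
```

The divergence vanishes when the arguments coincide, is non-negative for probability measures, and is nondecreasing in the order, properties used repeatedly below.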

Remark 2. These definitions are independent of the reference measure γ.
The conditional Rényi entropy can be written in terms of the Rényi divergence from the joint distribution using a generalized Sibson identity that we learnt from B. Nakiboglu [43] (also see [36], and [38], where this identity for α = 1 appears to originate). The proof for abstract alphabets presented here is also due to B. Nakiboglu [43], and simplifies our original proof [41] of the second formula below.

Theorem 1. Let X, Y be random variables taking values in the spaces S, T respectively. We assume they are jointly distributed with density F with respect to the product reference measure γ ⊗ η. For α ∈ (0, ∞) and any probability measure λ absolutely continuous with respect to η, we have

D_α(P_{X,Y} ‖ γ ⊗ λ) = D_α(Q ‖ λ) − h^γ_α(X|Y), and hence h^γ_α(X|Y) = −min_λ D_α(P_{X,Y} ‖ γ ⊗ λ) = −D_α(P_{X,Y} ‖ γ ⊗ Q),

where Q is the probability measure on T with density q(y) = Φ(y)/∫_T Φ dη with respect to η, and Φ(y) := ( ∫_S F(x, y)^α dγ(x) )^{1/α}.

Proof. Suppose λ has density h with respect to η. Then γ ⊗ λ has density h(y) with respect to dγ(x) dη(y). Now, for α ≠ 1,

D_α(P_{X,Y} ‖ γ ⊗ λ) = (1/(α−1)) log ∫_T ∫_S F(x, y)^α h(y)^{1−α} dγ(x) dη(y)
= (1/(α−1)) log ∫_T Φ(y)^α h(y)^{1−α} dη(y)
= (α/(α−1)) log ∫_T Φ dη + (1/(α−1)) log ∫_T q(y)^α h(y)^{1−α} dη(y)
= −h^γ_α(X|Y) + D_α(Q ‖ λ),

where the last step uses the Fubini expression for Ren^γ_α(X|Y) from Definition 2. Since D_α(Q ‖ λ) ≥ 0, with equality if and only if Q = λ, both identities follow. The case α = 1 is straightforward and well-known, and the optimal Q in this case is the distribution of Y.

Remark 4. The identities above and the measure Q are independent of the reference measure η; η is only used to write out the Rényi divergence concretely in terms of densities.

Special Orders
We will now look at some basic properties of the conditional Rényi entropy we have defined above. First we see that the conditional Rényi entropy is consistent with the notion of conditional Shannon entropy of X given Y defined by

h^γ(X|Y) := −∫_T ∫_S F(x, y) log( F(x, y)/g(y) ) dγ(x) dη(y).

Proposition 1. lim_{α→1} h^γ_α(X|Y) = h^γ(X|Y).
Proof. We will use the formula in Theorem 1. By the monotonicity in the order α of h^γ_α(X|Y), all the limits lim_{α→1+}, lim_{α→1−} and lim_{α→0} = lim_{α→0+} exist. Furthermore, for every probability measure λ,

lim_{α→1} D_α(P_{X,Y} ‖ γ ⊗ λ) = D(P_{X,Y} ‖ γ ⊗ λ).

Minimizing over λ and negating both sides then identifies the limit of h^γ_α(X|Y) with h^γ(X|Y) = −min_λ D(P_{X,Y} ‖ γ ⊗ λ). In more detail: suppose α ≥ 1; then, by the nondecreasingness of the Rényi divergence in the order, for every λ we have

D_α(P_{X,Y} ‖ γ ⊗ λ) ≥ D(P_{X,Y} ‖ γ ⊗ λ),

and so, by minimization over λ and negation, we obtain h^γ_α(X|Y) ≤ h^γ(X|Y). This shows that lim_{α→1+} h^γ_α(X|Y) ≤ h^γ(X|Y); the matching bound for α < 1 follows similarly, with the inequalities reversed.

We can extend our definition of the Rényi probability of order α to α = 0 by taking limits, thereby obtaining

Ren^γ_0(X) = lim_{α→0+} Ren^γ_α(X) = γ(supp(f))^{−1}, i.e., h^γ_0(X) = log γ(supp(f)).

In the next proposition we define the conditional Rényi entropy of order 0 and record a consequence.
Proposition 2. The limit h^γ_0(X|Y) := lim_{α→0+} h^γ_α(X|Y) exists and satisfies h^γ_0(X|Y) = −min_λ D_0(P_{X,Y} ‖ γ ⊗ λ); moreover, h^γ_α(X|Y) ≤ h^γ_0(X|Y) for every α > 0.

Proof. We will again use the formula from Theorem 1 in this proof. Just as in the last proof, for every probability measure λ,

lim_{α→0+} D_α(P_{X,Y} ‖ γ ⊗ λ) = D_0(P_{X,Y} ‖ γ ⊗ λ).

Minimizing over λ and negating both sides yields the stated expression for h^γ_0(X|Y). Moreover, for α ≥ 0, by the nondecreasingness of the Rényi divergence in the order, for every λ we have

D_α(P_{X,Y} ‖ γ ⊗ λ) ≥ D_0(P_{X,Y} ‖ γ ⊗ λ),

and so, by minimization over λ and negation, we obtain h^γ_α(X|Y) ≤ h^γ_0(X|Y).

Monotonicity in Order
The (unconditional) Rényi entropy decreases with order, and the same is true of the conditional version.

Proposition 3. For random variables X and Y, h^γ_β(X|Y) ≤ h^γ_α(X|Y) whenever 0 < α ≤ β < ∞.
The proof is essentially the same as in the discrete setting, and follows from Jensen's inequality.
The first inequality in the proof follows from the fact that the unconditional Rényi entropy (respectively, Rényi probability) decreases (respectively, increases) with the order. For the second inequality, set e = α(β−1)/(β(α−1)). Note that e ≥ 1 when 1 < α ≤ β < ∞, and hence the function r ↦ r^e is convex, making the second inequality an application of Jensen's inequality in this case. When 0 < α ≤ β < 1, the exponent satisfies 0 < e ≤ 1, so the function r ↦ r^e is concave, but the outer exponent β/(β−1) is negative, which turns the (second) inequality in the desired direction.
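The monotonicity in the order is easy to check numerically on a finite alphabet with counting reference measures. The following toy sketch (our own helper and example joint distribution) evaluates h^γ_α(X|Y) on a grid of orders and confirms that it is nonincreasing:

```python
import math

def arimoto_cond(joint, a):
    """Toy Arimoto conditional Rényi entropy for a finite joint pmf
    {(x, y): p}, with counting reference measures."""
    ys = {y for (_, y) in joint}
    t = 0.0
    for y in ys:
        inner = sum(p ** a for (x, yy), p in joint.items() if yy == y and p > 0)
        t += inner ** (1.0 / a)
    return (a / (1.0 - a)) * math.log(t)

joint = {(0, 0): 0.5, (1, 0): 0.1, (0, 1): 0.1, (1, 1): 0.3}
orders = [0.3, 0.5, 0.9, 1.5, 2.0, 4.0]
values = [arimoto_cond(joint, a) for a in orders]
```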

Conditioning Reduces Rényi Entropy
As is the case for Shannon entropy, we find that the conditional Rényi entropy obeys monotonicity too; the proof of the theorem below adapts the approach for the discrete case taken in [30] by using Minkowski's integral inequality.

Theorem 2.
[Monotonicity] Let α ∈ [0, ∞], and let X be an S-valued and Y, Z be T-valued random variables. Then

h^γ_α(X|Y, Z) ≤ h^γ_α(X|Z).

Proof. We begin by proving the result for an empty Z. First, we deal with the case 1 < α < ∞. In terms of Rényi probabilities, we must show that conditioning increases Rényi probability, i.e., Ren^γ_α(X|Y) ≥ Ren^γ_α(X). Indeed,

Ren^γ_α(X)^{(α−1)/α} = ( ∫_S ( ∫_T F(x, y) dη(y) )^α dγ(x) )^{1/α} ≤ ∫_T ( ∫_S F(x, y)^α dγ(x) )^{1/α} dη(y) = Ren^γ_α(X|Y)^{(α−1)/α}.   (1)

The inequality above is a direct application of Minkowski's integral inequality ([44], Theorem 2.4), which generalizes the summation in the standard triangle inequality to integrals against a measure; raising both sides to the positive power α/(α−1) gives the claim.
For the case 0 < α < 1, we apply the triangle inequality for the exponent 1/α > 1 (to the function F^α); the fact that 1/(α−1) is now negative then flips the inequality in the desired direction.

To extend this to non-empty Z, we observe the following: if h^γ_α(X|Y, Z = z) ≤ h^γ_α(X|Z = z) for every z in the support of P_Z, then h^γ_α(X|Y, Z) ≤ h^γ_α(X|Z). In terms of Rényi probabilities, this amounts to noting that if Ren^γ_α(X|Y, Z = z) ≥ Ren^γ_α(X|Z = z) for every z ∈ supp(P_Z), then Ren^γ_α(X|Y, Z) ≥ Ren^γ_α(X|Z), since the functions r ↦ r^{(α−1)/α} and r ↦ r^{α/(α−1)} are both strictly increasing or both strictly decreasing, depending on the value of α. The claim for non-empty Z then follows from this observation, given that we have already demonstrated h^γ_α(X|Y, Z = z) ≤ h^γ_α(X|Z = z) throughout supp(P_Z). The cases α = 0, ∞ are obtained by taking limits. For α = 1 this is well-known.
When we specialize to "empty Z" (i.e., Z almost surely constant, so that the σ-field it generates is trivial and we are not conditioning on anything), we find that "conditioning reduces Rényi entropy".
Corollary 2. For α ∈ (0, ∞), h^γ_α(X|Y) ≤ h^γ_α(X), with equality iff X and Y are independent.
While the inequality in Corollary 2 follows immediately from Theorem 2, the conditions for equality follow from those for Minkowski's inequality (which is the key inequality used in the proof of Theorem 2; see, e.g., ([44], Theorem 2.4)): given the finiteness of both sides in the display (1), equality holds if and only if F(x, y) = φ(x)ψ(y) γ ⊗ η-a.e. for some functions φ and ψ. In our case, this means that equality holds in h^γ_α(X|Y) ≤ h^γ_α(X) if and only if X and Y are independent (α ∈ (0, 1) ∪ (1, ∞)). The corresponding statement for α = 1 is well-known. Since h̄^γ_α(X|Y) ≤ h^γ_α(X|Y) when 0 < α < 1, as noted in Section 2, we have "conditioning reduces Rényi entropy" for the averaged quantity h̄^γ_α as well in this case.
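Corollary 2 can be checked numerically on a finite alphabet with counting reference measures (a toy sketch; helper names and the example joint distribution are our own):

```python
import math

def renyi_entropy(p, a):
    return math.log(sum(v ** a for v in p if v > 0)) / (1.0 - a)

def arimoto_cond(joint, a):
    # h_a(X|Y) = (a/(1-a)) log sum_y ( sum_x F(x,y)^a )^(1/a)
    ys = {y for (_, y) in joint}
    t = sum(sum(p ** a for (x, yy), p in joint.items() if yy == y and p > 0) ** (1.0 / a)
            for y in ys)
    return (a / (1.0 - a)) * math.log(t)

joint = {(0, 0): 0.5, (1, 0): 0.1, (0, 1): 0.1, (1, 1): 0.3}
pX = [0.6, 0.4]  # marginal of X under `joint`

# conditioning reduces Rényi entropy at several orders
checks = [(arimoto_cond(joint, a), renyi_entropy(pX, a)) for a in (0.5, 2.0, 3.0)]
```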

Remark 5. The corresponding statement for the averaged quantity h̄^γ_α is not true for large values of α. For a counter-example, see ([28], Theorem 7).
From the special case when Y is a discrete random variable taking finitely many values y_i with probability p_i, 1 ≤ i ≤ n, and the conditional density of X given Y = y_i is f_i(x), we obtain the concavity of Rényi entropy for orders below 1, which is already known in the literature: for 0 < α < 1,

h^γ_α( Σ_i p_i f_i ) ≥ Σ_i p_i h^γ_α( f_i ),

where h^γ_α(f) denotes the Rényi entropy of the distribution with density f.

A Chain Rule
In this subsection we deduce a version of the chain rule from Theorem 1. For the discrete case, this has been done by Fehr and Berens in ([30], Theorem 3). If η is a probability measure,

h^γ_α(X|Y) ≥ h^{γ⊗η}_α(X, Y).

If we relax the condition on η to be a measure under which P_Y is absolutely continuous and supported on a set of finite measure, we obtain h^γ_α(X|Y) ≥ h^{γ⊗η}_α(X, Y) − h^η_0(Y). Since this inequality trivially holds true when Y is supported on a set of infinite η-measure, we have proved the following inequality that (although weaker, being an inequality rather than an identity) is reminiscent of the chain rule for Shannon entropy.

Proposition 4. For α ∈ (0, 1) ∪ (1, ∞),

h^γ_α(X|Y) ≥ h^{γ⊗η}_α(X, Y) − h^η_0(Y).

Proof. Recall that

Ren^γ_α(X|Y)^{(α−1)/α} = ∫_{supp(P_Y)} ( ∫_S F(x, y)^α dγ(x) )^{1/α} dη(y),

where the outer integral can be restricted to the support of P_Y, which we will keep emphasizing in the first few steps. By Jensen's inequality, when α > 1 (so that r ↦ r^{1/α} is concave and η is a probability measure),

∫_{supp(P_Y)} ( ∫_S F(x, y)^α dγ(x) )^{1/α} dη(y) ≤ ( ∫_{supp(P_Y)} ∫_S F(x, y)^α dγ(x) dη(y) )^{1/α},

and raising both sides to the power α/(α−1) > 0 yields

Ren^γ_α(X|Y) ≤ ( ∫_T ∫_S F(x, y)^α dγ(x) dη(y) )^{1/(α−1)} = Ren^{γ⊗η}_α(X, Y).

Note that the above calculation also holds when α ∈ (0, 1): even though Jensen's inequality is flipped, because r ↦ r^{1/α} is now convex, the inequality flips again, now to the desired side, since the exponent α/(α−1) is negative. Taking logarithms and negating both sides concludes the proof.

Remark 6. These inequalities are tight. Equality is attained when X and Y are independent and Y is uniformly distributed on a set of finite measure.
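With counting reference measures on finite alphabets, h^η_0(Y) = log |supp(Y)|, and the chain-rule inequality of this subsection can be verified directly (a toy sketch with our own helpers):

```python
import math

def renyi_entropy(p, a):
    return math.log(sum(v ** a for v in p if v > 0)) / (1.0 - a)

def arimoto_cond(joint, a):
    # h_a(X|Y) with counting reference measures
    ys = {y for (_, y) in joint}
    t = sum(sum(p ** a for (x, yy), p in joint.items() if yy == y and p > 0) ** (1.0 / a)
            for y in ys)
    return (a / (1.0 - a)) * math.log(t)

joint = {(0, 0): 0.5, (1, 0): 0.1, (0, 1): 0.1, (1, 1): 0.3}
flat = list(joint.values())   # joint pmf as a flat list (reference: counting measure)
h0_Y = math.log(2)            # h_0(Y) = log |supp(Y)| for the counting measure eta

lhs = {a: arimoto_cond(joint, a) for a in (0.5, 2.0)}
rhs = {a: renyi_entropy(flat, a) - h0_Y for a in (0.5, 2.0)}
```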

Sensitivity to Reference Measure
Proof. Suppose γ has density ψ with respect to µ. Then the joint density of (X, Y) under the reference measure µ ⊗ µ becomes F(x, y)ψ(x)ψ(y) if the joint density was F(x, y) under γ ⊗ γ. The claimed inequality then follows for α ≥ 1 by a direct computation. If α ∈ (0, 1), the same inequality holds provided µ is also absolutely continuous w.r.t. γ.

Notions of α-Mutual Information
Arimoto [39] used his conditional Rényi entropy to define a mutual information that we extend to the general setting as follows.
Definition 5. Let X be an S-valued random variable and let Y be a T-valued random variable, with a given joint distribution. Then, we define Arimoto's mutual information of order α with respect to the reference measure γ by

I^{(γ)}_α(X ⇝ Y) := h^γ_α(X) − h^γ_α(X|Y).

We use the squiggly arrow to emphasize the lack of symmetry in X and Y, but nonetheless to distinguish it from the notation for directed mutual information, which is usually written with a straight arrow. By Corollary 2, for α ∈ (0, ∞) we have I^{(γ)}_α(X ⇝ Y) ≥ 0, with equality iff X and Y are independent; thus, for any choice of reference measure γ, it can be seen as a measure of dependence between X and Y.
Let us discuss a little further the validity of I^{(γ)}_α(X ⇝ Y) as a dependence measure. If the conditional distributions are denoted by P_{X|Y=y} as in Remark 1, then, using the fact that h^ν_α(Z) = −D_α(P_Z ‖ ν) for any random variable Z with distribution P_Z, we have for any α ∈ (0, 1) ∪ (1, ∞) that

I^{(γ)}_α(X ⇝ Y) = (α/(α−1)) log E_{y∼P_Y} [ exp( ((α−1)/α) D_α(P_{X|Y=y} ‖ γ) ) ] − D_α(P_X ‖ γ).

Furthermore, when α ∈ (0, 1), by ([29], Proposition 2), an analogous variational expression is available. Note that Rényi divergence is convex in the second argument (see [29], Theorem 12) when α ∈ (0, 1), and since P_X is the P_Y-mixture of the conditionals P_{X|Y=y}, the last equation suggests that Arimoto's mutual information can be seen as a quantification of this convexity gap.
One can also see clearly from the above expressions why this quantity controls, at least for α ∈ (0, 1), the dependence between X and Y: indeed, setting β = (1−α)/α > 0, one has for any α ∈ (0, 1) and any t > 0 that

P_Y( D_α(P_{X|Y} ‖ γ) ≤ D_α(P_X ‖ γ) + t ) ≤ e^{βt} e^{−β I^{(γ)}_α(X ⇝ Y)},

where the inequality comes from Markov's inequality. Thus, when I^{(γ)}_α(X ⇝ Y) is large, the probability that the conditional distributions of X given Y cluster at around the same "Rényi divergence" distance from the reference measure γ as the unconditional distribution of X (which is of course a mixture of the conditional distributions) is small, suggesting a significant "spread" of the conditional distributions and therefore strong dependence. This is illustrated in Figure 1. Thus, despite the dependence of I^{(γ)}_α(X ⇝ Y) on the reference measure γ, it does guarantee strong dependence when it is large (at least for α < 1). When α → 1− we have β → 0, and consequently the upper bound e^{βt} e^{−β I^{(γ)}_α(X ⇝ Y)} → 1, making the inequality trivial.
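For a finite toy example with the counting reference measure (so that D_α(µ ‖ γ) = −h_α(µ)), the Markov-type bound above can be checked directly; the joint distribution and all helper names below are our own:

```python
import math

ALPHA = 0.5
BETA = (1 - ALPHA) / ALPHA

joint = {(0, 0): 0.5, (1, 0): 0.1, (0, 1): 0.1, (1, 1): 0.3}

def h(p, a=ALPHA):
    return math.log(sum(v ** a for v in p if v > 0)) / (1 - a)

def d_gamma(p, a=ALPHA):
    # D_a(p || counting measure) = -h_a(p)
    return -h(p, a)

ys = sorted({y for (_, y) in joint})
gY = {y: sum(p for (x, yy), p in joint.items() if yy == y) for y in ys}
cond = {y: [p / gY[y] for (x, yy), p in joint.items() if yy == y] for y in ys}
pX = [sum(p for (x, _), p in joint.items() if x == xv) for xv in (0, 1)]

# Arimoto's mutual information, with h_a(X|Y) written via divergences as in Remark 1
tot = sum(gY[y] * math.exp(((ALPHA - 1) / ALPHA) * d_gamma(cond[y])) for y in ys)
I_mut = h(pX) - (ALPHA / (1 - ALPHA)) * math.log(tot)

thr = 0.01
lhs = sum(gY[y] for y in ys if d_gamma(cond[y]) <= d_gamma(pX) + thr)
rhs = math.exp(BETA * thr) * math.exp(-BETA * I_mut)
```

For this joint distribution the bound is non-vacuous (the right-hand side is below 1) and, indeed, no conditional lands within the ball.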
Figure 1. How a large value of I^{(γ)}_α(X ⇝ Y), for a fixed α < 1, demonstrates strong dependence between X and Y: the space depicted is the space of probability measures on S, including γ, P_X, and the red dots representing the conditional distributions of X given the various values taken by Y in T. The D_α-balls around γ are represented by ellipses to emphasize that the geometry is non-Euclidean and, in fact, non-metric. When I^{(γ)}_α(X ⇝ Y) is large, there is a significant probability that Y takes values such that the corresponding conditional distributions of X lie outside the larger D_α-ball, of radius D_α(P_X ‖ γ) + t, and therefore far from the (unconditional) distribution P_X of X.

Definition 6. Let X be an S-valued and Y a T-valued random variable with a given joint distribution P_{X,Y}; all minimizations below are over probability measures.

1. The Lapidoth-Pfister α-mutual information is defined as

J_α(X; Y) := min_{µ ∈ P(S), ν ∈ P(T)} D_α(P_{X,Y} ‖ µ ⊗ ν).

2. The Augustin-Csiszár α-mutual information is defined as

K_α(X ⇝ Y) := min_{µ ∈ P(T)} ∫_S D_α(P_{Y|X=x} ‖ µ) dP_X(x).

3. Sibson's α-mutual information is defined as

I_α(X ⇝ Y) := min_{µ ∈ P(T)} D_α(P_{X,Y} ‖ P_X ⊗ µ).

The quantity J_α was recently introduced by Lapidoth and Pfister as a measure of independence in [45] (cf. [25,27,32]). The Augustin-Csiszár mutual information was originally introduced in [40] by Udo Augustin with a slightly different parametrization, and gained much popularity following Csiszár's work in [21]. For a discussion of early work on this quantity and applications, also see [42] and the references therein. Both [40] and [42] treat abstract alphabets; however, the former is limited to α ∈ (0, 1) while the latter treats all α ∈ (0, ∞). Sibson's definition originates in [38], where he introduces I_α in the form of an information radius (see, e.g., [33]), which is often written in terms of Gallager's function (from [46]). Since all the quantities in the above definition are stated in terms of Rényi divergences not involving the reference measure γ, they themselves are independent of the reference measure. Their relationship with the Rényi divergence also shows that all of them are non-negative. Moreover, putting µ = P_X, ν = P_Y in the expression for J_α, and µ = P_Y in the expressions for K_α and I_α, when X, Y are independent shows that they all vanish under independence.
While these notions of mutual information are certainly not equal to I^{(γ)}_α in general when α ≠ 1, they do have a direct relationship with conditional Rényi entropies, obtained by varying the reference measure. Since h^ν_α(Z) = −D_α(P_Z ‖ ν) for any random variable Z and probability measure ν, and in light of Theorem 1, where all optimizations are done over probability measures, we can write Lapidoth and Pfister's mutual information as

J_α(X; Y) = −max_{µ ∈ P(S)} h^µ_α(X|Y).

Note that it is symmetric by definition: J_α(X; Y) = J_α(Y; X), which is why we do not use squiggly arrows to denote it. By writing down Rényi divergence as Rényi entropy w.r.t. a reference measure, Augustin-Csiszár's K_α can be recast in a similar form, this time using the average Rényi entropy of the conditionals instead of Arimoto's conditional Rényi entropy:

K_α(X ⇝ Y) = −max_{µ ∈ P(T)} h̄^µ_α(Y|X).

In light of Theorem 1, Sibson's mutual information can clearly be written in terms of conditional Rényi entropy as

I_α(X ⇝ Y) = −h^{P_X}_α(X|Y).

This leads to the observation that Sibson's mutual information can be seen as a special case of Arimoto's mutual information, when the reference measure is taken to be the distribution of X:

I_α(X ⇝ Y) = I^{(P_X)}_α(X ⇝ Y),

since h^{P_X}_α(X) = −D_α(P_X ‖ P_X) = 0. (Compare with the corresponding expression for I^{(γ)}_α(X ⇝ Y) given in the previous section.) The following inequality, which relates the three families when α ≥ 1, turns out to be quite fruitful.

Theorem 3. For α ≥ 1,

K_α(X ⇝ Y) ≤ J_α(X; Y) ≤ I_α(X ⇝ Y).
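For α ≥ 1, the three families are ordered as Augustin-Csiszár ≤ Lapidoth-Pfister ≤ Sibson (the inequality referred to above). On a toy binary channel this can be verified by brute force; the sketch below uses our own helper functions and an arbitrary example channel, with approximate grid minimization for the quantities lacking a closed form:

```python
import math
from itertools import product

ALPHA = 2.0

def renyi_div(p, q, a=ALPHA):
    """D_a(p || q) for finite pmfs (counting reference); assumes supp(p) ⊆ supp(q)."""
    return math.log(sum(pi ** a * qi ** (1 - a) for pi, qi in zip(p, q) if pi > 0)) / (a - 1)

# toy binary channel W and input distribution P (our own example)
P = [0.6, 0.4]
W = [[0.8, 0.2], [0.3, 0.7]]
joint = [P[x] * W[x][y] for x in range(2) for y in range(2)]

def grid(n):
    """Probability vectors (q, 1-q) on a grid, endpoints excluded."""
    return [[i / n, 1 - i / n] for i in range(1, n)]

def sibson():
    # closed form: I_a = (a/(a-1)) log sum_y ( sum_x P(x) W(y|x)^a )^(1/a)
    t = sum(sum(P[x] * W[x][y] ** ALPHA for x in range(2)) ** (1 / ALPHA) for y in range(2))
    return (ALPHA / (ALPHA - 1)) * math.log(t)

def csiszar():
    # K_a = min_mu E_X[ D_a(W(X) || mu) ], minimized by grid search
    return min(sum(P[x] * renyi_div(W[x], mu) for x in range(2)) for mu in grid(2000))

def lapidoth_pfister():
    # J_a = min_{mu, nu} D_a(joint || mu x nu), minimized by a coarse grid search
    return min(renyi_div(joint, [m[x] * n[y] for x in range(2) for y in range(2)])
               for m, n in product(grid(200), grid(200)))

K, J, I = csiszar(), lapidoth_pfister(), sibson()
```

The tolerances in the checks below account for the grid resolution.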
Theorem 3 allows us to extend ( [31], Theorem 5) to include the capacity based on the Lapidoth-Pfister mutual information.
Theorem 4. Let α ≥ 1, and fix a channel W from X to Y. Then,

C^{Y⇝X}_{K,α}(W) ≤ C_{J,α}(W) ≤ C^{Y⇝X}_{I,α}(W), and C_{J,α}(W) = S_α(W),

where C_{J,α}(W) := sup_P J_α(P, W) denotes the capacity based on the Lapidoth-Pfister mutual information and S_α(W) denotes the Rényi radius of order α.

Proof. Theorem 3 implies that

C^{X⇝Y}_{K,α}(W) ≤ C_{J,α}(W) ≤ C^{X⇝Y}_{I,α}(W).

It was shown by Csiszár [21] in the finite alphabet setting (in fact, he showed this for all α > 0) that C^{X⇝Y}_{I,α}(W) = C^{X⇝Y}_{K,α}(W). Nakiboglu demonstrates C^{X⇝Y}_{I,α}(W) = S_α(W) in [36] and C^{X⇝Y}_{K,α}(W) = S_α(W) in [42] for abstract alphabets. Putting all this together, we have C_{J,α}(W) = S_α(W). Finally, using the symmetry of J_α and Theorem 3 in a similar fashion again, we get

C^{Y⇝X}_{K,α}(W) ≤ C_{J,α}(W) ≤ C^{Y⇝X}_{I,α}(W).

This completes the proof.
The two inequalities in the last theorem cannot be improved to equalities; this follows from a counter-example communicated to the authors by C. Pfister. Note that this theorem corrects Theorem V.1 in [41].
Since J_α(X; Y) is no longer sandwiched between K_α(X ⇝ Y) and I_α(X ⇝ Y) when α ∈ (0, 1), the same argument cannot be used to deduce the equality of the various capacities in this case. However, when α ∈ [1/2, 1), a direct argument shows that the Lapidoth-Pfister capacity of a channel equals the Rényi radius when the state spaces are finite.

Theorem 5. Let α ∈ [1/2, 1), and fix a channel W from X to Y, where X and Y take values in finite sets S and T respectively. Then,

sup_{P ∈ P(S)} J_α(P, W) = min_{µ ∈ P(T)} sup_{x ∈ S} D_α(W(x) ‖ µ),

the right-hand side being the Rényi radius of W.

Proof. We continue using integral notation instead of summation. Note that

J_α(P, W) = min_{µ ∈ P(T)} β log ∫_S e^{(1/β) D_α(W(x) ‖ µ)} dP(x),

where β = α/(α−1). We consider the function f(P, µ) = β log ∫_S e^{(1/β) D_α(W(x) ‖ µ)} dP(x) defined on P(S) × P(T). Observe that the function g(P, µ) = −e^{(1/β) f(P, µ)} = −∫_S e^{(1/β) D_α(W(x) ‖ µ)} dP(x) has the same minimax properties as f, since β < 0 and t ↦ −e^{t/β} is increasing. We make the following observations about this function.
• g is linear in P.
• g is convex in µ; this follows from the proof in ([27], Lemma 17).
• g is continuous in each of the variables P and µ: continuity in µ follows from the continuity of D_α in the second argument (see, for example, [29]), whereas continuity in P is a consequence of the linearity of the integral (here, a finite sum).
The above observations ensure that we can apply von Neumann's convex minimax theorem to g, and therefore to f, to conclude that

sup_P J_α(P, W) = sup_P min_µ f(P, µ) = min_µ sup_P f(P, µ).
For a fixed µ, however,

sup_P f(P, µ) = sup_P β log ∫_S e^{(1/β) D_α(W(x) ‖ µ)} dP(x) = sup_{x ∈ S} D_α(W(x) ‖ µ)

(the right-hand side clearly dominates the left-hand side; for the other direction, use the measures P = δ_{x_n}, where (x_n) is a supremum-achieving sequence for the right-hand side). This shows that when α ≥ 1/2 the capacity coming from J_α(X; Y) equals the Rényi radius if the state spaces are finite.
Though we do not treat capacities coming from Arimoto's mutual information in this paper, due to its dependence on a reference measure, a remark can be made in this regard following B. Nakiboglu's [43] observation that Arimoto's mutual information w.r.t. γ of a joint distribution (X, Y) can be written as a Sibson mutual information of some input probability measure P̃ and the channel W from X to Y corresponding to P_{X,Y}. Let X, Y denote the marginals of (X, Y). As before, let γ denote the reference measure on the state space S of X. Let P̃ denote the probability measure on S with density

dP̃/dγ = e^{(1−α) D_α(P_X ‖ γ)} (dP_X/dγ)^α.

Then a calculation shows that

I^{(γ)}_α(X ⇝ Y) = I_α(P̃, W),

the Sibson mutual information of the input P̃ over the channel W. Therefore, it follows that if a reference measure γ is fixed, then the capacity of order α of a channel W calculated from Arimoto's mutual information will be at most the capacity based on Sibson's mutual information (which equals the Rényi radius of W).
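In the discrete case with the counting reference measure, this observation can be verified numerically: Arimoto's mutual information coincides with Sibson's mutual information evaluated at the escort input (a toy sketch; the example channel and helper names are our own):

```python
import math

ALPHA = 2.0

P = [0.6, 0.4]                          # distribution of X
W = [[0.8, 0.2], [0.3, 0.7]]            # channel from X to Y
F = [[P[x] * W[x][y] for y in range(2)] for x in range(2)]  # joint pmf

def renyi_entropy(p, a):
    return math.log(sum(v ** a for v in p if v > 0)) / (1 - a)

def arimoto_cond(FJ, a):
    # h_a(X|Y) = (a/(1-a)) log sum_y ( sum_x F(x,y)^a )^(1/a)
    t = sum(sum(FJ[x][y] ** a for x in range(2)) ** (1 / a) for y in range(2))
    return (a / (1 - a)) * math.log(t)

def sibson(Pin, a):
    # Sibson mutual information of input Pin over the channel W (closed form)
    t = sum(sum(Pin[x] * W[x][y] ** a for x in range(2)) ** (1 / a) for y in range(2))
    return (a / (a - 1)) * math.log(t)

# Arimoto's mutual information w.r.t. the counting reference measure
I_arimoto = renyi_entropy(P, ALPHA) - arimoto_cond(F, ALPHA)

# escort input P~ with density proportional to (dP_X/dgamma)^alpha
Z = sum(p ** ALPHA for p in P)
P_escort = [p ** ALPHA / Z for p in P]
I_sibson_escort = sibson(P_escort, ALPHA)
```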