Additivity of entropic uncertainty relations

We consider the uncertainty between two pairs of local projective measurements performed on a multipartite system. We show that the optimal bound in any linear uncertainty relation, formulated in terms of the Shannon entropy, is additive. This directly implies, against naive intuition, that the minimal entropic uncertainty can always be realized by fully separable states. Hence, in contradiction to proposals by other authors, no entanglement witness can be constructed solely by comparing the attainable uncertainties of entangled and separable states. However, our result gives rise to a huge simplification for computing global uncertainty bounds as they now can be deduced from local ones. Furthermore, we provide the natural generalization of the Maassen and Uffink inequality for linear uncertainty relations with arbitrary positive coefficients.


Introduction
Uncertainty and entanglement are doubtless two of the most prominent and drastic properties that set quantum physics apart from a classical view of the world. Their interplay contains a rich structure, which is neither sufficiently understood nor fully explored. In this work, we reveal a new aspect of this structure: the additivity of entropic uncertainty relations.
For product measurements in a multipartition, we show that the optimal bound c_ABC... in a linear uncertainty relation satisfies c_ABC... = c_A + c_B + c_C + ..., where c_A, c_B, c_C, ... are bounds that depend only on the local measurements. This result implies that minimal uncertainty for product measurements can always be realized by uncorrelated states. Hence, we have an example of a task which is not improved by the use of entanglement. We will quantify the uncertainty of a measurement by the Shannon entropy of its outcome distribution. For this case, the corresponding linear uncertainty bound c_ABC... gives the central estimate in many applications, such as: entropic steering witnesses [1][2][3][4], uncertainty relations with side-information [5], some security proofs [6] and many more.
When speaking about uncertainty, we consider so-called preparation uncertainty relations [7][8][9][10][11][12][13][14]. From an operational point of view, a preparation uncertainty describes fundamental limitations, i.e. a trade-off, on the certainty of predicting outcomes of several measurements that are performed on instances of the same state. This should not be confused [15] with its operational counterpart named measurement uncertainty [16][17][18][19][20]. A measurement uncertainty relation describes the ability to produce a measurement device which approximates several incompatible measurement devices in one shot.
The calculations in this work focus on uncertainty relations in a bipartite setting. However, all results can easily be generalized to a multipartite setting by iterating the statements on bipartitions. The basic measurement setting, which we consider for bipartitions, is depicted in Fig. 1. We consider a pair of measurements, X_AB = X_A ⊗ X_B and Y_AB = Y_A ⊗ Y_B, to which we will refer as the global measurements of (tensor) product form. Each of these global measurements of product form is implemented by applying local measurements at the respective sides of a bipartition between parties denoted by A and B. Hereby, the variables X_A, X_B and Y_A, Y_B refer to the local measurements applied to the respective sides.
We only consider projective measurements, but besides this we impose no further restrictions on the individual measurements. So the only property that measurements like X_A and X_B have to share is the common label 'X'; apart from that, they may be noncommuting or even defined on Hilbert spaces of different dimensions.
The main result of this work is stated in Prop. 1 in Sec. 3. In that section, we also collect some remarks on possible and impossible generalizations and on the construction of entanglement witnesses. The proof of Prop. 1 is placed at the end of this paper, as it relies on two basic theorems stated in Sec. 4 and Sec. 5.
Thm. 1, in Sec. 4, clarifies and expands the known connection between the logarithm of (p, q)-norms and entropic uncertainty relations. As a special case of this theorem we obtain Lem. 1, which states the natural generalization of the well-known Maassen and Uffink bound [21] to weighted uncertainty relations. Thm. 2, in Sec. 5, states that (p, q)-norms, in a certain parameter range, are multiplicative, which ultimately leads to the desired statement on the additivity of uncertainty relations.
Before stating the main result, we collect, in Sec. 1, some general observations on the behavior of uncertainty relations for product measurements with respect to different classes of correlated states. Furthermore, in Sec. 2, we motivate and explain the explicit form of linear uncertainty relations used in this work.

Uncertainty in bipartitions
All uncertainty relations considered in this paper are state-independent. In practice, finding a state-independent relation leads to the problem of jointly minimizing a tuple of given uncertainty measures, here the Shannon entropies of X_AB and Y_AB, over all states. This minimum, or a lower bound on it, then gives the aforementioned trade-off, which allows one to formulate statements like: "whenever the uncertainty of X_AB is small, the uncertainty of Y_AB has to be bigger than some state-independent constant".
Considering the measured state, ρ_AB, it is natural to distinguish between three classes: uncorrelated, classically correlated and non-classically correlated. With regard to the uncertainty in a corresponding global measurement, states in these classes share some common features: If the measured state is uncorrelated, i.e. a product state ρ_AB = ρ_A ⊗ ρ_B, the outcomes of the local measurements are uncorrelated as well. Hence, the uncertainty of a global measurement is completely determined by the uncertainty of the local measurements on the respective local states ρ_A and ρ_B. Moreover, in our case, the additivity of the Shannon entropy tells us that the uncertainty of a global measurement is simply the sum of the uncertainties of the local ones. In the same way, any trade-off on the global uncertainties can be deduced from local ones.
If the measured state is classically correlated, i.e. a convex combination of product states [22], the additivity of local uncertainties no longer holds. More generally, whenever we consider a concave uncertainty measure [23], like the Shannon entropy, the global uncertainty of a single global measurement is smaller than the sum of the local uncertainties. Intuitively this makes sense, because a correlation allows one to deduce information on the potential measurement outcomes of one side given a particular measurement outcome on the other. However, a linear uncertainty relation for a pair of global measurements is not affected by this, i.e. a trade-off will again be saturated by product states. This is because the uncertainty relation between two measurements, restricted to some convex set of states, will always be attained on an extreme point of this set.
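The subadditivity for a single measurement on a classically correlated state can be illustrated numerically; the following sketch (Python with numpy assumed, the mixture is a hypothetical example choice) uses an equal mixture of |00⟩⟨00| and |11⟩⟨11| measured in the computational basis on both sides:

```python
import numpy as np

def shannon(p):
    """Shannon entropy in bits, ignoring zero entries."""
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

# Classically correlated two-qubit state: equal mixture of |00><00| and |11><11|.
# Measuring Z on each side yields perfectly correlated outcomes.
p_joint = np.array([0.5, 0.0, 0.0, 0.5])   # outcomes 00, 01, 10, 11
p_A = np.array([0.5, 0.5])                 # marginal distribution on A
p_B = np.array([0.5, 0.5])                 # marginal distribution on B

H_global = shannon(p_joint)                # 1 bit: correlations reduce uncertainty
H_locals = shannon(p_A) + shannon(p_B)     # 2 bit: sum of local uncertainties
assert H_global < H_locals
```

The global entropy (1 bit) falls strictly below the sum of the local entropies (2 bit), exactly as the concavity argument predicts.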
However, if measurements are applied to an entangled state, more precisely to a state which shows EPR-steering [24][25][26] with respect to the measurements X_AB and Y_AB, it is in general not clear how a trade-off between global uncertainties relates to the corresponding trade-off between local ones. Keep in mind that steering implies the absence of any local state model, which is usually proven by showing that any such model would violate a local uncertainty relation.
In principle one would expect to obtain smaller uncertainty bounds by also considering entangled states, and there are many entanglement witnesses known based on this idea (see also Rem. 3 in the following section).

Linear uncertainty relations
We note that there are many uncertainty measures, most prominently variances [8,10]. Variance, and similarly constructed measures [17,27], describe the deviation from a mean value, which clearly demands assigning a metric structure to the set of measurement outcomes. From a physicist's perspective this makes sense in many situations [11], but it can also cause strange behaviours in situations where this metric structure has to be imposed artificially [28]. However, from the perspective of information theory, this seems to be an unnecessary dependency. Especially when uncertainties with respect to multipartitions are considered, it is not clear at all how such a metric should be constructed. Hence, it can be dropped, and a quantity that only depends on the probability distributions of measurement outcomes has to be used. We will use the Shannon entropy. It fulfills the above requirement, does not change when the labels of the measurement outcomes are permuted, and has a clear operational interpretation [29,30]. Remarkably, Claude Shannon himself used the term 'uncertainty' as an intuitive paraphrase for the quantity today known as 'entropy' [29]. Historically, the decision to call the Shannon entropy an 'entropy' goes back to a suggestion John von Neumann gave to Shannon, when he was visiting Weyl in 1940 (there are at least three known versions of this anecdote [31], the most popular being [32]).
Because we are not interested in assigning values to measurement outcomes, a measurement, say X, is sufficiently described by its POVM elements, {X_i}. So, given a state ρ, the probability of obtaining the i-th outcome is computed by tr(ρX_i). The respective probability distribution of all outcomes is denoted by the vector p^x_ρ. Within this notation, the Shannon entropy of an X measurement is given by H(X|ρ) := −Σ_i (p^x_ρ)_i log (p^x_ρ)_i. As we restrict ourselves to non-degenerate projective measurements, all necessary information on a pair of measurements, X and Y, is captured by a unitary U that links the measurement bases. We will use the convention to write U as the transformation from the {X_i}- to the {Y_i}-basis, i.e. we will take U such that Y_i = U X_i U† holds.
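These definitions can be made concrete in a few lines of Python (numpy assumed; the Hadamard transformation is a hypothetical example choice for U):

```python
import numpy as np

def shannon(p):
    """Shannon entropy in bits of a probability vector, ignoring zeros."""
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def outcome_distribution(rho, basis):
    """Probabilities tr(rho X_i) for rank-one projectors X_i = |b_i><b_i|,
    where the columns of `basis` are the measurement basis vectors."""
    return np.real(np.array([b.conj() @ rho @ b for b in basis.T]))

# X measurement in the computational basis; Y basis obtained via Y_i = U X_i U^dag
# with U the Hadamard transformation.
X_basis = np.eye(2)
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Y_basis = U @ X_basis          # columns are the Y eigenvectors

rho = np.array([[1.0, 0.0], [0.0, 0.0]])   # the pure state |0><0|

H_X = shannon(outcome_distribution(rho, X_basis))  # 0 bit: X outcome is certain
H_Y = shannon(outcome_distribution(rho, Y_basis))  # 1 bit: Y outcomes are 50/50
```

For an X eigenstate the X entropy vanishes while the Y entropy is maximal, the prototypical uncertainty trade-off.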
Our basic objects of interest are optimal, state-independent and linear relations. That is, for fixed weights λ, µ ∈ R_+, we are interested in the best constant c(λ, µ) for which the linear inequality

λ H(X|ρ) + µ H(Y|ρ) ≥ c(λ, µ)    (2)

holds on all states ρ. Such a relation has two common interpretations: On one hand, one can consider a guessing game, see also [33]. On the other, a relation like (2) can be interpreted geometrically, as in Fig. 2.
Linear uncertainty: a guessing game

For the moment, consider a player, called Eve, who plays against an opponent, called Alice. Depending on a coin throw, in each round Alice performs measurement X_A or Y_A on a local quantum state. Thereby, the weights λ and µ are the weights of the coin, and the l.h.s. of (2) describes the total uncertainty Eve has about Alice's outcomes in each round. To be more precise, up to a (λ, µ)-dependent constant, the l.h.s. of (2) equals the Shannon entropy of the combined distribution of the coin and the measurement outcomes. Eve's role in this game is to first choose a state ρ, observe the coin throw, wait for the measurements to be performed by Alice, and then ask binary questions to her opponent in order to get certainty on the outcomes. Thereby, the Shannon entropy sum on the l.h.s. of (2) (with logarithm to base 2) equals the expected number of questions Eve has to ask using an optimal strategy based on a fixed ρ. Hence, the value c(λ, µ) is the minimal expected number of questions, attainable by choosing an optimal ρ.
For a bipartite setting, Fig. 1, a second player, say Bob, joins the game. Here, Eve plays the above game against Alice and Bob simultaneously. Thereby, Alice and Bob share a common coin and therefore apply measurements with the same labels (X_AB or Y_AB). The obvious question that arises in this context is whether Eve gets an advantage in this simultaneous game by using an entangled state or not. Prop. 1 in the next section answers this question negatively, which is somewhat unexpected, as in principle the possible use of non-classical correlations enlarges Eve's set of strategies. For example, Eve could use a maximally entangled state, adjusted such that all measurements Alice and Bob perform are maximally correlated. In this case, the remaining uncertainty Eve has would only be the uncertainty about the outcomes of one of the parties. However, the marginals of a maximally entangled state are maximally mixed. Hence, Eve still has a serious amount of uncertainty (log d), which turns out not to be small enough to beat a strategy based on minimizing the uncertainty of the local measurements individually. For the case of product MUBs in prime-square dimension [34], it turns out that the minimal uncertainty realizable by a maximally entangled state actually equals the optimal bound.
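The comparison between the two strategies can be checked numerically for the smallest example; the sketch below (Python with numpy assumed, the Z/X measurement pair on qubits is a hypothetical example choice) shows that for this pair the entangled and the product strategies tie, consistent with the claim that entanglement brings no advantage:

```python
import numpy as np

def shannon(p):
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def probs(psi, basis):
    """Outcome distribution of a projective measurement (columns of `basis`)
    on the pure state vector psi."""
    return np.abs(basis.conj().T @ psi) ** 2

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: links the Z and X bases

# Global X measurement: Z (x) Z basis; global Y measurement: X (x) X basis.
Z_basis = np.eye(4)
X_basis = np.kron(H, H)

# Entangled strategy: the Bell state (|00> + |11>)/sqrt(2) is maximally
# correlated in both bases, but each marginal is maximally mixed (1 bit each).
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
bell_total = shannon(probs(bell, Z_basis)) + shannon(probs(bell, X_basis))

# Product strategy: each party prepares a Z eigenstate (0 bit for Z, 2 bit for X).
prod = np.kron([1, 0], [1, 0]).astype(float)
prod_total = shannon(probs(prod, Z_basis)) + shannon(probs(prod, X_basis))

assert abs(bell_total - 2.0) < 1e-9
assert abs(prod_total - 2.0) < 1e-9
```

Both strategies yield a total of 2 bit for equal weights, so Eve gains nothing from entanglement in this instance.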

Linear uncertainty: the positive convex hull
The second interpretation comes from considering the set of all attainable uncertainty pairs, the so-called uncertainty set

U = {(H(X|ρ), H(Y|ρ)) | ρ a state}.    (3)

In principle, this set contains all information on the uncertainty trade-off between two measurements. More precisely, the white space in the lower-left corner of a diagram like Fig. 2 indicates that both uncertainties cannot be small simultaneously. In this context, a state-independent uncertainty relation gives a quantitative description of this white space. Unfortunately, it turns out that computing U can be very hard, because the whole state space has to be considered. Here a linear inequality, like (2), gives an outer approximation of this set. More precisely, if c(λ, µ) is the optimal constant in (2), this inequality describes a halfspace bounded from the lower-left by a tangent on U. This tangent has the slope µ/λ. The points at which this tangent touches the boundary of U correspond to states which realize equality in (2). Those states are called minimal-uncertainty states. Given all those tangents, i.e. c(λ, µ) for all positive (λ, µ), we can intersect all corresponding halfspaces and obtain a convex set which we call the positive convex hull of U, denoted by Ū in the following. Geometrically, the positive convex hull can be constructed by taking the convex hull of U and adding to it all points that have bigger uncertainties than at least one point in U.
If U is convex, like in the example above, Ū contains the full information on the relevant parts of U. If U is not convex, Ū still gives a variety of state-independent uncertainty relations, but there is still room for improvements, see [34].
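The outer approximation by a linear relation can be visualized by sampling; a minimal sketch (Python with numpy assumed, the qubit Z/X pair is a hypothetical example choice) samples uncertainty pairs and checks that they all lie above the Maassen-Uffink tangent for equal weights:

```python
import numpy as np

def shannon(p):
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # links the Z and X bases

# Sample random pure qubit states and collect entropy pairs (H(Z), H(X)).
points = []
for _ in range(2000):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    pZ = np.abs(psi) ** 2
    pX = np.abs(H.conj().T @ psi) ** 2
    points.append((shannon(pZ), shannon(pX)))

# Maassen-Uffink: H(Z) + H(X) >= -2*log2(max_ij |U_ij|) = 1 bit for this pair,
# i.e. every sampled point lies above the tangent h_Z + h_X = 1.
c = -2 * np.log2(np.max(np.abs(H)))
assert all(hz + hx >= c - 1e-9 for hz, hx in points)
```

Plotting the sampled pairs would reproduce the white lower-left region of Fig. 2, bounded by the tangent with c(1, 1) = 1 bit.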

Additivity, implications and applications
We are now able to state our main result.

Proposition 1 (Additivity of linear uncertainty relations). Let c_A(λ, µ) and c_B(λ, µ) be state-independent lower bounds on the linear entropic uncertainty for local measurements X_A, X_B and Y_A, Y_B, with weights (λ, µ). This means that

λ H(X_A|ρ_A) + µ H(Y_A|ρ_A) ≥ c_A(λ, µ)  and  λ H(X_B|ρ_B) + µ H(Y_B|ρ_B) ≥ c_B(λ, µ)    (4)

hold on any states ρ_A from B(H_A) and ρ_B from B(H_B). Let X_AB and Y_AB be the joint global measurements that arise from locally performing X_A, X_B and Y_A, Y_B respectively. Then

λ H(X_AB|ρ_AB) + µ H(Y_AB|ρ_AB) ≥ c_A(λ, µ) + c_B(λ, µ)    (5)

holds for all states ρ_AB from B(H_A ⊗ H_B). Furthermore, if c_A and c_B are optimal bounds, then c_AB(λ, µ) = c_A(λ, µ) + c_B(λ, µ) is the optimal bound in (5), i.e. linear entropic uncertainty relations are additive.
The proof of this proposition is placed at the end of Sec. 5. We proceed in this section by collecting some remarks related to the above proposition:

Remark 1 (Product states). Assume that c_A(λ, µ) and c_B(λ, µ) are optimal constants, and φ_A and φ_B are states that saturate the corresponding uncertainty relations (4). Then the product state φ_AB := φ_A ⊗ φ_B saturates (5), due to the additivity of the Shannon entropy. However, this does not imply that all states that saturate (5) have to be product states. Examples of this, involving MUBs of product form, are provided in [34].
Remark 2 (Minkowski sums of uncertainty regions). Prop. 1 shows how the uncertainty set U_AB of the product measurements relates to the uncertainty sets U_A and U_B of the corresponding local measurements: For the case of an optimal c_AB(λ, µ), and fixed (λ, µ), equality in (5) can always be realized by product states (see Rem. 1). In an uncertainty diagram, like Fig. 3, those states correspond to points on the lower-left boundary of an uncertainty set, and, in general, they produce the finite extreme points of the positive convex hull of an uncertainty set. For product states we have the additivity of the Shannon entropy, which gives

(H(X_AB|ρ_A ⊗ ρ_B), H(Y_AB|ρ_A ⊗ ρ_B)) = (H(X_A|ρ_A), H(Y_A|ρ_A)) + (H(X_B|ρ_B), H(Y_B|ρ_B)).

This implies that we can get every extreme point of Ū_AB by taking the sum of two extreme points of Ū_A and Ū_B. Due to convexity, the same holds for all points in Ū_AB, and we can obtain this set as the Minkowski sum [35] Ū_AB = Ū_A + Ū_B.
For convex uncertainty regions arising from local measurements, this is depicted in Fig. 3. For this example, it is also true that U_AB itself is given as the Minkowski sum of the local uncertainty sets. However, we note that this behavior cannot be concluded from Prop. 1 alone.

Remark 3 (Relation to existing entanglement witnesses).
A well-known method for constructing nonlinear entanglement witnesses is based on computing the minimal value of a functional, like a sum of uncertainties [36][37][38], attainable on separable states. Given an unknown quantum state, the value of this functional is measured. If the measured value falls below the limit set by separable states, the presence of entanglement is witnessed. For uncertainty relations based on sums of general Schur-concave functionals this method was proposed in [4], including the Shannon entropy, i.e. the l.h.s. of (5), as central example.
Our result, Prop. 1, shows that this method will not work for Shannon entropies, because there is no entangled state that falls below the limit set by separable states. We note that there is no mathematical contradiction between Prop. 1 and [4]; we only show that the set of examples for the method proposed in [4] is empty.
For uncertainty relations in terms of Shannon, Tsallis and Renyi entropies, a similar procedure for constructing witnesses was proposed in [37,39]. Here, explicit examples of states that can be witnessed to be entangled were provided. Again, our Prop. 1 is not in contradiction with these works, because in [37,39] observables with a non-local degeneracy were considered.
The generalization of Prop. 1 to three measurements, say X_AB, Y_AB and Z_AB, fails in general. The following counterexample was provided by O. Gühne [40]: For both parties we consider local measurements deduced from the three Pauli operators on a qubit and take all weights equal to one. In this case, the minimal local uncertainty sum is attained on eigenstates of the Pauli operators. If such a state is measured, the entropy for one of the measurements is zero and maximal for the others. Hence, the local uncertainty sum is always at least 2 [bit], and therefore we have

H(X_AB|ρ) + H(Y_AB|ρ) + H(Z_AB|ρ) ≥ 4 [bit]

for all product states. In contrast to this, a Bell state, say Ψ−, gives an entropy of 1 [bit] for each of the above measurements. Hence, we have

H(X_AB|Ψ−) + H(Y_AB|Ψ−) + H(Z_AB|Ψ−) = 3 [bit] < 4 [bit].

Lower bounds from (p, q)-norms

The standard technique for analyzing a linear uncertainty relation is to connect it to the (p, q)-norm (see (12) below) of the basis transformation U. The majority of previous works in this field concentrates only on the case of equal weights λ = µ = 1, which is connected to the (p, q)-norm for the case 1/p + 1/q = 1. However, for the purpose of this work, i.e. for proving Prop. 1, we have to extend this connection to arbitrary (λ, µ). We will do this in Thm. 1 below.
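The three-measurement counterexample above can be verified numerically; a minimal sketch (Python with numpy assumed), comparing a product of Pauli eigenstates with the singlet state:

```python
import numpy as np

def shannon(p):
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def probs(psi, basis):
    return np.abs(basis.conj().T @ psi) ** 2

# Eigenbases of the three Pauli operators (columns are eigenvectors).
Z = np.eye(2)
X = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Y = np.array([[1, 1], [1j, -1j]]) / np.sqrt(2)
paulis = [Z, X, Y]

# A best product state: a Pauli eigenstate on each side, e.g. |0> (x) |0>.
prod = np.kron([1, 0], [1, 0]).astype(complex)
prod_sum = sum(shannon(probs(prod, np.kron(B, B))) for B in paulis)

# The singlet is (anti)correlated in every basis: 1 bit per global measurement.
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
bell_sum = sum(shannon(probs(singlet, np.kron(B, B))) for B in paulis)

assert prod_sum >= 4 - 1e-9      # product states: at least 2 bit per party
assert abs(bell_sum - 3) < 1e-9  # the Bell state gives only 3 bit in total
```

The entangled state beats every product state (3 bit versus 4 bit), which is exactly why the additivity statement cannot extend to three measurements.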
A historically important example for the use of the connection between (p, q)-norms and entropic uncertainties is provided by Bialynicki-Birula and Mycielski [41]. They used Beckner's result [42], who computed the (p, q)-norm of the Fourier transformation, to prove the corresponding uncertainty relation between position and momentum by Hirschman [43]. Maassen and Uffink [21] also took this route to prove their famous relation. Our result gives a direct generalization of this, meaning we will recover the Maassen and Uffink relation at the end of this section as a special case of (50). Before stating our result, we start this section by shortly reviewing the previously known way of connecting (p, q)-norms with linear uncertainty relations, see also [44,45] for further details: The (p, q)-norm, i.e. the l_p → l_q operator norm, of a basis transformation U is given by

‖U‖_{q,p} := sup_{‖φ‖_p ≤ 1} ‖Uφ‖_q.    (12)

In the limit (p, q) → (2, 2), the norm ‖U‖_{q,p} goes to 1. However, when p and q are fixed on the curve 1/p + 1/q = 1, the leading order of ‖U‖_{q,p} around (p, q) = (2, 2) recovers the uncertainty relation (2) in the case of equal weights, see [41,43]. More precisely, taking the negative logarithm of (12) gives

−log ‖U‖_{q,p} ≤ log ‖φ‖_p − log ‖Uφ‖_q.    (13)

Here, we can identify the squared moduli of the components of φ as probabilities of the X and Y measurement outcomes and substitute

(p^x_φ)_i = |φ_i|²,  (p^y_φ)_j = |(Uφ)_j|².

By this, (13) gives a linear relation in terms of the α-Renyi entropy [46], H_α(p) = (α/(1−α)) log ‖p‖_α. Here we get

(1/p − 1/2) H_{p/2}(X|φ) + (1/2 − 1/q) H_{q/2}(Y|φ) ≥ −log ‖U‖_{q,p}.    (16)

If we evaluate this on the curve 1/p + 1/q = 1, for p ≤ 2 ≤ q, we can use

1/p − 1/2 = 1/2 − 1/q,    (17)

which can be employed in (16) in order to get

(1/p − 1/2) (H_{p/2}(X|φ) + H_{q/2}(Y|φ)) ≥ −log ‖U‖_{q,p}.    (18)

In the limit (p, q) → (2, 2), the Renyi entropies on the l.h.s. of (18) converge to Shannon entropies. This gives the l.h.s. of the uncertainty relation (2) for λ = µ = 1. Hence, the functional dependence of ‖U‖_{q,p} on (p, q) in the limit (p, q) → (2, 2) gives the optimal bound c(1, 1) in (2). For the case of the L²(R) Fourier transformation, the norm ‖F‖_{q,p} = p^{1/p}/q^{1/q} was computed by Beckner [42], leading to c(1, 1) = log(πe). However, to the best of our knowledge, computing ‖U‖_{q,p} for general U and (p, q) is an outstanding problem, and presumably a very hard one [47,48]. For special choices of (p, q), however, the problem becomes treatable, see [49] for a list of those. The known cases include p = q = 2, p = ∞ or q = ∞, as well as p = 1 or q = 1.
The central idea of Maassen's and Uffink's work [21] is to show that the easy case (p = 1, q = ∞), where we have ‖U‖_{∞,1} = max_ij |U_ij|, gives a lower bound on c(1, 1). More precisely, they show that, for 1 ≤ p ≤ 2 and on the line 1/p + 1/q = 1, the r.h.s. of (18) approaches c(1, 1) from below. Note that this is far from obvious: for p ≤ 2 ≤ q we have H_{q/2}(Y|φ) ≤ H(Y|φ) and H_{p/2}(X|φ) ≥ H(X|φ), so one term approaches the limit from above and the other from below. Maassen and Uffink showed, using Riesz-Thorin interpolation [50,51], that the infimum over φ of the sum of both nevertheless approaches the limit from below.
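That the (1, ∞) case is "easy" can be made tangible numerically; the sketch below (Python with numpy assumed, the random unitary is a hypothetical example) uses the fact that the supremum over the 1-norm ball is attained at its extreme points, the standard basis vectors up to a phase:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random unitary via QR decomposition.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)

# ||U||_{infty,1} = sup_{||x||_1 <= 1} ||Ux||_infty is attained on the extreme
# points of the 1-norm ball (standard basis vectors up to a phase),
# so it equals max_ij |U_ij|.
norm_exact = np.max(np.abs(U))

# Random search over the 1-norm sphere never exceeds this value.
best = 0.0
for _ in range(5000):
    x = rng.normal(size=4) + 1j * rng.normal(size=4)
    x /= np.sum(np.abs(x))            # normalize to ||x||_1 = 1
    best = max(best, float(np.max(np.abs(U @ x))))
assert best <= norm_exact + 1e-9
```

The exact value −2 log(max_ij |U_ij|) is then the Maassen-Uffink lower bound on c(1, 1) for this U.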
The following theorem, Thm. 1, extends the above to the case of arbitrary (λ, µ). Notably, we then have to take (p, q) from curves on which 1/p + 1/q = 1 no longer holds; these are depicted in Fig. 5. In contrast to Maassen and Uffink, the central inequality we use is the ∞-norm version of the Golden-Thompson inequality (see [52][53][54] and the blog of T. Tao [55] for a proof and related discussions).
Theorem 1. Let c(λ, µ), with λ, µ ∈ R_+, be the optimal constant in the linear weighted entropic uncertainty relation

λ H(X|ρ) + µ H(Y|ρ) ≥ c(λ, µ).    (19)

Then:
(i) For any N ≥ 1 we have

c(λ, µ) ≥ −N log sup_{x ∈ B_r, y ∈ B_s} |x†Uy|,    (20)

where 1/r = 1/2 + λ/N and 1/s = 1/2 + µ/N, and B_r(Ω) denotes the unit r-norm ball on Ω.
(ii) For λ, µ ≤ N/2 we can write the bound in (20) as

c(λ, µ) ≥ −N log ‖U‖_{r′,s},    (21)

with 1/r + 1/r′ = 1.
(iii) For λ, µ ∈ R_+\{0}, the above bounds become tight in the limit N → ∞.

Proof. The starting point of this proof is a modification of a technique used by Frank and Lieb in [56] for reproving the Maassen and Uffink bound (see also the talk of Hans Maassen [44] for a finite-dimensional version). For probability distributions p, q ∈ B_1(R^d_+) we define the operators

A(p) := −Σ_i log(p_i) X_i  and  B(q) := −Σ_j log(q_j) Y_j,

such that we can rewrite the Shannon entropy as H(X|ρ) = tr(ρ A(p^x_ρ)). Based on this, we can further rewrite the Shannon entropy as an optimization over a function linear in ρ by using the positivity of the relative entropy, i.e. we have D(p‖q) = Σ_i p_i log(p_i) − Σ_i p_i log(q_i) ≥ 0, which implies −Σ_i p_i log(q_i) ≥ H(p). We obtain

H(X|ρ) = inf_p tr(ρ A(p)),

as well as the respective statement for H(Y|ρ) and B(q). If we employ this rewriting in c(λ, µ), we obtain the minimal entropy sum as a minimization over a parametrized eigenvalue problem, namely

c(λ, µ) = inf_{ρ,p,q} tr(ρ (λA(p) + µB(q))).    (29)

Now we turn the minimization over ρ into a maximization by applying the convex function e^{−x/N}, with N ≥ 1, to the weighted sum of A and B. This maps the smallest eigenvalue of λA + µB to the largest eigenvalue of e^{−(λA(p)+µB(q))/N}, and so on. In order to get back the correct value of c, we have to apply the inverse function, −N log(x), afterwards. We get

c(λ, µ) = −N log sup_{p,q,ρ} tr(ρ e^{−(λA(p)+µB(q))/N}).    (30)

Due to the positivity of the operator exponential (A and B are Hermitian), the optimization over ρ is equivalent to the Schatten-∞ norm. We have

c(λ, µ) = −N log sup_{p,q} ‖e^{−(λA(p)+µB(q))/N}‖_∞.    (31)

At this point we apply the Golden-Thompson inequality

‖e^{S+T}‖_p ≤ ‖e^S e^T‖_p    (32)

and expand the resulting exponentials, as well as the Schatten norm. Now we substitute p_i^{λ/N} =: χ_i and q_j^{µ/N} =: ξ_j, and expand |x⟩ = Σ x_i |e_i⟩ and |y⟩ = Σ y_j |f_j⟩, with component vectors x, y ∈ B_2(C^d). By this, the r.h.s. of (35) becomes a supremum over expressions of the form |x†Uy|. Here we can identify ⟨e_i|f_j⟩ = U_ij, i.e. the overlaps are the components of U when represented in the basis X. At this point, it is straightforward to check that the bound takes the form stated in (i).
Using the generalized Hölder inequality we can fuse some of the maximizations above as follows: On one hand, we have ‖v‖_r ≤ ‖χ‖_{N/λ} ‖x‖_2 ≤ 1 and ‖w‖_s ≤ ‖ξ‖_{N/µ} ‖y‖_2 ≤ 1, which means that the vectors v and w, with v_i = χ_i x_i and w_j = ξ_j y_j, are in B_r(C^d) and B_s(C^d) respectively. On the other hand, the converse is also true, i.e. every v and w from B_r(C^d) and B_s(C^d) can be realized by suitable choices of x, χ and y, ξ. For example, we can always obtain x and χ from v componentwise, and for this particular choice one checks that v is indeed recovered.
If we use the above in (36), we can replace sup_{x,χ} by sup_v and sup_{y,ξ} by sup_w, in order to get statement (i) with 1/r = 1/2 + λ/N and 1/s = 1/2 + µ/N. For showing statement (ii), we take r′ with 1 = 1/r + 1/r′. If λ ≤ N/2 holds, we have r′ ≥ 1 and we can use the tightness of the Hölder inequality to rewrite

sup_{v ∈ B_r} |v†Uw| = ‖Uw‖_{r′},

i.e. the maximization over B_r gives the dual norm of r. Substituting w by φ = w/‖w‖_s then gives statement (ii). The analogous rewriting applies with s′ given by 1 = 1/s + 1/s′, if µ ≤ N/2 holds.

For showing statement (iii), i.e.

c(λ, µ) = lim_{N→∞} −N log ‖U‖_{r′,s},

it suffices to expand all exponentials in (31) and (33) up to first order in 1/N. At this order, the Golden-Thompson inequality is an equality.
Remark 5 (The Maassen and Uffink bound). For the case of N = 2 and λ = µ = 1 in Thm. 1, we get r = s = 1 and r′ = s′ = ∞. Hence, we recover the Maassen-Uffink bound [21]. Explicitly, we have

c(1, 1) ≥ −2 log sup_{x,y ∈ B_1} |x†Uy| = −2 log max_ij |U_ij|.

Here we used that |x†Uy| is convex in x and y. Hence, the sup over x and y is attained at extreme points of B_1(C^d).
Up to a phase, those extreme points have the form (0, ..., 0, 1, 0, ..., 0), i.e. they have support only on a single site. So, choosing x and y with support on the i-th and j-th site gives |x†Uy| = |U_ij|.

Remark 6 (Renyi entropies). Alternatively, the bound obtained in Thm. 1 can be expressed in terms of Renyi entropies, by combining statements (i), (ii) and (iii) and proceeding as in (13).

Lemma 1 (Generalization of the Maassen and Uffink bound). Let u_i denote the i-th column of the basis transformation U that links the measurements X and Y. Then, for 1 ≥ λ ≥ µ ≥ 0 and all states ρ, we have

λ H(X|ρ) + µ H(Y|ρ) ≥ −2λ log max_i ‖u_i‖_t,  with t = 2/(1 − µ/λ).    (50)

Note that for the case 1 ≥ µ ≥ λ ≥ 0 the same holds if U is replaced by U†, i.e. by the transformation between Y and X.
Proof. The linear uncertainty bound c(λ, µ) is homogeneous in (λ, µ). Hence, we can consider c(1, µ/λ) = c(λ, µ)/λ. We apply Thm. 1, with N = 2, in order to get a lower bound. Here we have r = 1 and s = 2/(1 + µ/λ), so the maximization over B_r can be restricted to its extreme points, by the same argument as in Rem. 5. The remaining sup over B_s(C^d) in the rightmost expression of (53) gives the norm dual to s, given by t = 2/(1 − µ/λ). All in all, we have c(1, µ/λ) ≥ −2 log max_i ‖u_i‖_t, which proves the claim.

Remark 7 (More than two observables). As mentioned in Sec. 3, Prop. 1 does not generalize to three measurements. An explanation, or at least a hint, can be found by carefully following the proof of Thm. 1. In principle, the ansatz in (29) can be generalized to more than two measurements as well, and all following steps work out in a similar way, up to (33), where the Golden-Thompson inequality was used. It is well known that the direct generalization of this inequality to three operators fails to hold. Hence, the technique of our proof cannot be generalized to this case. We note that there is ongoing work exploring more sophisticated generalizations of this inequality [57][58][59][60]. However, we leave relating this to entropic uncertainty for future work.
Additivity of bounds from multiplicativity of (p, q)-norms

In this section we provide the proof of Prop. 1, i.e. the additivity of linear uncertainty relations. By using Thm. 1 from the previous section, we can formulate the linear uncertainty in terms of the logarithm of a (p, q)-norm. At this point, it is straightforward to check that the additivity of the linear uncertainty is equivalent to the multiplicativity of the (p, q)-norm. In fact, the following theorem, Thm. 2, provides that, for p and q in the correct range, the (p, q)-norm of a transformation which admits a product form is multiplicative.

Theorem 2 (Global bounds from local bounds). Let X_AB and Y_AB be tensor-product bases of a Hilbert space H_A ⊗ H_B, linked by a product unitary U_AB = U_A ⊗ U_B. Assume that

‖U_A φ_A‖_q ≤ η_A ‖φ_A‖_p  and  ‖U_B φ_B‖_q ≤ η_B ‖φ_B‖_p    (55)

hold for all φ_A and φ_B. Then

‖U_AB φ‖_q ≤ η_AB ‖φ‖_p    (56)

holds for all φ, with η_A η_B = η_AB as optimal constant.
Proof. We note that a related result, for pointwise positive maps between Lebesgue spaces, was discovered by Grey and Sinnamon [61].
The basic object of this proof is the p⊗q-norm, which will be defined immediately. The main work of the proof is devoted to showing some properties of this norm, from which the statement directly follows.
Let |φ⟩ ∈ H with components φ = {φ_ij}, sorted within the product basis X_AB by φ_ij = ⟨φ|e^A_i ⊗ e^B_j⟩, and consider the norm

‖φ‖_{p⊗q} := ( Σ_i ( Σ_j |φ_ij|^q )^{p/q} )^{1/p}.

This norm has the following properties, (i)-(iii): for q = p it coincides with the usual p-norm; it behaves well under operators of the form I ⊗ V, together with the flip operation F, defined by Fφ_1 ⊗ φ_2 = φ_2 ⊗ φ_1; and for p ≤ q the two orders of summation can be compared.
We will show the validity of (i)-(iii) in a moment. First notice that, if (i)-(iii) are valid, we can easily conclude the norm estimate (61) for U_AB. Furthermore, consider states that realize equality in (55), i.e. states that belong to optimal η_A and η_B. The tensor product of two such states realizes, due to the multiplicativity of the p-norm, equality in (56) as well. Hence, (61) proves the main statement of this theorem.
Property (i) follows directly by plugging p = q into the definition of the p⊗q-norm; there is nothing more to prove. Property (ii) follows by expressing I ⊗ V as δ_ik V_jl in the X basis and computing directly. As a last step, (iii) is a direct consequence of Minkowski's inequality, i.e. the l_p-triangle inequality (see [62]), valid for p ≥ 1. So, if 1 ≤ q/p, we can use this inequality to interchange the order of the p- and q-summations, which shows the validity of (iii).
Lemma 2 (Multiplicativity of the (p, q)-norm). For 1 ≤ p ≤ q, the (p, q)-norm of a product unitary factorizes:

‖U_A ⊗ U_B‖_{q,p} = ‖U_A‖_{q,p} ‖U_B‖_{q,p}.

Proof. This is a direct consequence of Thm. 2. Using the definition of the (p, q)-norm, we can identify η_A = ‖U_A‖_{q,p}, η_B = ‖U_B‖_{q,p} and η_AB = ‖U_AB‖_{q,p}, if we consider η_A, η_B and η_AB to be optimal bounds.
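The mechanism behind Lem. 2 can be seen concretely on product vectors, where both norms factorize; a minimal numerical sketch (Python with numpy assumed; the dimensions and the exponents p, q are hypothetical example choices):

```python
import numpy as np

rng = np.random.default_rng(2)

def pnorm(v, p):
    return float(np.sum(np.abs(v) ** p) ** (1 / p))

# Two random local unitaries and the product unitary U_AB = U_A (x) U_B.
U_A, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
U_B, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
U_AB = np.kron(U_A, U_B)

p, q = 1.5, 4.0   # an arbitrary pair with 1 <= p <= q

# On product vectors, both the input p-norm and the output q-norm factorize,
# so the ratio ||U_AB z||_q / ||z||_p is the product of the local ratios.
x = rng.normal(size=2) + 1j * rng.normal(size=2)
y = rng.normal(size=3) + 1j * rng.normal(size=3)
z = np.kron(x, y)
lhs = pnorm(U_AB @ z, q) / pnorm(z, p)
rhs = (pnorm(U_A @ x, q) / pnorm(x, p)) * (pnorm(U_B @ y, q) / pnorm(y, p))
assert abs(lhs - rhs) < 1e-9
```

This shows ‖U_AB‖_{q,p} ≥ ‖U_A‖_{q,p} ‖U_B‖_{q,p} (take optimizing product inputs); the content of Lem. 2 is that no entangled input can exceed this product.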

Proof of Prop. 1
Proof. For proving Prop. 1 it suffices to prove the additivity of the optimal case, i.e. we consider c_A, c_B and c_AB to already be the constants of the best linear uncertainty bounds. If additivity holds for these, we can directly conclude that the sum of lower bounds on c_A and c_B gives a valid lower bound on c_AB as well.
Given measurements X_AB and Y_AB, specified by a product unitary U_AB = U_A ⊗ U_B, we use Thm. 1 to rewrite c_A, c_B and c_AB as limits of logarithms of (p, q)-norms. We assume λ and µ to be finite and N to be sufficiently large, such that we can use Thm. 1 part (ii) (which requires λ, µ ≤ N/2), and get

c_AB(λ, µ) = lim_{N→∞} −N log ‖U_A ⊗ U_B‖_{r′,s}
           = lim_{N→∞} ( −N log ‖U_A‖_{r′,s} − N log ‖U_B‖_{r′,s} )
           = c_A(λ, µ) + c_B(λ, µ),

where the second equality uses Lem. 2.

Outlook and conclusion
In this work we showed that linear uncertainty relations between product-type measurements in multipartitions are additive. Prop. 1 gives some clear structure to the problem of computing entropic uncertainty bounds. Especially in the context of quantum coding in cryptography, this result might turn out to be useful, because it is now possible to compute uncertainty bounds in the limit of infinite system sizes for block-coding schemes [6,63,64]. The generalization of the Maassen and Uffink bound to arbitrary weights (λ, µ), provided in Lem. 1, can also be directly employed in a multipartite setting in order to obtain valid state-independent uncertainty relations for this case. However, while this bound is easily computable, it is only a lower bound and presumably only tight in highly symmetric cases (see [34] for a characterization of tightness for the usual Maassen and Uffink bound). The more general problem of providing a 'good' method for computing the optimal bound c_AB remains open. We note that there are only few and special cases, including angular momentum and mutually unbiased bases, where this optimal bound is actually known. The cases where the optimal bound can be computed analytically are even fewer [34,65,66], and the known numerical methods only work for problems of very small dimension [67].
Here, the proof of Thm. 1 might give a new ansatz for better numerics. Explicitly, the minimization in (29) and the maximization in (42) suggest applying the method of alternating minimization.
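A minimal sketch of such an alternating scheme (Python with numpy assumed; the qubit Z/X pair and all variable names are hypothetical illustration choices, not the authors' implementation): given (p, q), the best ρ is the ground state of λA(p) + µB(q); given ρ, the best (p, q) are its own outcome distributions. Each step can only lower the entropy sum.

```python
import numpy as np

def shannon_nats(p):
    p = np.clip(p, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

# Bases of the two qubit measurements: Z (computational) and X (Hadamard).
Z = np.eye(2)
X = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def dists(rho):
    pz = np.real(np.array([b.conj() @ rho @ b for b in Z.T]))
    px = np.real(np.array([b.conj() @ rho @ b for b in X.T]))
    return pz, px

def objective(rho, lam=1.0, mu=1.0):
    pz, px = dists(rho)
    return lam * shannon_nats(pz) + mu * shannon_nats(px)

rng = np.random.default_rng(3)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

values = [objective(rho)]
for _ in range(50):
    p, q = dists(rho)
    p, q = np.clip(p, 1e-12, 1), np.clip(q, 1e-12, 1)
    # A(p) = -sum_i log(p_i)|z_i><z_i| and B(q) likewise in the X basis.
    A = Z @ np.diag(-np.log(p)) @ Z.conj().T
    B = X @ np.diag(-np.log(q)) @ X.conj().T
    w, V = np.linalg.eigh(A + B)
    ground = V[:, 0]                  # eigenvector of the smallest eigenvalue
    rho = np.outer(ground, ground.conj())
    values.append(objective(rho))

# The iteration is monotone and stays above the Maassen-Uffink value log(2).
assert all(values[i + 1] <= values[i] + 1e-9 for i in range(len(values) - 1))
assert values[-1] >= np.log(2) - 1e-9
```

The monotonicity is guaranteed by construction: each half-step minimizes the same joint functional in one of its arguments, so the sequence of objective values can only decrease towards a (local) minimum.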
In Sec. 4 we presented an extension of the known connections between the logarithm of (p, q)-norms and linear uncertainty relations in terms of the Shannon entropy. However, the technique used seems to apply only to the special case we considered. An adaptation of this technique to sets of more than two local measurements is not possible without major modifications. As mentioned in Rem. 7, this would require incorporating generalizations of the Golden-Thompson inequality, which seems to be a fruitful topic for future work. The technique from the proof of Thm. 1 might also fail if general POVMs instead of projective measurements are considered. Moreover, it is not clear if Prop. 1 holds in this case. A third generalization, which does not hold, arises by considering arbitrary Schur-concave functions. Here, the natural question to ask is whether at least some entanglement witness can be constructed. A very recent result [68] shows that such witnesses can indeed be constructed from Tsallis entropies.

Figure 1 :
Figure1: Basic setting of product measurements on a bipartition: pairs of measurements XA, XB or YA, YB are applied to a joint state ρAB at the respective sides of a bipartition.One bit of information is transmitted for communicating whether the X or the Y measurements are performed.The weights (λ, µ) denote the probabilities corresponding to this choice.

Figure 2 :
Figure 2: Uncertainty set for measurements performed on a qubit.Any linear uncertainty relation, (2), with weights (λ, µ), gives the description of a tangent to the uncertainty set.All attainable pairs of entropies lie above this tangent.

Figure 3 :
Figure 3: Uncertainty sets of local measurements can be combined by the Minkowski sum: Uncertainty sets (green and yellow) for two pairs of local measurements on qubits and the uncertainty set of the corresponding global measurements (blue).

Figure 4 :
Figure 4: Multipartite setting: Additivity of entropic uncertainty relations also holds if a pair of global product measurements for many local parties is considered.

Figure 5 :
Figure 5: Evaluating ‖U‖_{r′,s} on the depicted curves gives a lower bound for c(λ, µ) (see Thm. 1). Because c(λ, µ) is a linear bound, it is 1-homogeneous in (λ, µ). Hence, all information on the optimal bound c(λ, µ) can be recovered by knowing it for any fixed ratio λ/µ. The thick red curve corresponds to the case 1/r′ + 1/s = 1, which gives bounds on c(1, 1) from below. For s = 1 the norm ‖U‖_{r′,s=1} can be computed analytically; this gives a generalization of the Maassen and Uffink bound (see Lem. 1).