Two-qubit causal structures and the geometry of positive qubit-maps

We study quantum causal inference in a setup proposed by Ried et al (2015 Nat. Phys. 11 414) in which a common cause scenario can be mixed with a cause–effect scenario, and for which it was found that quantum mechanics can bring an advantage in distinguishing the two scenarios: whereas in classical statistics, interventions such as randomized trials are needed, a quantum observational scheme can be enough to detect the causal structure if the common cause results from a maximally entangled state. We analyze this setup in terms of the geometry of unital positive but not completely positive qubit-maps, arising from the mixture of qubit channels and steering maps. We find the range of mixing parameters that can generate given correlations, and prove a quantum advantage in a more general setup, allowing arbitrary unital channels and initial states with fully mixed reduced states. This is achieved by establishing new bounds on signed singular values of sums of matrices. Based on the geometry, we quantify and identify the origin of the quantum advantage depending on the observed correlations, and discuss how additional constraints can lead to a unique solution of the problem.


Introduction
Imagine a scenario where two experimenters, Alice and Bob, sit in two distinct laboratories. At one point Alice opens the door of her laboratory, obtains a coin, checks whether it shows heads or tails, and puts it back outside the laboratory. Some time later Bob also obtains a coin and likewise checks whether it shows heads or tails. This experiment is repeated many times (ideally: infinitely many times), and afterwards they meet and analyze their joint outcomes. If their joint probability distribution entails correlations, there must be some underlying causal mechanism which causally connects their coins [1]. This could be an unobserved confounder (acting as a common cause), in which case they actually measured two distinct coins influenced by the confounder. Or Alice's coin could have been propagated by some mechanism to Bob's laboratory, so that they actually measured the same coin, with the consequence that manipulations of the coin by Alice can directly influence Bob's result (cause-effect scenario). The task of Alice and Bob is to determine the underlying causal structure, i.e. to distinguish the two scenarios. This would be rather easy if Alice could set her coin to a value of her choice after the observation and then check whether this influences the joint probability (so-called 'interventionist scheme'). In the present scenario, however, we assume that this is not allowed (so-called 'observational scheme'). All that Alice and Bob have are therefore the given correlations, and from those alone they cannot, in general, solve this task without additional assumptions. Ried et al [2] showed that in a similar quantum scenario involving qubits the above task can actually be accomplished in certain cases even in an observational scheme (see below for a discussion of how the idea of an observational scheme can be generalized to quantum mechanics).
In the present work we consider the same setup as in [2], and allow arbitrary convex combinations of the two scenarios: the common cause scenario is realized with probability p, the cause-effect scenario with probability 1−p. Our main results are statements about the ranges of the parameter p for which observed correlations can be explained by either one of the scenarios, or both. For this, we cast the problem in the language of affine representations of unital positive qubit maps [3], in which all the information is encoded in a 3×3 real matrix, as is standard in quantum information theory for completely positive (CP) unital qubit maps [4].
The paper is structured as follows: in section 2 we introduce causal models for classical random variables and for quantum systems. Therein we define what we consider a quantum observational scheme. Section 3 introduces the mathematical framework of ellipsoidal representations of qubit quantum channels and qubit steering maps. In section 4 we define our problem mathematically and prove the main results, which we then discuss in the concluding section 5.
2. Causal inference: classical versus quantum

2.1. Classical causal inference

At the heart of a classical causal model is a set of random variables X_1, X_2, ..., X_N. The observation of a specific value of a variable, X_i = x_i, is associated with an event. Correlations between events hint at some kind of causal mechanism that links the events [1]. Such a mechanism can be a deterministic law, for example x_i = f(x_j), or a probabilistic process described by conditional probabilities P(x_i | x_j), i.e. the probability to find X_i = x_i given that X_j = x_j was observed. The causal mechanism need not be a direct causal influence from one observed event on the other, but may be due to common causes that lead with a certain probability to both events, or a mixture of both scenarios. Hence, by merely analyzing the correlations P(x_1, x_2, ..., x_n), i.e. the joint probability distribution of all events, one cannot in general, without prior knowledge of the data-generating process, uniquely determine the causal mechanism that leads to the observed correlations (purely observational scheme). To remedy this, an intervention is often necessary: the value of a variable X_i whose causal influence one wants to investigate is set by an experimentalist to different values, in order to see whether this changes the statistics of the remaining events (interventionist scheme). One strategy for reducing the influence of other, unknown factors is to randomize the samples. This is a typical approach in clinical studies, where one group of randomly selected probands receives a treatment whose efficacy one wants to investigate, and a randomly selected control group receives a placebo. If the percentage of cured people in the first group is significantly larger than in the second group, one can believe in a positive causal effect of the treatment.
The probabilities obtained in this interventionist scheme are so-called 'do-probabilities' (or 'causal conditional probabilities') [5]: P(x_i | do(x_j)) is the probability to find X_i = x_i if an experimentalist intervened and set the value of X_j to x_j. This differs from P(x_i | x_j), as a possible causal influence from some other unknown event on X_j = x_j is cut, i.e. one deliberately modifies the underlying causal structure in order to better understand a part of it. If X_j = x_j was the only direct cause of X_i = x_i, then P(x_i | do(x_j)) = P(x_i | x_j). If instead the event X_i = x_i was a cause of X_j = x_j, then intervening on X_j cannot change X_i: P(x_i | do(x_j)) = P(x_i). If the correlation between X_i = x_i and X_j = x_j is purely due to a common cause, then no intervention on X_i or X_j will change the probability to find a given value of the other: P(x_i | do(x_j)) = P(x_i) for all x_i, and likewise with i and j interchanged. Observing these do-probabilities one can hence draw conclusions about the causal influences behind the correlations observed in the occurrence of X_i = x_i and X_j = x_j.
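The difference between conditioning and intervening can be illustrated with a toy simulation. The following sketch (a hypothetical two-coin model with a hidden common cause, not taken from the text) shows that P(Y | X) and P(Y | do(X)) differ when the correlation is purely due to a common cause:

```python
import random

# Toy model (hypothetical, for illustration): a hidden fair coin Z is a common
# cause of both X and Y. Observationally X and Y are perfectly correlated, but
# an intervention do(X = x) overrides the mechanism producing X and leaves Y
# untouched, revealing that X is not a cause of Y.

def sample(do_x=None):
    z = random.random() < 0.5           # hidden common cause
    x = z if do_x is None else do_x     # intervention cuts X from its parent Z
    y = z                               # Y depends only on Z
    return x, y

def prob_y_given(x_value, do=False, n=100_000):
    """Estimate P(Y=1 | X=x_value) or P(Y=1 | do(X=x_value))."""
    hits = total = 0
    for _ in range(n):
        x, y = sample(do_x=x_value if do else None)
        if do or x == x_value:
            total += 1
            hits += y
    return hits / total

random.seed(1)
print(prob_y_given(True, do=False))   # observational: P(Y=1 | X=1) = 1.0
print(prob_y_given(True, do=True))    # interventional: P(Y=1 | do(X=1)) near 0.5
```

In the purely observational scheme the two coins look perfectly correlated; the intervention exposes that setting X has no effect on Y.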
In practice, direct causation in one direction is often excluded by time-ordering and need not be investigated. For example, when doubting that one can conclude that smoking causes lung cancer from the observed correlations between these two events, it does not make sense to claim that having lung cancer causes smoking, as smoking usually precedes the development of lung cancer. But even dividing a large number of people randomly into two groups and forcing one of them to smoke and the other not to smoke, in order to find out whether there is a common cause for both, would be ethically unacceptable. The needed do-probabilities can therefore not always be obtained by experiment. Interestingly, the causal-probability calculus allows one in certain cases, depending notably on the graph structure, to calculate do-probabilities from observed correlations without having to perform the intervention. Conversely, apart from predicting the conditional probabilities for a random variable X_i given the observation of X_j = x_j, denoted P(x_i | x_j), a causal model can also predict the do-probabilities, i.e. the distribution of X_i if one were to intervene on the variable X_j and set its value to x_j. This is crucial for deriving informed recommendations for actions targeted at modifying certain probabilities, e.g. recommending not to smoke in order to reduce the risk of cancer.
The structure of a causal model can be depicted by a graph. Each random variable is represented by a vertex of the graph. Causal connections are represented by directed arrows and imply that signaling along the direction of the arrow is possible. In a classical causal model it is assumed that events happen at specific points in space and time; bidirectional signaling is therefore not possible, as it would imply signaling backward in time. Hence the graph cannot contain cycles and is a directed acyclic graph (DAG) [5], see figure 1. The set of parents PA_j of the random variable X_j is defined as the set of all variables that have an immediate arrow pointing towards X_j, and pa_j denotes a possible value of PA_j. The causal model is then defined through its graph with random variables X_i at its vertices and the weights P(x_j | pa_j) of each edge, i.e. the probabilities that X_j = x_j happens under the condition that PA_j = pa_j occurred. The model generates the entire correlation function according to

P(x_1, ..., x_n) = ∏_j P(x_j | pa_j),    (1)

which is referred to as the causal Markov condition [5]. When all P(x_1, ..., x_n) are given, then all conditional probabilities follow, hence all P(x_j | pa_j) that appear in a given graph; but in general not all correlations nor all P(x_j | pa_j) are known (see below). The causal inference problem consists in finding a graph structure that allows one to satisfy equation (1) for given data P(x_1, ..., x_n) and all known P(x_j | pa_j), where the unknown P(x_j | pa_j) can be considered fit parameters in case of incomplete data. With access to the full joint probability distribution, causal inference only needs to determine the graph. In practice, however, one often has only incomplete data: as long as a common cause has not been identified, one will not have data involving correlations of the corresponding variable.
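As a minimal numerical sketch of the causal Markov condition, the following builds the joint distribution of the three-variable DAG X0 → X1, X0 → X2 by multiplying edge weights (the conditional-probability tables are made up for illustration):

```python
from itertools import product

# Causal Markov condition for the DAG X0 -> X1, X0 -> X2, a hypothetical
# common cause X0 with illustrative conditional-probability tables:
# P(x0, x1, x2) = P(x0) * P(x1 | x0) * P(x2 | x0).

P_x0 = {0: 0.6, 1: 0.4}
P_x1 = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # P(x1 | x0)
P_x2 = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.1, 1: 0.9}}   # P(x2 | x0)

joint = {
    (x0, x1, x2): P_x0[x0] * P_x1[x0][x1] * P_x2[x0][x2]
    for x0, x1, x2 in product((0, 1), repeat=3)
}

# The factorization is normalized; X1 and X2 are conditionally independent
# given X0, yet correlated once X0 is marginalized out.
assert abs(sum(joint.values()) - 1.0) < 1e-12
```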
For example, one may have strong correlations between getting lung cancer (random variable X_2 ∈ {0, 1}) and smoking (random variable X_1 ∈ {0, 1}), but if there is an unknown common cause X_0 for both, one typically has no information about P(x_0, x_1, x_2): one will only start collecting data about correlations between the presence of a certain gene, say, and the habit of smoking or developing lung cancer once one suspects that gene to be a cause of at least one of these. In this case P(x_1 | x_0) and P(x_2 | x_0) are fit parameters to the model as well. The possibility of extending a causal model by including unknown random variables is one reason why in general there is no unique solution to the causal inference problem based on correlations alone. Interventions on X_i, on the other hand, make it possible to cut X_i from its parents and hence eliminate unknown causes one by one for all random variables. Once a causal model is known, one can calculate the distributions P(x_1, ..., x_n | i_1, ..., i_n) for all possible combinations of interventions and observations, where i_j is the value of the intervention variable I_j for the event X_j, i_j = idle or i_j = do(x_j). An intervention on X_j deterministically sets its value, independently of the observed values of its causal parents. If I_j = idle, then the value of X_j depends only on its causal parents PA_j, i.e. it is distributed according to P(x_j | pa_j).
The field of causal discovery or causal inference aims at providing methods to determine the causal model, that is the DAG and the joint probability distributions entering (1) for a given scenario. Different combinations of the I_j correspond to different strategies. If all the interventions are set to idle, and hence all the outcomes are determined by the causal parents, one has the purely observational approach. In multivariate scenarios, where more than two random variables are involved, the observation of the joint probability distribution alone can still contain hints of the causal structure based on conditional independencies [5]. Nevertheless, in the bivariate scenario, i.e. when only two random variables are involved, classical correlations obtained by observations do not contain any causal information. Only if assumptions, for example on the noise distribution, are made a priori can information on the causal model be obtained from observational data [6].

Quantum causal inference
The notion of causal models does not easily translate to quantum mechanics. The main problem is that in quantum systems not all observables can have predefined values independent of observation. Similar to an operational formulation of quantum mechanics [7], the process matrix formalism was introduced [8] and a quantum version of an event defined. In [9] this is reviewed for the purpose of causal models. In place of the random variables in the classical case there are local laboratories. Within a process, each laboratory obtains a quantum system as input and produces a quantum system as output. A quantum event corresponds to information which is obtained within a laboratory and is associated with a CP map from the input Hilbert space to the output Hilbert space of the laboratory. The possible events depend on the choice of instrument. An instrument is a set of CP maps that sum to a CP trace preserving (CPTP) map. For example, an instrument can be a projective measurement in a specific basis, with the events being the possible outcomes. The possibility to choose different instruments mirrors the possibility of interventions in the classical case [9, 3.3]. The whole information about mechanisms, which are represented as CPTP maps, and the causal connections is contained in a so-called process matrix. Beyond its analogy to a classical causal model, the process framework goes further than classical causal structures, as it does not assume a fixed causal structure [8]. This has recently stirred a lot of research [10–13]. For a more detailed introduction we refer the reader especially to [9], where a comprehensive description is provided. The analog of causal inference in the classical case is the reconstruction of a process matrix.
This can be done using informationally complete sets of instruments, described theoretically in [9, 4.1] and implemented experimentally in [2]. Defining a quantum observational scheme in analogy to the classical one is not straightforward. In general a quantum measurement destroys much of the state's character and hence can almost never be considered a passive observation. For example, if the system was initially in a pure state |ψ⟩ but one measures in a basis such that |ψ⟩ is not an eigenstate of the projectors onto the basis states, then the measurement truly changes the state of the system and the original state is not reproduced in the statistical average. In [9, sect. 5] an observational scheme is simply defined as projective measurements in a fixed basis, in particular without assumptions about the incoming state of a laboratory and thus without assumptions about the underlying process. Another possibility to define an observational scheme is based on the idea that in the classical world observations reveal pre-existing properties of physical systems, and that quantum observations should reproduce this. As a consequence, if one mixes the post-measurement states with the probabilities of the corresponding measurement outcomes, one should obtain the same state as before the measurement. That is ensured if and only if only operations that do not destroy the quantum character of the state are allowed, as coherences cannot be restored by averaging. Ried et al [2] formalized this notion as 'informational symmetry', but considered only preservation of local states. For the special case of locally completely mixed states, they showed that projective measurements in arbitrary bases possess informational symmetry. This definition of a quantum observational scheme is problematic for two reasons: firstly, the allowed class of instruments depends on the incoming state, i.e. one can only apply projective measurements that are diagonal in the same basis as the state itself.
This is at variance with the typical motivation for an observational scheme, namely that the instruments are restricted a priori due to practical reasons. Moreover, having measurements depend on the state requires prior knowledge about the state of the system, but finding out the state of the system is part of the causal inference (e.g.: are the correlations based on a state shared by Alice and Bob?). Hence, in general one cannot assume sufficient knowledge of the state for restricting the measurements such that they do not destroy coherences.
Secondly, the definition is unnaturally restrictive, as it only considers the local state and not the global state. For example, if Alice and Bob share a singlet state |ψ⟩ = (|01⟩ − |10⟩)/√2, then both local states are completely mixed. Hence, according to informational symmetry, they are allowed to perform projective measurements in arbitrary bases. If Alice and Bob now both measure in the computational basis, they will each obtain both outcomes with probability 1/2 and their local states will remain invariant in the statistical average. However, the global state does not remain intact: the post-measurement state is (|01⟩⟨01| + |10⟩⟨10|)/2, which is not even entangled anymore. But even defining a 'global informational symmetry', i.e. requiring the global state to remain invariant, does not settle the issue in a convenient way, as this would not allow any local measurements of Alice and Bob.
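The loss of global coherence described above can be checked directly; a short numpy sketch:

```python
import numpy as np

# Singlet state; both parties measure in the computational basis, and the
# post-measurement states are mixed with the outcome probabilities.

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)   # singlet
rho = np.outer(psi, psi)

# Average state: sum_k P_k rho P_k over the four product projectors |ab><ab|.
projectors = [np.outer(v, v)
              for v in (np.kron(a, b) for a in (ket0, ket1) for b in (ket0, ket1))]
rho_post = sum(P @ rho @ P for P in projectors)

# rho_post = (|01><01| + |10><10|)/2: the local states stay completely mixed,
# but all coherences, and with them the entanglement, are gone.
print(np.round(rho_post, 3))
```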
Here we propose three different schemes, ranging from full quantum interventions, through a quantum observational scheme with the possibility of an active choice of measurements, to a passive quantum observational scheme in a fixed basis that comes closest to the classical observational scheme, see table 1.
The definitions are based on restricting the allowed set of instruments. An instrument is to be understood in the process matrix context. In all three schemes the set of allowed instruments is independent of the actual underlying processes, which is a reasonable assumption, since the motivation for causal inference comes from the fact that states or processes are not known in the first place.
Quantum interventionist scheme: Arbitrary instruments can be applied in local laboratories. These include for example deterministic operations such as state preparations or simply projective measurements. An appropriate choice of the instruments enables one to detect causal structure in arbitrary scenarios, i.e.to reconstruct the process matrix [9]. This scheme resembles most closely an interventionist scheme in a classical scenario but offers additional quantum-mechanical possibilities of intervention.
Active quantum observational scheme: Only projective measurements in arbitrary orthogonal bases are allowed, but no post-processing of the state after the measurement. The latter requirement translates the idea of not intervening into the quantum realm, as it is not possible to deterministically change the state by the experimenter's choice. Depending on the state and the instrument, the state may change during the measurement, hence the scheme is invasive; but the difference to the classical observational scheme arises solely from the possible destruction of quantum coherences. This is a quantum effect without classical correspondence and hence opens up a new possibility of defining an observational scheme that has no classical analog. Repeated application of the same measurement within a single run always gives the same output. Furthermore, we allow projective measurements in different bases in different runs of the experiment. This freedom allows one to completely characterize the incoming state. This scheme allows for signaling, i.e. there exist processes for which Alice's choice of instrument changes the statistics that Bob observes. As an example, consider the process where Alice always obtains a qubit in the state |1⟩. She applies her instrument to it, and then the outcome is propagated to Bob by the identity channel. Bob measures in the basis in which |1⟩ is an eigenstate. If Alice measured in the same basis as Bob, then both of them deterministically obtain 1 as a result. If Alice instead measures in the basis |±⟩ = (|0⟩ ± |1⟩)/√2, then Bob obtains 1 only with probability 1/2. This is considered signaling according to the definition in [9]. Clearly, signaling presents a direct quantum advantage for causal inference compared to a classical observational scheme, and motivates the attribute 'active' of the scheme.
In the present work we focus on this scheme, but exclude such a direct quantum advantage by considering exclusively unital channels and a completely mixed incoming state for Alice, as was done also in [2]. It is then impossible for Alice to send a signal to Bob if her instruments are restricted to quantum observations, even if she is allowed to actively set her measurement basis. One might wonder whether the quantum observational scheme can be generalized to POVM measurements. However, these do not fit into the framework of instruments that transmit an input state to an output state, as POVM measurements do not specify the post-measurement state.
Passive quantum observational scheme: For the whole setup a fixed basis is selected. Only projective measurements with respect to this basis are permitted, and it is forbidden to change the basis in different runs of the experiment. This is also what is used in [9] to obtain classical causal models as a limit of quantum causal models. Since the basis is fixed independently of the underlying process, the measurement can still be invasive in the sense that it can destroy coherences, and hence it is still not a pure observational scheme in the classical sense. Nevertheless, Alice cannot signal to Bob here as she has no possibility of actively encoding information in the quantum state, regardless of the nature of the state, which motivates the name 'passive quantum observational scheme'. As without any change of basis it is impossible to exploit stronger-than-classical quantum correlations, this scheme comes closest to a classical observational scheme. And due to the restriction to observing at most classical correlations, it is not possible to infer anything more about the causal structure than classically possible.

Affine representation of quantum channels and steering maps
In this section we introduce the tools of quantum information theory that we need to analyze the problem of causal inference in section 4.

Bloch sphere representation of qubits
A qubit is a quantum system with a two-dimensional Hilbert space with basis states denoted as |0⟩ and |1⟩. An arbitrary state of the qubit is described by a density operator ρ, a positive linear operator with unit trace, ρ ≥ 0, tr[ρ] = 1. Every single-qubit state can be represented geometrically by its Bloch vector r via

ρ = (𝟙 + r·σ)/2,

where σ = (σ_1, σ_2, σ_3)^T denotes the vector of Pauli matrices.

Table 1. Quantum schemes for causal inference: an overview of the instruments allowed within the different quantum schemes defined in this section (columns: arbitrary instruments, arbitrary projections, fixed-basis projections, signaling, causal inference; ✓ indicates allowed/possible, ✗ indicates not allowed/impossible). (a) In the active quantum observational scheme signaling is possible in principle; however, in the scenarios considered in this work signaling is not possible, and still causal inference can be successful. (b) The potential of causal inference in the active quantum observational scheme is discussed in the main part of this paper. (c) In the passive quantum observational scheme no more causal inference than classical is possible.
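The inverse relation r_i = tr(ρ σ_i) gives the Bloch vector of any qubit state; a minimal numpy sketch:

```python
import numpy as np

# Bloch vector of a qubit state: inverting rho = (1 + r.sigma)/2
# gives r_i = tr(rho sigma_i).

SIGMA = (np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex))

def bloch_vector(rho):
    return np.array([np.trace(rho @ s).real for s in SIGMA])

rho_plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # |+><+|
print(bloch_vector(rho_plus))    # the +x direction: [1. 0. 0.]
```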

Channels
A quantum channel ℰ is a CPTP map: it maps a density operator ρ in the space of linear operators L(H) on the Hilbert space H to a density operator ℰ(ρ) ∈ L(H′), i.e. ℰ(ρ) ≥ 0 and tr[ℰ(ρ)] = tr[ρ] = 1.
This formalism describes any physical dynamics of a quantum system. Every quantum channel can be understood as the unitary evolution of the system coupled to an environment [4]. The constraint of complete positivity can be understood as follows: if we extend the map ℰ by the identity operation of arbitrary dimension, the composed map ℰ ⊗ 𝟙, which acts on a larger system, should still be positive. An example of a map that is positive but not CP is the transposition map, which, when extended to a larger system, maps entangled states to operators that fail to be positive semi-definite [3, chapter 11.1].

Geometrical representation of qubit maps
Every qubit channel ℰ (a quantum channel mapping a qubit state onto a qubit state) can be described completely by its action on the Bloch sphere, see [14–16]: it is fully characterized by the 4×4 matrix Q_ℰ mapping the four-dimensional vector (1, r)^T to (1, r′)^T,

Q_ℰ = ( 1  0 ; t_ℰ  T_ℰ ),

where the upper row ensures trace preservation. A state ρ described by its Bloch vector r is then mapped by the quantum channel ℰ to the new state ρ′ with Bloch vector r′ = T_ℰ r + t_ℰ. A qubit channel is called unital if it leaves the completely mixed state invariant, i.e. t_ℰ = 0. The whole information is then contained in the 3×3 real matrix T_ℰ, which we refer to as the correlation matrix of the channel. The matrix T (from now on we drop the index ℰ) can be expressed through its signed singular value (SSV) decomposition [15, equation (9)], T = R_1 D R_2. Here, R_1 and R_2 are proper rotations (elements of the SO(3) group), corresponding to unitary channels, and D = diag(η_1, η_2, η_3) is a real diagonal matrix. This can be interpreted rather easily: a unital qubit channel maps the Bloch sphere onto an ellipsoid, centered around the origin, that fits inside the Bloch sphere. First the Bloch sphere is rotated by R_2, then it is compressed along the coordinate axes by factors |η_i|, and the resulting ellipsoid is rotated again by R_1. Hence, apart from the unitary freedom in the input and output, the unital quantum channel is completely characterized by its SSV η = (η_1, η_2, η_3). The allowed values for η lie inside a tetrahedron 𝒯_CP (the index CP stands for completely positive), with vertices v_1^CP = (1, 1, 1), v_2^CP = (1, −1, −1), v_3^CP = (−1, 1, −1) and v_4^CP = (−1, −1, 1).
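A signed singular value decomposition can be obtained numerically from an ordinary SVD by absorbing the determinant signs of the orthogonal factors into the diagonal; a sketch (sign-placement conventions vary, this is one consistent choice):

```python
import numpy as np

# T = R1 @ diag(eta) @ R2 with R1, R2 in SO(3): start from the ordinary SVD
# and flip the sign of the last column/row together with the last eta whenever
# an orthogonal factor has determinant -1.

def ssv_decomposition(T):
    U, s, Vt = np.linalg.svd(T)
    eta = s.copy()
    if np.linalg.det(U) < 0:
        U[:, -1] *= -1
        eta[-1] *= -1
    if np.linalg.det(Vt) < 0:
        Vt[-1, :] *= -1
        eta[-1] *= -1
    return U, eta, Vt

T = np.diag([0.9, 0.5, -0.3])      # e.g. a unital channel's correlation matrix
R1, eta, R2 = ssv_decomposition(T)
assert np.allclose(R1 @ np.diag(eta) @ R2, T)
assert np.isclose(np.linalg.det(R1), 1.0) and np.isclose(np.linalg.det(R2), 1.0)
```

Each sign flip cancels in the product, so the reconstruction holds by construction while both rotations end up proper.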

Steering
In quantum mechanics, measurement outcomes on two spatially separated parts of a composite quantum system can be highly correlated [17]; furthermore, the choice of measurement operator on one side can strongly influence or even determine the state on the other side [18], a phenomenon known as 'steering'.
A measurement on Alice's side steers the conditional state on Bob's side, which can be computed from the bipartite state ρ_AB [19, p. 3]. This formula becomes particularly simple for a maximally entangled two-qubit state ρ_AB: there the marginals are completely mixed, and equation (9) reduces to the action of the correlation part alone. Steering maps have been intensely studied, especially in terms of entanglement characterization [20, 21]. In analogy to the treatment of qubit channels, we can associate a unique ellipsoid inside the Bloch sphere with a two-qubit state, known as the steering ellipsoid, which encodes all the information about the bipartite state [20].
Every bipartite two-qubit state can be expanded in the Pauli basis as

ρ_AB = (1/4) Σ_{μν=0}^{3} Θ_μν σ_μ ⊗ σ_ν,    σ_0 = 𝟙.

Note that we define Θ as the transpose of the one defined in [20], since we want to treat steering from Alice to Bob. The matrix Θ contains all the information about the bipartite state and can be written as

Θ = ( 1  b^T ; a  T_𝒮 ),

where a (b) denotes the Bloch vector of Alice's (Bob's) reduced state. T_𝒮 is a 3×3 real matrix that encodes all the information about the correlations, and we will refer to it as the correlation matrix of the steering map.
In this work we only consider bipartite qubit states which have completely mixed reduced states, or equivalently a = b = 0. In analogy to unital channels we call such states unital two-qubit states and the corresponding maps unital steering maps. Up to local unitary operations on the two parts, the correlation matrix T_𝒮 is characterized by its SSV η_1, η_2, η_3. The allowed values of these are given through the positivity constraint on the density operator ρ_AB, defined up to local unitaries as (see equation (6) in [21])

ρ_AB = (1/4) (𝟙 ⊗ 𝟙 + Σ_i η_i σ_i ⊗ σ_i).

The positivity of ρ_AB implies conditions on the η_i (the derivation is analogous to the one for channels). These are the same as for unital qubit channels (equation (6)) up to a sign flip, and define the tetrahedron 𝒯_CcP of unital completely co-positive trace preserving (CcPTP) maps [3, 15], with vertices v_1^CcP = (−1, −1, −1), v_2^CcP = (−1, 1, 1), v_3^CcP = (1, −1, 1) and v_4^CcP = (1, 1, −1).
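The correspondence between the SSV and state positivity can be verified numerically; the following sketch builds ρ_AB from a given SSV triple (assuming the Pauli-basis form above) and checks its spectrum:

```python
import numpy as np

# Unital two-qubit state from the steering-map SSV eta:
# rho_AB = (1/4) (I (x) I + sum_i eta_i sigma_i (x) sigma_i).

SIGMA = (np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex))

def rho_from_ssv(eta):
    rho = np.eye(4, dtype=complex)
    for e, s in zip(eta, SIGMA):
        rho += e * np.kron(s, s)
    return rho / 4

def is_state(eta, tol=1e-9):
    """Positivity of rho_AB, i.e. membership of eta in the CcP tetrahedron."""
    return bool(np.linalg.eigvalsh(rho_from_ssv(eta)).min() >= -tol)

print(is_state((-1, -1, -1)))   # True: the singlet state
print(is_state((1, 1, 1)))      # False: the identity channel's SSV is no state
```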

Positive maps
We have seen that a quantum channel is a CPTP map and that a steering map is a CcPTP map. Both of them are necessarily positive maps. But are there positive maps that are neither CcP nor CP? Or are there maps that are even both? This issue is nicely worked out in [3, chapter 11]. We shortly review it for unital qubit maps. Since we still deal with linear maps, it is straightforward that every unital positive one-qubit map can also be described by a 3×3 correlation matrix, and hence we can also analyze its SSV. The allowed SSV lie inside the cube defined by |η_i| ≤ 1, i = 1, 2, 3 [3, FIG. 11.3]. This is illustrated in figure 2. Note again that we only treat unital maps. We see that there are positive maps which are neither CP nor CcP. According to the Størmer–Woronowicz theorem (see e.g. [3, p. 258]) every positive qubit map is decomposable, i.e. it can be written as a convex combination of a CP and a CcP map. Maps that are both CP and CcP are called super positive (SP). The set of allowed SSV of the correlation matrices of these maps forms an octahedron (green region in figure 2), the convex hull of the points ±ê_i, where ê_i denotes the unit vector along the i-axis; equivalently, Σ_i |η_i| ≤ 1. These correlations are generated by entanglement-breaking quantum channels [22] and steering maps based on separable states [20]. When such classical correlations are observed, one cannot infer anything about the causal structure [2, p. 10 of supplementary information].
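The regions discussed here (cube, two tetrahedra, SP octahedron) translate into simple inequalities on the SSV. A small classifier sketch, using the Fujiwara–Algoet-type conditions |η_1 + η_2| ≤ 1 + η_3 and |η_1 − η_2| ≤ 1 − η_3 for CP and a global sign flip for CcP (our reconstruction of the tetrahedra defined in (7) and (14)):

```python
# Classify the SSV (eta_1, eta_2, eta_3) of a unital qubit map: positive maps
# fill the cube |eta_i| <= 1, CP and CcP maps the two tetrahedra, and super
# positive (SP) maps their intersection, the octahedron sum_i |eta_i| <= 1.

def in_cp(eta):
    e1, e2, e3 = eta
    return abs(e1 + e2) <= 1 + e3 and abs(e1 - e2) <= 1 - e3

def in_ccp(eta):
    return in_cp(tuple(-e for e in eta))   # CcP tetrahedron: mirror image of CP

def classify(eta):
    if not all(abs(e) <= 1 for e in eta):
        return "not positive"
    if in_cp(eta) and in_ccp(eta):
        return "SP"
    if in_cp(eta):
        return "CP only"
    if in_ccp(eta):
        return "CcP only"
    return "positive only"

print(classify((1, 1, 1)))         # CP only: the identity channel
print(classify((-1, -1, -1)))      # CcP only: steering by a singlet
print(classify((0.3, 0.3, 0.3)))   # SP: classical correlations
```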
For higher-dimensional systems things change. Already for three-dimensional maps, i.e. qutrit maps, there exist positive maps that cannot be represented as a convex combination of a CP and a CcP map [3, chapter 11.1]. In the next section we discuss how much information about causal influences we can obtain by looking only at the SSV related to the correlations Alice and Bob can observe in a bipartite experiment.

Figure 2. The SSV of positive unital qubit maps lie within the cube defined in (16). Quantum channels corresponding to CPTP maps lie within the blue tetrahedron 𝒯_CP defined in (7); steering maps corresponding to CcPTP maps lie within the yellow tetrahedron 𝒯_CcP defined in (14). The maps with SSV inside the intersection of 𝒯_CP and 𝒯_CcP (green octahedron) are called super positive. These maps only produce classical correlations, corresponding to separable states or entanglement-breaking channels, but can also be generated by mixtures of quantum correlations.

Setting
We now tackle the problem of causal inference in the two-qubit scenario [2]. The setting is as follows. An experimenter, Alice, sits in her laboratory. She opens her door just long enough to obtain a qubit in a (locally) completely mixed state and closes the door again. She performs a projective measurement in any of the Pauli eigenbases, records her outcome, opens her door again and puts the qubit, now in the collapsed state, outside. Apart from the qubit she has no way of interacting with the environment. Some time later another experimenter, Bob, opens the door of his laboratory and obtains a qubit. He also measures in the eigenbasis of one of the Pauli matrices and records the outcome. They repeat this procedure a large (ideally: an infinite) number of times. Then they meet and analyze their joint measurement outcomes. These define the probabilities P(a, b | j, i) for the outcomes a ∈ {−1, 1} and b ∈ {−1, 1} of Alice's and Bob's measurements, given they measured in the eigenbasis of the jth and ith Pauli matrix, respectively. For the marginals we assume P(a | j) = Σ_b P(a, b | j, i) = 1/2 for all a and j. From these data they build the correlation matrix M with entries M_ij = ⟨σ_j σ_i⟩ = 2 P(b = 1 | a = 1, j, i) − 1, where P(b = 1 | a = 1, j, i) is the probability that Bob obtains outcome 1 when measuring the observable σ_i, conditioned on Alice's measurement of σ_j with outcome 1, and ⟨σ_j σ_i⟩ denotes the expectation value of the product of Alice's σ_j and Bob's σ_i measurement outcomes.
The correlation matrix defines a unique positive trace preserving unital map ℳ: ρ_A ↦ ρ_B. Alice and Bob are guaranteed one of the following three possibilities: either they measured the same qubit, which was propagated in terms of a unital quantum channel ℰ from Alice to Bob; or they each measured one of the two qubits of a unital bipartite state ρ_AB acting as a common cause, so that the correlations were caused by the corresponding steering map 𝒮; or the map from ρ_A to ρ_B is a probabilistic mixture where with probability p the steering map 𝒮 was realized and with probability (1 − p) the quantum channel ℰ (see figure 3), that is

ℳ = p 𝒮 + (1 − p) ℰ,     (19)

with the 'causality parameter' p ∈ [0, 1]. The task of Alice and Bob is now to find the true value of p and possibly also the nature of ℰ and 𝒮. In general there does not exist a unique solution, and in this case they want to find the values of p for which maps of the form (19) explain the observed correlations. As we mentioned in the previous section, every positive one-qubit map is decomposable, so a possible explanation always exists. The decomposition (19) can be given a causal interpretation, where ℰ is considered to be a cause-effect explanation of the correlations and 𝒮 a common-cause explanation.
In the following subsections we give bounds on the causality parameter p and then consider some extremal cases. In section 4.4 we generalize a part of the work of Ried et al [2] and see how additional assumptions on the nature of ℰ and 𝒮 can lead to a unique solution.
In the following let M, E, S denote the correlation matrices of ℳ, ℰ, 𝒮, and η^M, η^E, η^S the SSV of M, E, S, respectively. We first investigate, for a fixed p, what the possible SSV of the correlation matrix of a map ℳ are such that ℳ is p-causal. This leads to the following theorem: ℳ is p-causal if and only if

η^M ∈ 𝒯_p ≔ (1 − p) 𝒯_CP + p 𝒯_CcP = conv{(1 − p) v_i^CP + p v_j^CcP : i, j = 1, …, 4}     (22)

(see figure 4), where the vertices v_i^CP of CP maps are given in (8), and the vertices v_j^CcP of CcP maps in (15).
Proof. '⇐': From (22) we see that every η ∈ 𝒯_p is a convex combination η = Σ_ij p_ij [(1 − p) v_i^CP + p v_j^CcP] with p_ij ≥ 0 and Σ_ij p_ij = 1. Now define q_i ≔ Σ_j p_ij and r_j ≔ Σ_i p_ij. Clearly q_i, r_j ≥ 0 and Σ_i q_i = Σ_j r_j = 1. We can then write η = (1 − p) Σ_i q_i v_i^CP + p Σ_j r_j v_j^CcP, i.e. as a p-weighted mixture of a point in 𝒯_CP and a point in 𝒯_CcP. □

We have seen that for a given value of p the allowed SSV associated with a positive map ℳ that is p-causal lie within 𝒯_p given in (22). We now turn the task around and go back to the causal inference scenario. Given a positive map ℳ, we want to tell if we can bound the causality parameter p. We will do this based on the following definition: the causal interval of ℳ is I_ℳ ≔ {p ∈ [0, 1] : ℳ is p-causal}. The endpoints p_min and p_max of the causal interval can then be expressed in terms of the projection of η^M onto the vector v_1^CP (v_1^CcP) defining a vertex of the CPTP (CcPTP) tetrahedron 𝒯_CP (𝒯_CcP).
Note that the assumption η_i ≥ 0 for i = 1, 2 can always be met, using the unitary freedom in the decomposition in the right way.
Proof. We show the theorem for p_max; the determination of p_min can be treated in an analogous way.
First, note that p_max is attained when η^M lies on the facet of 𝒯_p closest to the vertex v_1^CP (see figure 5). Since this facet is perpendicular to v_1^CP, η^M lies on this facet if its projection onto v_1^CP equals the vector pointing from the origin to the intersection of the facet and v_1^CP, as shown in figure 5. Hence we get an equation determining p_max.
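The causal interval can also be obtained numerically, directly from its definition. The following sketch is my own construction, not the paper's closed-form projection formula: it scans a grid of p values and tests whether a given SSV vector lies in the convex hull of the pairwise combinations (1 − p) v_i^CP + p v_j^CcP, using the standard vertex coordinates of the unital-qubit-channel tetrahedron and its point reflection; the names `in_hull` and `causal_interval` are hypothetical.

```python
import numpy as np
from itertools import combinations

# Vertices of the CP tetrahedron (SSV of unital qubit channels) and of the
# CcP tetrahedron (its point reflection through the origin).
V_CP = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
V_CCP = -V_CP

def in_hull(pts, x, tol=1e-9):
    """Membership test x in conv(pts) for full-dimensional 3D hulls:
    check every supporting plane spanned by a triple of candidate vertices."""
    for a, b, c in combinations(pts, 3):
        n = np.cross(b - a, c - a)
        if np.linalg.norm(n) < tol:
            continue                      # degenerate triple, no plane
        d = pts @ n - a @ n
        if d.max() <= tol and x @ n - a @ n > tol:
            return False                  # x outside a supporting plane
        if d.min() >= -tol and x @ n - a @ n < -tol:
            return False
    return True

def causal_interval(eta, grid=np.linspace(0, 1, 21)):
    """Scan p: eta is p-causal iff it lies in the convex hull of the
    pairwise combinations (1-p) v_i^CP + p v_j^CcP (Minkowski sum)."""
    ps = []
    for p in grid:
        verts = np.array([(1 - p) * v + p * w for v in V_CP for w in V_CCP])
        if in_hull(verts, np.asarray(eta, float)):
            ps.append(float(p))
    return (min(ps), max(ps)) if ps else None

print(causal_interval([1, 1, 1]))   # identity channel: only p = 0 -> (0.0, 0.0)
print(causal_interval([0, 0, 0]))   # completely depolarizing: (0.0, 1.0)
```

The second example recovers the statement that superpositive maps (here the zero SSV vector) admit every p ∈ [0, 1].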

Extremal cases
In the previous section we found the general form of the causal interval I_ℳ for an observed map ℳ. We now analyze the extremal cases where the interval reduces to a single value or, on the other hand, is given as I_ℳ = [0, 1]. As already noted in [2, table 1], there are extremal cases that allow for a complete solution of the problem even without any additional constraints. This is the case if η^M equals one of the vertices of the cube of positive maps, see figure 2. The solution is then either p = 0 (pure cause-effect), which was actually already noted in [23] as the value 1 of a 'causality measure', if the SSV are all positive or exactly two are negative, or p = 1 (pure common cause) if the SSV are all negative or exactly one is positive. The exact reconstruction of ℰ or 𝒮 in these cases is trivial. Interestingly, with theorem 4.2 we can show that every point on the edges of the cube 𝒬 defined in (16) gives us a unique solution without additional constraints.

Proof. Let ℳ be a positive map and M the corresponding correlation matrix with SSV decomposition M = R_1 D′ R_2 for two rotations R_1, R_2 ∈ SO(3), where the diagonal of D′ lies on an edge of 𝒬. Due to the freedom in R_1 and R_2 this describes all maps with corresponding vector of SSV on one of the edges of the cube 𝒬 defined in (16). According to theorem 4.2 we find a unique value of p. By theorem A.1 it follows that the maps ℰ and 𝒮 in the decomposition (19) necessarily correspond to extremal points in 𝒯_CP and 𝒯_CcP defined in (7) and (14) (unitary channel and maximally entangled state). It is then obvious that this is the only possible solution. □

Note that for arbitrary p ∈ [0, 1] the SSV of the correlation matrix M lie on the edges of the cube 𝒬 if and only if E and S have an SSV decomposition with respect to the same rotations and the SSV lie on adjacent vertices of 𝒬. The proof for this is provided in appendix A.2.
In the other extreme case, if the map ℳ is superpositive, i.e. η^M lies in the intersection of 𝒯_CP and 𝒯_CcP (see figure 2), it could be explained by a pure CPTP map, a pure CcPTP map, or any convex combination of those two. Therefore one cannot give any restrictions on possible values of p [2, III.E of supplementary information].
Proof. Let ℳ be a superpositive map. There exists an SSV decomposition of its correlation matrix for which η^M lies in the set of superpositive maps, defined in (…).

Additional assumptions / Causal inference with constrained classical correlations
So far we only assumed that our data is generated by a unital channel and a unital state (a state whose reduced states are completely mixed). We have seen that in some extreme cases a unique solution to the problem can be found. Ried et al showed that one can always find a unique solution for p if one restricts the channel to unitary channels and the bipartite states to maximally entangled pure states [2]. Furthermore, it is then possible to reconstruct the channel and the state up to a binary ambiguity, meaning there are two explanations leading to the same observed correlations. The ellipsoids associated with unitary channels and maximally entangled states are spheres with unit radius, and the SSV of their correlation matrices correspond to the vertices of 𝒯_CP and 𝒯_CcP, respectively. In the following we investigate this scenario again, but add a known amount of noise in the channel or in the bipartite state. For the channel this is done by mixing the unitary evolution with a completely depolarizing channel [4]. The completely depolarizing channel maps every Bloch vector to the origin, ρ ↦ 𝟙/2, and hence is represented by the zero matrix. The ellipsoid associated with the mixture of a completely depolarizing channel with a unitary channel thus is a shrunk sphere. For strong enough noise the result eventually becomes an entanglement breaking channel, which only produces 'classical' correlations [22]. Due to the unitary freedom compared to standard depolarizing channels, we call these channels generalized depolarizing channels. For the state we mix a pure maximally entangled state with the completely mixed state, whose correlation matrix is given by the zero matrix. We call the resulting state a generalized Werner state, in the sense that instead of a convex combination of a singlet and a completely mixed state [24] we allow the convex combination of an arbitrary maximally entangled state with the completely mixed state.
States beyond a certain threshold of noise become separable and the correlations become 'classical' [20]. We will then see that even when confronted with purely classical correlations, if we have enough a priori knowledge about the data generation, i.e. we know the amount of noise, we can still find a solution analogous to [2], in the sense of determining the parameter p uniquely, and the channel and the state up to a binary ambiguity¹. We will first keep the unitary channel, start with a generalized Werner state, and show how one can recreate the scenario of Ried et al. Then we will add the noise in the channel.

Solution of the causal inference problem using generalized Werner states
The analysis follows closely, in spirit, section 3.4 of the supplementary information of [2]. We start again with equation (19) and assume that the steering map 𝒮 is generated by a shared generalized Werner state

ρ_AB = (1 − ε)|ψ⟩⟨ψ| + ε 𝟙/4,

where the parameter ε ∈ (0, 1) is known and fixed in advance and |ψ⟩ is an unknown maximally entangled pure state. The map ℰ is generated by an unknown unitary channel U.
Since ε is fixed, the class of allowed explanations is completely defined up to unitary freedom in the channel and in the state. Hence the number of free parameters is the same as in the case considered in [2], which coincides with the case ε = 0. For ε ≥ 2/3 the state ρ_AB becomes separable, i.e. it is not entangled anymore, see [24] and figure 5 in the supplementary information of [20]. But the reconstruction works independently of ε. Hence, we see here that the possibility of reconstruction hinges not on the entanglement in ρ_AB but on the prior knowledge we have about ρ_AB.
The correlation matrix corresponding to the generalized Werner state is simply that of a maximally entangled state shrunk by a factor 1 − ε and will thus be denoted (1 − ε)S, where S is the correlation matrix corresponding to a maximally entangled state. Thus in our scenario the information Alice and Bob obtain characterizes the matrix

M = p(1 − ε)S + (1 − p)E.

The ellipsoid is described by the eigenvalues and eigenvectors of MM^T. The eigenvectors correspond to the directions of the semi-axes and the square roots of the eigenvalues are their lengths. There is one degenerate pair and another single one. The eigenvector corresponding to the non-degenerate semi-axis is parallel to n, which is defined as the axis on which the images of a point on the Bloch sphere under S and E are diametrically opposed. The length of this semi-axis then yields equation (32), from which p can be determined when ε is known. On the other hand, if we do not have prior knowledge of ε, then in general we cannot determine p with (32). This ambiguity can easily be illustrated by looking at an example. Take U = σ_x and |ψ⟩ = (|00⟩ − |11⟩)/√2. Combining the resulting correlation matrices for arbitrary ε and p shows that for all values of the parameters with pε = const., the measurement statistics for Alice and Bob are exactly the same, and there is no way to distinguish different pairs of values. Analogously to using a generalized Werner state for the steering map, we can also use a generalized depolarizing channel. Then, with prior knowledge of the amount of noise, we can still find a complete solution even though the resulting channel might be entanglement breaking.
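The shrinking of the correlation matrix by 1 − ε can be checked directly. The following sketch (illustrative; ε = 0.4 is chosen arbitrarily) computes the correlation matrix T_ij = Tr[ρ (σ_i ⊗ σ_j)] of a generalized Werner state built on |ψ⟩ = (|00⟩ − |11⟩)/√2:

```python
import numpy as np

# Pauli matrices sigma_x, sigma_y, sigma_z
sig = [np.array([[0, 1], [1, 0]]),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]])]

def corr(rho):
    """Correlation matrix T_ij = Tr[rho (sigma_i (x) sigma_j)]."""
    return np.array([[np.real(np.trace(rho @ np.kron(sig[i], sig[j])))
                      for j in range(3)] for i in range(3)])

psi = np.array([1, 0, 0, -1]) / np.sqrt(2)      # (|00> - |11>)/sqrt(2)
rho_pure = np.outer(psi, psi.conj())
eps = 0.4                                        # arbitrary known noise level
rho_w = (1 - eps) * rho_pure + eps * np.eye(4) / 4   # generalized Werner state

T_pure, T_w = corr(rho_pure), corr(rho_w)
print(np.allclose(T_w, (1 - eps) * T_pure))     # True: shrunk by (1 - eps)
```

Since the completely mixed part contributes a zero correlation matrix, the mixture shrinks T_pure linearly, exactly as stated above.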

Generalized depolarizing channel and generalized Werner state
We shall now consider the case where both the channel and the state are mixed with a known amount of noise. We therefore take the generalized depolarizing channel with noise parameter ε_e and the generalized Werner state with noise parameter ε_c, so that the observed correlation matrix reads M = p(1 − ε_c)S + (1 − p)(1 − ε_e)E. The reconstruction works as follows. Without loss of generality we assume ε_e ≤ ε_c (in the other case we just have to carry out the reconstruction discussed in the previous subsection for the entanglement breaking channel instead of the Werner state). The only thing we have to do is to divide by (1 − ε_e) to restore the problem of the previous section. The rest can then be solved as in the previous subsection. Again we remark that nothing changes if we have ε_c ≥ 2/3 and ε_e ≥ 2/3, even though at those transitions the states become separable and the channels entanglement breaking, respectively.
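A quick numerical sanity check of the rescaling step (with randomly generated stand-in matrices S and E, not actual correlation matrices): dividing M by (1 − ε_e) leaves a mixture with an effective Werner noise ε′ given by 1 − ε′ = (1 − ε_c)/(1 − ε_e):

```python
import numpy as np

rng = np.random.default_rng(1)
# stand-in 3x3 matrices playing the role of the correlation matrices S and E
S, E = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
p, eps_c, eps_e = 0.3, 0.5, 0.2          # assume eps_e <= eps_c

M = p * (1 - eps_c) * S + (1 - p) * (1 - eps_e) * E   # observed correlations
M_rescaled = M / (1 - eps_e)

# effective Werner noise after rescaling: (1 - eps') = (1 - eps_c)/(1 - eps_e)
eps_eff = 1 - (1 - eps_c) / (1 - eps_e)
print(np.allclose(M_rescaled, p * (1 - eps_eff) * S + (1 - p) * E))  # True
```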

Discussion
In this work we extended the results initially found by Ried et al [2]. We introduced an active and a passive quantum observational scheme as analogies to the classical observational scheme. The passive quantum observational scheme does not allow for an advantage over classical causal inference. In the active quantum observational scheme Alice and Bob can freely choose their measurement bases, which in principle allows for signaling. However, we investigated the quantum advantage over classical causal inference in a scenario where signaling is not possible in the active quantum observational scheme, as Alice's incoming state is completely mixed.
We showed how the geometry of the set of SSV of correlation matrices representing positive maps of the density operator, ρ_A ↦ ρ_B, determines the possibility to reconstruct the causal structure linking ρ_A and ρ_B. We showed that there are more cases than previously known for which a complete solution of the causal inference problem can be found without additional constraints, namely all correlations created by maps whose SSV of the correlation matrix lie on the edges of the cube of positive maps 𝒬 defined in (16). A necessary and sufficient condition for the SSV of the correlation matrix to lie on one of the edges is that the map is a mixture of a maximally entangled state with a unitary channel and that their corresponding correlation matrices have an SSV decomposition involving the same rotations, with the resulting SSV on two adjacent vertices of 𝒬.
For correlations guaranteed to be produced by a mixture of a unital channel and a unital bipartite state, we quantified the quantum advantage by giving the intervals for possible values of the causality parameter p. Here, in order to constrain p, and hence have an advantage over classical causal inference, it is necessary that the correlations were caused by an entangled state and/or an entanglement preserving channel. This is because correlations caused by any mixture of a separable state and an entanglement breaking quantum channel always describe SP maps. According to theorem 4.2 the causal interval for any SP map ℳ is I_ℳ = [0, 1]. Hence, SP maps do not allow any causal inference. Things change when we further strengthen the assumptions on the data generating processes and allow only unitary freedom in the state, corresponding to a generalized Werner state with given degree of noise ε_c, or unitary freedom in the channel, corresponding to a generalized depolarizing channel with given degree of noise ε_e. We showed that in this scenario the causality parameter p can always be uniquely determined and in most cases the state and the channel can be reconstructed up to a binary ambiguity. For ε_c ≥ 2/3 the state becomes separable and for ε_e ≥ 2/3 the channel entanglement breaking, but causal inference remains feasible. Therefore entanglement and entanglement preservation are not a necessary condition in this scenario. The assumptions on the data generating processes, i.e. a priori knowledge of ε_c and ε_e, are strong enough that even correlations corresponding to SP maps reveal the underlying causal structure.

Appendix A: Signed singular values

Every real n×n matrix A admits a singular value decomposition A = O_1 D O_2, where O_{1,2} are orthogonal matrices and the diagonal matrix D contains the (non-negative) SV. In contrast, we call A = R_1 D′ R_2 the SSV decomposition (also called real SV decomposition [25]) of A, where R_i ∈ SO(n) are orthogonal matrices with determinant equal to one. In the 3×3 scenario these correspond to proper rotations in ℝ³. The diagonal matrix D′ contains the SSV of A.
The SSV have the same absolute values as the SV but can additionally carry negative signs. Concretely, the freedom in choosing R_1 and R_2 allows one to get any permutation of the SV on the diagonal of D′ together with an even or odd number of minus signs, depending on whether A has positive or negative determinant, respectively. If at least one singular value equals 0, the number of signs becomes completely arbitrary. Using the same matrix B as above we give two different SSV decompositions as an example. For the SSV decomposition we define a canonical order with the absolute values of the SV sorted in decreasing order and a negative sign only on the last entry if the matrix has negative determinant, as in (48). The rotational freedom in (46) allows for arbitrary permutations of the order of the SV and the addition of any even number of minus signs.
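As a concrete illustration of this canonical convention, here is a small sketch (not from the paper) that builds an SSV decomposition from NumPy's standard SVD by pushing the sign of any improper factor into the diagonal; the function name `ssv_decomposition` and the example matrix are my own choices:

```python
import numpy as np

def ssv_decomposition(A):
    """Canonical SSV decomposition A = R1 @ diag(ssv) @ R2 with R1, R2 in SO(n):
    absolute values sorted decreasingly, a minus sign only on the last entry
    if det(A) < 0 (a sketch of the convention described in the text)."""
    U, s, Vt = np.linalg.svd(A)          # A = U diag(s) Vt, s sorted decreasingly
    d = np.ones(len(s))
    if np.linalg.det(U) < 0:
        U[:, -1] *= -1; d[-1] *= -1      # push the sign into D', keep det(R1) = +1
    if np.linalg.det(Vt) < 0:
        Vt[-1, :] *= -1; d[-1] *= -1     # likewise for R2
    return U, s * d, Vt

A = np.array([[2., 0., 0.], [0., 0., 1.], [0., 1., 0.]])   # det(A) = -2
R1, ssv, R2 = ssv_decomposition(A)
print(ssv)                                                  # [ 2.  1. -1.]
print(np.allclose(R1 @ np.diag(ssv) @ R2, A))               # True
```

Since det(A) < 0, exactly one of the two orthogonal SVD factors is improper; flipping its last column (or row) together with the last diagonal sign restores proper rotations without changing the product.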
Confusion may arise since, for example, an ℝ³ permutation matrix corresponding to a permutation of exactly two coordinates has determinant −1, so why would it be allowed? The point is that we do not want to permute the elements of a vector, but the diagonal elements of a matrix. We illustrate this by permuting two components of (i) a vector and (ii) a diagonal matrix: since P_yz D P_yz = R_x(π/2) D R_x(π/2)^T for any diagonal matrix D, the effect of permuting the second and third diagonal entry of a diagonal matrix can also be obtained by proper rotations, and correspondingly for other permutations of the SSV. Hence all permutations of the SSV are allowed.
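This can be verified in a few lines (my own check, with an arbitrary diagonal matrix): conjugating by the proper rotation R_x(π/2) swaps the second and third diagonal entries, exactly as the determinant −1 permutation matrix P_yz would:

```python
import numpy as np

# R_x(pi/2): proper rotation (det = +1) about the x axis
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
Rx = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
D = np.diag([3., 2., -1.])

# conjugation permutes the 2nd and 3rd diagonal entries without sign changes
print(np.round(Rx @ D @ Rx.T, 10))   # diag(3, -1, 2)
```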
Fan [26] gave bounds on the SV of A + B given the SV of two real matrices A and B, derived from the corresponding results for eigenvalues of Hermitian matrices, using that the symmetric matrix

Â ≔ [0_{n×n}, A; A^T, 0_{n×n}] ∈ ℝ^{2n×2n}

has as eigenvalues the SV of A together with their negatives. Figure A1 presents an illustration of the case n = 2. Here the second equation follows from the linearity of matrix addition in every element and the last equality from (52). To see (60), define Δ′_w analogously to (53) but without the constraint Π_n s_n = 1, i.e. allowing arbitrary sign flips. The analogous statements of (57) and (58) hold if we exchange the SSV with the absolute SV, proper rotations (elements of SO(n)) with orthogonal matrices (elements of O(n)), and Δ_w with Δ′_w. With equation (57) the diagonal entries of R_1 E R_2 and R_1 S R_2 are constrained to 𝒯_CP and 𝒯_CcP, respectively. In order to fulfill the second equality in equation (63) it is hence necessary that the first two diagonal entries of R_1 E R_2 and of R_1 S R_2 are equal to one, respectively. The only elements in 𝒯_CP and 𝒯_CcP allowing for this are (1, 1, 1) and (1, 1, −1), respectively. Hence the diagonal entries of R_1 E R_2 and of R_1 S R_2 equal their SSV. To see that this implies that R_1 E R_2 and R_1 S R_2 are diagonal matrices, and hence E and S have a common SSV decomposition, consider the Frobenius norm, which for an n×n real matrix A is given as

‖A‖_F² = Σ_ij A_ij² = Σ_i σ_i²,     (64)

where we used that A^T A has the squared SV (and hence the squared SSV) of A on its diagonal [25]. Now if A has its SSV on the diagonal, i.e. A_ii = σ̃_i(A) for i = 1, …, n, equation (64) directly implies A_ij = 0 for i ≠ j, and hence A is diagonal. Now that E and S have a common SSV decomposition, the SSV of M are just the convex combination of the SSV of E and S. If those lie on the same edge of 𝒬, i.e. they are adjacent, then also those of M lie on the connecting edge. However, if the SSV are not adjacent, then their convex combination results in a point strictly inside 𝒬. □
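The spectral fact underlying Fan's argument is easy to verify numerically (an illustrative sketch with a random 3×3 matrix): the eigenvalues of the symmetric embedding [0, A; A^T, 0] are exactly ± the singular values of A.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))

# symmetric 2n x 2n block matrix [[0, A], [A^T, 0]]
Ahat = np.block([[np.zeros((3, 3)), A], [A.T, np.zeros((3, 3))]])
eigs = np.sort(np.linalg.eigvalsh(Ahat))
sv = np.linalg.svd(A, compute_uv=False)

# the spectrum is the singular values of A together with their negatives
print(np.allclose(eigs, np.sort(np.concatenate([-sv, sv]))))   # True
```

This reduction lets eigenvalue perturbation results for Hermitian matrices (such as Weyl's and Fan's inequalities) be carried over to singular values of sums.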