Updating the Born rule

Despite the tremendous empirical success of quantum theory there is still widespread disagreement about what it can tell us about the nature of the world. A central question is whether the theory is about our knowledge of reality, or a direct statement about reality itself. Regardless of their stance on this question, current interpretations of quantum theory regard the Born rule as fundamental and add an independent state-update (or "collapse") rule to describe how quantum states change upon measurement. In this paper we present an alternative perspective and derive a probability rule that subsumes both the Born rule and the collapse rule. We show that this more fundamental probability rule can provide a rigorous foundation for informational, or "knowledge-based", interpretations of quantum theory.


Introduction
Knowledge-based, or informational, views of quantum theory are popular for a variety of reasons. Perhaps one of the strongest motivations for this perspective comes from the conceptual difficulties that surround quantum state collapse upon measurement. If quantum states are a direct description of reality then this seems to demand that collapse is a nonlinear, stochastic and temporally ill-defined physical process [1][2][3][4]. From a 'knowledge' perspective, however, collapse is seen merely as a form of information update, no more problematic than classical probabilistic conditioning [5][6][7][8][9][10][11][12]. Whilst compelling, there is an obvious problem with this kind of approach: classical probabilistic conditioning treats multiple consecutive events on a single system on exactly the same footing as multiple events on distinct systems: joint probabilities are defined in exactly the same way in each case. Thus, classical joint probabilities can be assigned to events in a manner that is independent of the spatiotemporal relationships between those events. In quantum mechanics, however, the Born rule does not assign joint probabilities to consecutive events [13] (figure 1). This means that knowledge-based interpretations, where one argues that the Born rule is fundamental and the state update rule 'merely a case of probabilistic conditioning', are deeply unsatisfactory. Both rules have to be introduced and justified separately.
In this paper we aim to provide a solution to this problem and breathe new life into the knowledge-based view of quantum theory. We present a proof of a unified quantum probability rule that subsumes both the Born rule and the state update rule. This rule is useful in a variety of contexts, from quantum information [14][15][16][17][18][19] to quantum causal modelling [20][21][22][23], and non-Markovian dynamics [24][25][26][27]. Dubbed the 'Quantum Process Rule', we prove that one can derive this higher-order, generalised form of the standard quantum probability rule from the structure of quantum operations and a single non-contextuality assumption. This approach is analogous to Gleason's [28] and related [29][30][31][32] derivations of the ordinary Born rule. We also show that using this more fundamental approach, where one assigns joint probabilities to arbitrary quantum events, it is possible to derive both the Born rule and the state update rule. A key conceptual advantage is that state update, or 'collapse', need no longer be viewed as an ad hoc ingredient, independent and estranged from the core of the theory.

Measurements and the Born rule
In order to introduce the fewest possible assumptions, we take an explicitly operational perspective. Operational theories can be phrased in terms of events, which define the results of measurements. Each time a measurement is performed on a system, a number of possible events can be observed. The set of all events that can result from a specific measurement is called a context. It is natural when constructing such a theory to assume measurement non-contextuality [6,30]. This means that operationally indistinguishable events should have the same mathematical representation in the theory (see footnote 2). Clearly, any probabilistic theory can be formulated in a non-contextual way by appropriate relabelling of the mathematical objects describing events.
In this setting, the minimal task of a physical theory is to non-contextually assign probabilities to such measurement events. In essence this is the 'probability rule' of the theory and also defines the relevant state space. One can represent any such non-contextual probability rule (the Born rule being a prime example) by means of a frame function. This is a function that associates a probability to every event, independently of the context to which it belongs, such that the probabilities for all events in a given context sum to one (see footnote 3). Crucially, the frame function is not a probability distribution over the space of all events, as that would require a normalised measure over the entire space.
As was shown in [29][30][31][32], the notion of a frame function can be used to derive the Born rule as the appropriate non-contextual probability rule to apply when one identifies events with the results of a measurement on a quantum system. In this approach, events are identified with quantum effects: for a d-level quantum system, the full set of quantum effects is defined as $\mathcal{E} = \{E \in \mathcal{L}(\mathcal{H}_d) : 0 \leq E \leq \mathbb{1}\}$, with each context given by a POVM $\{E_j\}$, $\sum_j E_j = \mathbb{1}$. The subscript j labels the list of possible measurement outcomes for a given context.
Assuming measurement non-contextuality here means that the probability of a particular quantum effect (equivalently, measurement outcome) is assumed to be independent of the context (POVM) to which it belongs. Operationally, this means that the probability assigned to a given event does not depend on any extra information regarding how it was achieved.
Figure 1. Quantum probability rules. (a) The Born rule assigns probabilities to measurements on distinct systems: for a state ρ and measurement operators $E_A$, $E_B$, the probability is $P(E_A, E_B) = \mathrm{Tr}[\rho (E_A \otimes E_B)]$. (b) For two consecutive measurements on the same system, one cannot apply the Born rule without first updating the state. The state update rule, defined as $\rho \mapsto \mathcal{M}^A(\rho)/\mathrm{Tr}[\mathcal{M}^A(\rho)]$ for a completely positive map $\mathcal{M}^A$ describing the first measurement, is typically introduced as an independent axiom in the theory.

Footnote 2: In some works, a distinction is made between operational and ontological versions of non-contextuality [33][34][35], where the latter are used to rule out hidden-variable models. Although the expression 'measurement non-contextuality' was introduced in the ontological setting [33], we here use it in the operational sense (corresponding to the simple expression 'non-contextuality' used in [6,30]). The addition of 'measurement' is made here in order to distinguish the notion from 'instrument non-contextuality', a term we introduce in the next section.

Footnote 3: The term frame function used in this paper is distinct from Gleason's original use of the term. In Gleason's terminology, we are here considering a positive frame function of weight one. This is in accordance with the definition of a frame function in [29,30].

A frame function for quantum effects is defined as a mapping from the set of all effects to the unit interval, $f: \mathcal{E} \to [0,1]$, such that $\sum_j f(E_j) = 1$ for every POVM $\{E_j\}$ (see footnote 4). The proof in [30] follows three simple steps. First, one proves linearity of the frame function over the field of non-negative rational numbers; extension to full linearity is then obtained by proving continuity of the frame function. Then, as the frame function has been proved to be linear, it can be recast as arising from an inner product. In particular, using the Hilbert-Schmidt inner product on the operator space $\mathcal{L}(\mathcal{H}_d)$, the frame function can be written as

$$f(E) = \mathrm{Tr}[\rho E]$$

for some positive semidefinite, unit-trace operator ρ.
This both characterises the Born rule and also defines the density operator as the appropriate object to represent the quantum state.
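As a concrete numerical illustration (my own minimal sketch, not part of the original derivation; the state and POVMs are arbitrary choices), the Born-rule frame function $f(E) = \mathrm{Tr}[\rho E]$ assigns each effect a probability that is independent of the context containing it, while the probabilities within any one context sum to one:

```python
import numpy as np

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)  # a valid qubit state

def f(E):
    """Frame function: probability of effect E in state rho."""
    return np.real(np.trace(rho @ E))

I2 = np.eye(2)
E_shared = np.array([[0.5, 0.0], [0.0, 0.1]])   # one effect shared by two contexts
povm_1 = [E_shared, I2 - E_shared]              # context 1 (coarse)
half = (I2 - E_shared) / 2
povm_2 = [E_shared, half, half]                 # context 2 (finer grained)

# Non-contextuality: the shared effect gets the same probability in both contexts
assert np.isclose(f(povm_1[0]), f(povm_2[0]))
# Normalisation: each context's probabilities sum to one
for povm in (povm_1, povm_2):
    assert np.isclose(sum(f(E) for E in povm), 1.0)
```

The same effect appearing in two different POVMs receives a single probability, which is exactly the non-contextuality property the frame function encodes.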
As we have noted, the above proof does not tell us how to assign probabilities to consecutive events. That is, assuming we know the state of a quantum system prior to measurement, the Born rule alone does not tell us how to update this state following measurement. To remedy this situation, we now wish to provide a similar proof for a probability rule that can subsume both the Born rule and the state update rule.

Instruments and the quantum process rule
We consider more general operational primitives than those of [30] and instead consider local regions where one can perform actions that are associated with outcomes. The class of allowed local actions is broad: one can perform measurements, realise transformations, or even add and discard ancillary systems. Such actions can also be associated with local outcomes, and we define a particular single-case outcome, associated with a given action, as the relevant event.
The event thus now labels not only the outcome but also any concurrent transformation to the local system.
Just as with effects in the traditional approaches, we assume a minimal operational labelling for transformations: different interactions of the system with an environment, that cannot be distinguished by looking at the system alone, will be assigned the same label.
If we consider a particular run of an experiment there will in general be a collection of such events that occur, one for each local region. One can associate a joint probability to this set of events, and, given enough runs of an experiment, one can empirically verify probability assignments for each possible permutation of events.
Formally, an event in region A is represented by a completely positive (CP), trace-non-increasing map $\mathcal{M}^A : \mathcal{L}(\mathcal{H}^{A_I}) \to \mathcal{L}(\mathcal{H}^{A_O})$, where the input and output spaces are the spaces of linear operators over the input and output Hilbert spaces $\mathcal{H}^{A_I}$ and $\mathcal{H}^{A_O}$ of the local region; we write $L^A$ for the set of linear maps from $\mathcal{L}(\mathcal{H}^{A_I})$ to $\mathcal{L}(\mathcal{H}^{A_O})$. We denote the set of CP maps associated to each region by $CP^X \subset L^X$. We demand complete positivity because in principle it should be possible to perform arbitrary quantum operations in the local region. This includes performing operations on a subsystem that is part of a larger system. Complete positivity means that, for arbitrary dimensions of an ancillary system $A'$, the extended map $\mathcal{M}^A \otimes \mathcal{I}^{A'}$ maps positive operators to positive operators; the map is trace non-increasing if $\mathrm{Tr}[\mathcal{M}^A(\rho)] \leq \mathrm{Tr}[\rho]$ for all states ρ [37,38]. The context for each set of CP maps is now no longer a POVM but rather a quantum instrument [39]. An instrument thus represents the collection of all possible events that can be observed given a specific choice of local action. Given a local region A, an instrument is formally defined as a set $\mathcal{I}^A = \{\mathcal{M}^A_j\}$ of CP maps that sum to a completely positive trace-preserving (CPTP) map: $\sum_j \mathcal{M}^A_j \in CPTP$.

We are now in a position to define the relevant frame function and derive the appropriate probability rule for this scenario. Just as the Born rule tells us how to calculate the probability of a particular outcome given the relevant measurement operator, the Quantum Process Rule should tell us how to assign a joint probability to each possible collection of local events given the relevant instruments. We assume 'instrument' non-contextuality, rather than 'measurement' non-contextuality. That is, the joint probability for a set of events, one for each region, is independent of the particular context (set of instruments) to which they belong, see figure 3.
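The instrument structure can be made concrete with a small sketch (my own illustration, not code from the paper): a projective qubit measurement in the computational basis as two CP maps $\mathcal{M}_j(\rho) = P_j \rho P_j$, verified to be completely positive via the positivity of their Choi matrices, and whose sum is trace preserving:

```python
import numpy as np

P0 = np.array([[1, 0], [0, 0]], dtype=complex)
P1 = np.array([[0, 0], [0, 1]], dtype=complex)

def apply_map(kraus_ops, rho):
    """Apply a CP map given by its Kraus operators."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

def choi(kraus_ops, d=2):
    """Choi matrix sum_{jk} |j><k| (x) M(|j><k|) of the CP map."""
    C = np.zeros((d * d, d * d), dtype=complex)
    for j in range(d):
        for k in range(d):
            E_jk = np.zeros((d, d), dtype=complex)
            E_jk[j, k] = 1
            C += np.kron(E_jk, apply_map(kraus_ops, E_jk))
    return C

instrument = [[P0], [P1]]   # two CP maps, one Kraus operator each

# Each CP map has a positive semidefinite Choi matrix
for kraus in instrument:
    assert np.all(np.linalg.eigvalsh(choi(kraus)) > -1e-12)

# The maps sum to a CPTP map: sum_j K_j^dag K_j equals the identity
total = sum(K.conj().T @ K for kraus in instrument for K in kraus)
assert np.allclose(total, np.eye(2))
```

Coarse-graining outcomes (adding two of the CP maps) yields another valid instrument, which is the freedom the instrument non-contextuality assumption ranges over.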
As for [30], the non-contextuality assumption is formalised by requiring that probabilities are given by a frame function. Each 'frame' is now a collection of instruments, one per region, rather than a single POVM.

Definition 1. A frame function, f, for a set of local non-intersecting regions X = A, B, C, ..., is defined by:

1. f is a function from the Cartesian product of the sets of CP maps associated to each region, $CP^X \subset L^X$, to the unit interval:
$$f: CP^A \times CP^B \times CP^C \times \cdots \to [0,1]; \qquad (5)$$
2. f is normalised for all sets of CP maps $\{\mathcal{M}^X_j\}$ that form instruments $\mathcal{I}^X$:
$$\sum_{jkl\ldots} f(\mathcal{M}^A_j, \mathcal{M}^B_k, \mathcal{M}^C_l, \ldots) = 1. \qquad (6)$$

We now show that this definition is sufficient to derive the new probability rule. As in [30], we first prove linearity of the frame function.

Theorem 1. The frame function f is a convex-multilinear functional on $CP^A \times CP^B \times CP^C \times \cdots$. By convex-multilinear we mean that, for $0 \leq q \leq 1$ and $\mathcal{M}^A_1, \mathcal{M}^A_2 \in CP^A$,
$$f(q\mathcal{M}^A_1 + (1-q)\mathcal{M}^A_2, \mathcal{M}^B, \ldots) = q f(\mathcal{M}^A_1, \mathcal{M}^B, \ldots) + (1-q) f(\mathcal{M}^A_2, \mathcal{M}^B, \ldots),$$
and similarly for all other regions B, C, ....

Proof.
We fix the instruments at all regions except region A to be instruments containing a single CPTP map each: $\mathcal{I}^B = \{\mathcal{M}^B\}$, $\mathcal{I}^C = \{\mathcal{M}^C\}$, .... Consider two instruments applied in region A, $\mathcal{I}^A_1 = \{\mathcal{M}_1, \mathcal{M}_2, \mathcal{N}\}$ and $\mathcal{I}^A_2 = \{\mathcal{M}_1 + \mathcal{M}_2, \mathcal{N}\}$, whose elements sum to the same CPTP map. The frame function constraints imply
$$f(\mathcal{M}_1, \mathcal{M}^B, \ldots) + f(\mathcal{M}_2, \mathcal{M}^B, \ldots) = f(\mathcal{M}_1 + \mathcal{M}_2, \mathcal{M}^B, \ldots),$$
and thus we have additivity for CP, trace-non-increasing maps. We next prove homogeneity of the frame function for the rational numbers between 0 and 1. Take two integers $1 \leq n \leq m$ and a CP, trace-non-increasing map $\mathcal{M}$. By converting multiplications by integers into sums, from additivity of the frame function we have
$$f\!\left(\tfrac{n}{m}\mathcal{M}, \mathcal{M}^B, \ldots\right) = n\, f\!\left(\tfrac{1}{m}\mathcal{M}, \mathcal{M}^B, \ldots\right) = \tfrac{n}{m}\, f(\mathcal{M}, \mathcal{M}^B, \ldots).$$
Repeating the argument for each region establishes convex multilinearity.

Just as in ordinary quantum mechanics a state is defined as a linear functional over effects (POVM elements), we can define a multilinear functional over sets of events (CP maps) as a process, in accordance with the terminology of [20,22,41,42,43,44,45,46,47,48].
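The additivity and homogeneity used in the proof can be checked numerically. In this sketch (my own illustration; the fixed downstream effect and input state are arbitrary choices), a frame function built from sequential Born-rule probabilities is confirmed to be additive on sums of CP maps and homogeneous under rational rescaling. A sum of CP maps is represented by concatenating Kraus lists, and rescaling a map by q corresponds to rescaling its Kraus operators by sqrt(q):

```python
import numpy as np

rho = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)   # input state
E_B = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # fixed downstream effect

def f(kraus_ops):
    """Probability of the CP map (Kraus list) followed by effect E_B."""
    out = sum(K @ rho @ K.conj().T for K in kraus_ops)
    return np.real(np.trace(E_B @ out))

M1 = [np.array([[1, 0], [0, 0]], dtype=complex)]          # a CP map
M2 = [0.5 * np.array([[0, 1], [0, 0]], dtype=complex)]    # another CP map

# Additivity: f(M1 + M2) = f(M1) + f(M2); list concatenation is the map sum
assert np.isclose(f(M1 + M2), f(M1) + f(M2))

# Homogeneity for rationals: f((n/m) M) = (n/m) f(M)
n, m = 2, 3
scaled = [np.sqrt(n / m) * K for K in M1]
assert np.isclose(f(scaled), (n / m) * f(M1))
```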
We next use the fact that a linear functional can be expressed by means of an inner product. This enables us to derive a new probability rule using our frame function, and also gives the appropriate form for the matrix representation of a process.
First, consider that because each convex space $CP^X$ contains a basis of the entire linear space $L^X$, X = A, B, ..., the frame function f can be extended by linearity to a multilinear function on the spaces $L^A$, $L^B$, $L^C$, .... Again by linear extension, this defines a unique linear function on the product space $L^A \otimes L^B \otimes \cdots$. Next, it is easy to show that a natural inner product between any two linear maps $\mathcal{M}^A, \mathcal{N}^A \in L^A$ is defined as follows (see the appendix for details):
$$\langle \mathcal{M}^A, \mathcal{N}^A \rangle := \sum_{i=1}^{d^2} \langle \mathcal{M}^A(\sigma_i), \mathcal{N}^A(\sigma_i) \rangle_{HS}, \qquad (10)$$
where $\{\sigma_i\}_{i=1}^{d^2}$ is a Hilbert-Schmidt basis for the d-dimensional input space: $\mathrm{Tr}[\sigma_i \sigma_j] = \delta_{ij}$. One can also represent this inner product in a more convenient (and familiar) form by representing the CP maps associated to each region as Choi-Jamiolkowski (CJ) matrices [49,50]. Recall that a CP map $\mathcal{M}^A : \mathcal{L}(\mathcal{H}^{A_I}) \to \mathcal{L}(\mathcal{H}^{A_O})$ associated to a region A, where input and output spaces are the spaces of linear operators over the input and output Hilbert spaces $\mathcal{H}^{A_I}$, $\mathcal{H}^{A_O}$, respectively, can be represented as a matrix (see footnote 5):
$$M^A := \left[\mathcal{I} \otimes \mathcal{M}^A\big(|\mathbb{1}\rangle\rangle\langle\langle\mathbb{1}|\big)\right]^T, \qquad |\mathbb{1}\rangle\rangle := \sum_{j=1}^{d_{A_I}} |j\rangle|j\rangle,$$
where T denotes transposition in a chosen basis and $\{|j\rangle\}_{j=1}^{d_{A_I}}$ is an orthonormal basis in $\mathcal{H}^{A_I}$. We show in the appendix that the inner product (10) can be expressed as
$$\langle \mathcal{M}^A, \mathcal{N}^A \rangle = \mathrm{Tr}\big[(M^A)^\dagger N^A\big],$$
and it is independent of the choice of Hilbert-Schmidt basis. This inner product determines an isomorphism between elements of $L^A \otimes L^B \otimes L^C \otimes \cdots$ and linear functionals on the same space. We can thus define a trace rule that allows one to determine the joint probability for a set of CP maps, one for each region:
$$f(\mathcal{M}^A, \mathcal{M}^B, \ldots) = \mathrm{Tr}\big[W^{AB\ldots}_f (M^A \otimes M^B \otimes \cdots)\big], \qquad (13)$$
where $W^{AB\ldots}_f$ is the CJ representation of the linear map that uniquely defines f, called the process matrix. (In the following, we will drop the subscript f.) Positivity and normalisation of the frame function, equations (5) and (6) respectively, impose constraints on the operators W that define valid processes. The set of process matrices, together with the expression (13) for the frame function, defines the Quantum Process Rule. As discussed in [20], the set of matrices characterised by positivity, equation (5), is further restricted if one assumes that local operations can be extended to act on additional multipartite quantum states shared among the regions. The overall result can be summarised as follows:

Theorem 2. Given a set of regions X = A, B, ... where arbitrary quantum operations can be performed, any instrument non-contextual probability assignment, expressed through a frame function as per definition 1, must be given by the Quantum Process Rule, equation (13), where the process matrix W satisfies the conditions
$$\mathrm{Tr}\big[W (M^A \otimes M^B \otimes \cdots)\big] = 1 \quad \text{for all CPTP maps } \mathcal{M}^A, \mathcal{M}^B, \ldots, \qquad (15)$$
$$\mathrm{Tr}\big[W (M^A \otimes M^B \otimes \cdots)\big] \geq 0 \quad \text{for all CP maps } \mathcal{M}^A, \mathcal{M}^B, \ldots. \qquad (16)$$
If one additionally assumes that each operation in region X can be extended to act on an additional input space $\mathcal{H}^{X'_I}$, with an arbitrary multipartite state $\rho^{A'_I B'_I \ldots}$ shared across the regions, then the process matrix must be positive semidefinite, $W \geq 0$, a strictly stronger condition than equation (16).
Property (15) is the CJ representation of the trace-preserving condition; therefore, the normalisation constraint says that CPTP maps can be performed with unit probability. The resulting constraint on process matrices differs from the analogous one for density matrices, $\mathrm{Tr}\,\rho = 1$. In addition to an analogous affine constraint, $\mathrm{Tr}\,W = d_O$, with $d_O$ the product of all output dimensions, W has to satisfy further linear constraints. We refer to appendix B of [42] for an explicit characterisation of such constraints.
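A numerical sketch can make the trace rule concrete. The following is my own illustration, assuming the CJ convention $M = [\mathcal{I} \otimes \mathcal{M}(|\mathbb{1}\rangle\rangle\langle\langle\mathbb{1}|)]^T$ and a process matrix of the form $W = \rho \otimes |\mathbb{1}\rangle\rangle\langle\langle\mathbb{1}|$, which describes a state ρ entering region A and an identity channel from A's output to B's input (see, e.g., [22]). The trace rule then reproduces the sequential Born-rule probability $\mathrm{Tr}[E\, K \rho K^\dagger]$:

```python
import numpy as np

d = 2
rho = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)   # state entering A
K = np.array([[1, 0], [0, 0]], dtype=complex)             # Kraus op of A's CP map
E = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)     # B's effect (|+><+|)

def cj(apply_map, d_in):
    """CJ matrix [I (x) M(|1>><<1|)]^T for the given map."""
    d_out = apply_map(np.eye(d_in, dtype=complex)).shape[0]
    C = np.zeros((d_in * d_out, d_in * d_out), dtype=complex)
    for j in range(d_in):
        for k in range(d_in):
            Ejk = np.zeros((d_in, d_in), dtype=complex)
            Ejk[j, k] = 1
            C += np.kron(Ejk, apply_map(Ejk))
    return C.T

M_A = cj(lambda X: K @ X @ K.conj().T, d)                 # on A_I (x) A_O
M_B = cj(lambda X: np.array([[np.trace(E @ X)]]), d)      # on B_I (equals E)

# Process matrix: rho into A_I, identity channel from A_O to B_I
vec_id = sum(np.kron(np.eye(d)[:, [j]], np.eye(d)[:, [j]]) for j in range(d))
W = np.kron(rho, vec_id @ vec_id.T)                       # on A_I (x) A_O (x) B_I

prob_process = np.real(np.trace(W @ np.kron(M_A, M_B)))
prob_sequential = np.real(np.trace(E @ K @ rho @ K.conj().T))
assert np.isclose(prob_process, prob_sequential)
```

The single trace expression thus packages what would otherwise require applying the Born rule, updating the state, and applying the Born rule again.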

Recovering the state update and Born rule
Let us recapitulate the rationale so far: it was shown in [30] that if we accept the structure of quantum measurements, we can identify quantum probabilities as the most general non-contextual probability assignments. Whereas this approach only considers a single measurement/event (or at most measurements on separate quantum systems), in the quantum process approach outlined above we derive a general rule to assign joint probabilities to an arbitrary number of events. The ordinary Born rule is thus recovered from the general one in the case where a single region is considered, in which case instruments reduce to POVMs and process matrices reduce to density matrices [20].
We are in particular interested in the situation where two consecutive measurements are performed on a single quantum system. Gleason-type derivations of quantum probabilities do not tell us how to assign joint probabilities to two such events: one must introduce an additional ingredient, the state update rule. If the statistics for the first measurement are described by a density matrix ρ, and the first measurement is described by a CP map $\mathcal{M}^A$, one calculates the probabilities for the second measurement, given that the outcome of the first is known, by applying the Born rule to the updated state
$$\rho' = \frac{\mathcal{M}^A(\rho)}{\mathrm{Tr}[\mathcal{M}^A(\rho)]}. \qquad (17)$$
Equation (17) is seen as a quantum analogue of classical knowledge update. Within the quantum process framework, this is more than an analogy: the update rule is derived from the joint probability assignment.
To make the argument rigorous, we should remark again that the quantum frame function is not a normalised probability measure over the entire space of potential events. Formally, the frame function defines a probability for observing CP maps $\mathcal{M}^X$, parametrised by the choice of instruments $\mathcal{I}^X$ in the regions X = A, ...:
$$P(\mathcal{M}^A, \mathcal{M}^B, \ldots \mid \mathcal{I}^A, \mathcal{I}^B, \ldots) := f(\mathcal{M}^A, \mathcal{M}^B, \ldots) \quad \text{for } \mathcal{M}^X \in \mathcal{I}^X. \qquad (18)$$
(If a map $\mathcal{M}^X$ does not belong to the corresponding instrument $\mathcal{I}^X$, probability zero is assigned.) Even though the inclusion of instruments is necessary to define (18) as a classical probability, we will omit them in the following for notational convenience.
Expression (18) defines an ordinary, classical probability measure, which lets us use all the machinery of classical probability theory. In particular, the conditional probability to observe $\mathcal{M}^B$ in region B, given that $\mathcal{M}^A$ is observed in region A, can be calculated from the joint probability distribution:
$$P(\mathcal{M}^B \mid \mathcal{M}^A) = \frac{P(\mathcal{M}^A, \mathcal{M}^B)}{\sum_{\mathcal{M}^B \in \mathcal{I}^B} P(\mathcal{M}^A, \mathcal{M}^B)}.$$
Relevant to the ordinary state update rule is the case where A temporally precedes B and the evolution between the two events is trivial. This scenario is described by the process matrix (see, e.g., [22])
$$W = \rho^{A_I} \otimes |\mathbb{1}\rangle\rangle\langle\langle\mathbb{1}|^{A_O B_I},$$
where ρ is the density matrix describing the input state of region A and $|\mathbb{1}\rangle\rangle\langle\langle\mathbb{1}|$ is the CJ representation of an identity channel from $A_O$ to $B_I$. A straightforward calculation shows that, in this case, the conditional process matrix reduces to
$$W_{|\mathcal{M}^A} = \frac{\mathcal{M}^A(\rho)}{\mathrm{Tr}[\mathcal{M}^A(\rho)]},$$
which is the process matrix description of region B receiving a state described by the density matrix $\mathcal{M}^A(\rho)/\mathrm{Tr}[\mathcal{M}^A(\rho)]$.
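This recovery of the update rule by conditioning can be verified directly. In the following sketch (my own minimal example, with arbitrarily chosen state and measurements), the conditional probability computed from the joint distribution over two consecutive qubit measurements coincides with the Born rule applied to the updated state, so 'collapse' appears as nothing more than conditioning:

```python
import numpy as np

rho = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)

# First measurement: computational basis (CP maps rho -> K_a rho K_a^dag)
kraus = [np.diag([1, 0]).astype(complex), np.diag([0, 1]).astype(complex)]
# Second measurement: POVM in the |+>/|-> basis
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
povm = [plus, np.eye(2) - plus]

# Joint probabilities for the two consecutive events
P = np.array([[np.real(np.trace(E @ K @ rho @ K.conj().T))
               for E in povm] for K in kraus])
assert np.isclose(P.sum(), 1.0)

# Conditioning on outcome a of the first measurement ...
a = 0
P_cond = P[a] / P[a].sum()

# ... equals the Born rule applied to the updated state M_a(rho)/Tr[M_a(rho)]
rho_upd = kraus[a] @ rho @ kraus[a].conj().T
rho_upd /= np.trace(rho_upd)
born = [np.real(np.trace(E @ rho_upd)) for E in povm]
assert np.allclose(P_cond, born)
```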
Conditioning versus updating

Some clarification at this point might be helpful regarding the distinction between probabilistic conditioning and knowledge update (see, e.g., [52] for a more detailed discussion). In classical probability theory, the rules for probabilistic conditioning simply follow from the axioms of the theory. These axioms can be assumed, as in Kolmogorov's approach, or derived from requirements on how one should consistently assign degrees of belief, for example through Dutch book arguments [53] or theorems like that of Savage [54]. A consequence of probabilistic conditioning is Bayes' theorem for inverting conditional probabilities:
$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}.$$
Importantly, this rule does not involve updating knowledge given new information: all information is contained in the joint probability P(A, B), which is unchanged by the application of the theorem. Knowledge update, on the other hand, refers to the process of updating one's belief following the acquisition of new data. This process is not encoded in the axioms of probability theory and requires extra assumptions. For example, if one assumes that all data values that were not observed can be discarded, one arrives at Bayes' rule:
$$P_{\mathrm{new}}(A) = P(A \mid B = b),$$
where b is the observed value of the variable B. Implicit in this rule is the counterfactual assumption that values that are not observed are known to be false. Thus, applying Bayes' theorem, one arrives at the standard form for Bayesian updating:
$$P_{\mathrm{new}}(A) = \frac{P(B = b \mid A)\, P(A)}{P(B = b)}.$$
In the quantum case, such counterfactual assumptions are known to be problematic [55] (see footnote 6). In our approach, no such assumption is necessary, because the primitive object is the joint probability. From this perspective, the 'state update' rule is not an update at all, but rather an application of probabilistic conditioning. This insight distinguishes this approach from other attempts to leverage Bayesian arguments to justify an informational interpretation of quantum theory [21,23].
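A classical toy example (my own, for illustration) makes the distinction explicit: Bayes' theorem merely re-expresses a fixed joint distribution P(A, B), while Bayesian updating replaces the prior P(A) by the conditional P(A | B = b) after observing b, which is an extra assumption beyond the probability axioms:

```python
import numpy as np

P_joint = np.array([[0.30, 0.10],   # P(A=a, B=b), rows index A, columns index B
                    [0.20, 0.40]])
P_A = P_joint.sum(axis=1)           # marginal P(A)
P_B = P_joint.sum(axis=0)           # marginal P(B)

# Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B); nothing is updated,
# the joint distribution is untouched
P_B_given_A = P_joint / P_A[:, None]
P_A_given_B = (P_B_given_A * P_A[:, None]) / P_B[None, :]
assert np.allclose(P_A_given_B, P_joint / P_B[None, :])

# Bayesian updating: after observing B = b, adopt P_new(A) = P(A | B = b)
b = 0
P_new = P_A_given_B[:, b]
assert np.isclose(P_new.sum(), 1.0)
```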

Discussion
In this work we have shown that it is possible to use a Gleason-type approach to derive a quantum probability rule that subsumes both the Born rule and the state update rule. By using the structure of local quantum operations and a reasonable non-contextuality assumption we have derived both the new rule and the appropriate object to represent the arbitrary background structure, or process. The central feature of the probability rule is linearity. In contrast to previous derivations, where linearity was assumed [17,20], here we have shown that it can be derived from the assumption of non-contextuality alone (see footnote 7).
Our demonstration that the state update, or 'collapse' rule can be regarded as non-fundamental offers a new perspective on a variety of foundational questions. In particular, informational interpretations of wavefunction collapse can now be given a rigorous foundation: state update can be viewed as a case of classical probabilistic conditioning. The work here also presents the opportunity to extend no-go theorems for non-contextual hidden variable models to scenarios involving more general causal structures [56].
Finally, a key advantage of the approach presented here is that it does not presuppose any a priori distinction between space-like and time-like separated events. Therefore, it avoids conceptual difficulties associated with the non-covariant nature of the state update rule. It is thus a promising direction for research aimed at developing a fully relativistic version of the formalism that encodes space-time symmetries.

Footnote 6: That is not to say that one cannot apply counterfactual reasoning successfully in the case of single quantum contexts. Indeed, it is only when one extends the requirement to a joint probability over all contexts that such counterfactual reasoning becomes problematic.

Footnote 7: Obtaining linearity of the probability rule via an extension of Gleason's theorem was suggested, but not proved, in the supplementary methods of [20].

Acknowledgments
Award (DE170100712). This publication was made possible through the support of a grant from the John Templeton Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation. We acknowledge the traditional owners of the land on which the University of Queensland is situated, the Turrbal and Jagera people.

Appendix. Inner product for linear maps
Here we construct the inner product on the space of linear maps and derive its CJ representation. Recall that, given an inner product $\langle \psi | \phi \rangle$ on a Hilbert space $\mathcal{H}$ and an arbitrary basis $\{e_j\}$ that is orthonormal with respect to this product, one can define the Hilbert-Schmidt inner product on the space of operators over $\mathcal{H}$:
$$\langle A, B \rangle_{HS} := \mathrm{Tr}[A^\dagger B] = \sum_j \langle A e_j, B e_j \rangle,$$
where we momentarily abandon the Dirac notation and represent explicitly the action of an operator on a vector as $A e_j$. Note that the Hilbert-Schmidt inner product does not depend on which basis is used in its definition, as long as it is orthonormal with respect to the underlying Hilbert space inner product.
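The basis independence noted above can be confirmed numerically. This quick check (my own, with random operators) expands the Hilbert-Schmidt inner product in two different orthonormal operator bases for qubit operators, the matrix-unit basis and the normalised Pauli basis, and recovers the same value $\mathrm{Tr}[A^\dagger B]$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

def hs(X, Y):
    """Hilbert-Schmidt inner product <X, Y> = Tr[X^dag Y]."""
    return np.trace(X.conj().T @ Y)

def unit(j, k):
    E = np.zeros((2, 2), dtype=complex)
    E[j, k] = 1
    return E

matrix_units = [unit(j, k) for j in range(2) for k in range(2)]
paulis = [m / np.sqrt(2) for m in (
    np.eye(2, dtype=complex),
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex))]

# Expanding in either orthonormal operator basis reproduces Tr[A^dag B]
for basis in (matrix_units, paulis):
    val = sum(hs(A, s) * hs(s, B) for s in basis)
    assert np.isclose(val, hs(A, B))
```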
We move a step further and, based on the Hilbert-Schmidt inner product, define an inner product for the space L A of linear maps. For this purpose, we select a basis of Hermitian matrices for the input space that is orthonormal with respect to the Hilbert-Schmidt product (called Hilbert-Schmidt basis):