On probabilities in quantum mechanics

This is an attempt to clarify certain concepts related to a debate on the interpretation of quantum mechanics between Andrei Khrennikov on the one side and Blake Stacey and Rüdiger Schack on the other. Central to this debate is the notion of quantum probabilities. I first take up the probability concept in the QBist school, and then give my own arguments for the Born formula for calculating quantum probabilities. In that connection I also sketch some consequences of my approach to the foundation and interpretation of quantum theory. I discuss my general views on QBism as a possible alternative interpretation before giving some final remarks.


Introduction
The current discussions on the foundation and interpretation of quantum mechanics can be rather intense. Quantum probabilities, as calculated by the Born formula, are central in many of these discussions. As a background, it may be useful to look at the various derivations of the Born formula; see Campanella et al. (2020) for some references. My own derivation is given in Helland (2021) and is related to my approach to quantum foundations, which has now reached its assumed final form in Helland (2024a). This derivation of the Born rule will be repeated below.
This approach to quantum mechanics is a completely new one. It should be regarded as rather independent of the history of the field, and it is given in a series of articles and books referred to in the reference list below. For the complete mathematical details, I refer again to Helland (2024a). In Helland (2024b) the theory is formulated in an axiomatic way, without all the mathematics, and a large set of consequences of the theory is given.
In my opinion, the foundation of quantum mechanics and its interpretation should be sharply linked together. This link was first discussed in Helland (2019).
The present article takes up a special aspect of my theory: my views on quantum probabilities, and a derivation of the Born rule from a set of assumptions.
The Born formula is the central formula in quantum mechanics. It gives the basis for calculating quantum probabilities.
I will first argue that the simplest version of the Born formula for probabilities in quantum mechanics holds under the following conditions:
1) There is a fixed (physical) context.
2) We have two really different, discrete, related maximal accessible variables θ^a and θ^b in this context, and seek the probability distribution for θ^b, given some value of θ^a, that is, a pure state involving θ^a.
3) The likelihood principle of statistics holds.
4) There exists an inaccessible variable φ related to the mind(s) of the relevant observer(s) A with the following properties: a) θ^a and θ^b are functions of φ; b) as a model, φ can be imagined to be accessible to the mind of a real or imagined being, seen by A as superior and perfectly rational in relation to any question involving the variables θ^a and θ^b.
From these assumptions, the following formula will be derived below, partly following the derivation in Helland (2021):

P(θ^b = v^b_k | θ^a = u^a_j) = |⟨ψ^a_j | ψ^b_k⟩|².  (1)

Here, |ψ^a_j⟩ is the state vector associated with the event θ^a = u^a_j, and |ψ^b_k⟩ is the state vector associated with the event θ^b = v^b_k. From this simple version of the Born rule, other versions can be derived under natural assumptions.
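As a concrete check of formula (1), here is a minimal numerical sketch (my own illustration, not part of the derivation) for a spin-1/2 system, taking θ^a as the spin component along z and θ^b as the spin component along x:

```python
import numpy as np

# Eigenvector bases for two maximal accessible variables in C^2:
# theta^a = spin component along z, theta^b = spin component along x.
psi_a = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]     # z-basis
psi_b = [np.array([1.0, 1.0]) / np.sqrt(2),               # x-basis
         np.array([1.0, -1.0]) / np.sqrt(2)]

# Born probabilities P(theta^b = v_k | theta^a = u_j) = |<psi_j^a | psi_k^b>|^2
P = np.array([[abs(np.vdot(a, b)) ** 2 for b in psi_b] for a in psi_a])
print(P)              # each entry 0.5 for these two bases
print(P.sum(axis=1))  # each row sums to 1: a genuine probability distribution
```

Each row of P is the distribution of θ^b given one value of θ^a, as formula (1) prescribes.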
The argument for (1) goes in several steps. The basic notion is that of a theoretical variable, which may be a physical variable, but is also assumed to exist in the mind of an observer, or in the joint minds of a group of communicating observers. Theoretical variables may be accessible, that is, possible to measure accurately, or inaccessible. From a mathematical point of view, I only assume the following: if λ is a theoretical variable and θ = f(λ) for some function f, then θ is a theoretical variable; and if λ is accessible, then θ is accessible. (In my book Helland (2021) the theoretical variables were called epistemic variables or e-variables; in some of my articles, I have used the term conceptual variables. I apologize for this confusion.) Define the following partial ordering among the theoretical variables, and also among the accessible theoretical variables: θ is said to be 'less than or equal to' λ if θ = f(λ) for some function f. A basic assumption behind my theory is: there exists an inaccessible variable φ such that all the accessible variables considered can be seen as functions of φ. This assumption can easily be motivated in simple physical situations.
Using such assumptions together with some specific symmetry assumptions, essentially the complete quantum formalism is derived in Helland (2024a). It turns out that in the finite-dimensional case, the symmetry assumptions can be dropped, so the Hilbert space theory follows from simple assumptions, the basic one being that in the given context there exist two really different maximal accessible (complementary) theoretical variables. Two maximal variables are said to be really different if they are not bijective functions of each other.
The theory in Helland (2024a) in the discrete case can be summarized as follows. Make the assumptions above. Then there exists a Hilbert space H, and all the accessible theoretical variables θ in the situation have Hermitian operators A_θ associated with them. The eigenvalues of A_θ are the possible values of θ. The accessible variable θ is maximal as such if and only if all eigenvalues are simple. State vectors can be seen as eigenvectors of some physically meaningful operator. In this sense my axioms (see also Helland, 2024b) imply most elements of the traditional formal quantum theory.
What is left to prove after this is the Born formula and the Schrödinger equation. The first issue will be approached here; for the second, some arguments are given in Helland (2021).
As a special application of the partial ordering defined above, all accessible variables are dominated by φ. Therefore, using Zorn's lemma, maximal accessible variables with respect to the partial ordering always exist, that is, variables that are maximal among the accessible ones. Physical examples can easily be given, for instance the spin component of a particle in some given direction. People who do not accept Zorn's lemma, which is equivalent to the axiom of choice, may take the existence of maximal accessible variables as a separate axiom.
It should be admitted that all this gives a special version of quantum theory: all pure state vectors are assumed to be eigenvectors of some meaningful physical operator. This implies a restriction of the superposition principle, but one can show that certain entangled states are permitted. On the positive side, this restriction leads to simple discussions of several so-called quantum paradoxes, and the whole approach also seems to give connections to aspects of relativity theory and of quantum field theory; see Helland (2023b) and Helland and Parthasarathy (2024).
As a referee pointed out, my approach is also related to the large field of quantum-like models in areas outside physics. In this connection one should first mention the important work of Andrei Khrennikov and collaborators; see Khrennikov (2023), Haven and Khrennikov (2013, 2016), Ozawa and Khrennikov (2021) and the article collection Veloz et al. (2023), where further references can be found. Much of this is related to quantum cognition and quantum decision theory, where there is a large recent literature; see Pothos and Busemeyer (2022) and references there.
In Ozawa and Khrennikov (2021) the notion of a quantum instrument is defined and applied to study the order effect in a particular quantum-like model. In the finite-dimensional case, an instrument may be defined through a set of matrices {M_x} satisfying ∑_x M†_x M_x = I, where x denotes the observed outcome of a measurement, and this gives a more general version of the Born formula. This notion is also studied in other articles, including Barndorff-Nielsen et al. (2003), where it implies interesting connections to statistical inference. In the present article I want to limit myself to the simplest possible versions of the Born rule, so quantum instruments are not introduced.
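The completeness condition for an instrument can be illustrated numerically. The following sketch uses made-up Kraus matrices on a qubit, not taken from the cited papers; it verifies ∑_x M†_x M_x = I and computes the resulting outcome probabilities p(x) = ⟨ψ| M†_x M_x |ψ⟩ for a state |ψ⟩:

```python
import numpy as np

# Hypothetical instrument on a qubit: two Kraus matrices M_0, M_1
# chosen so that the completeness condition sum_x M_x^dagger M_x = I holds.
M0 = np.sqrt(0.7) * np.array([[1, 0], [0, 1]], dtype=complex)  # identity part
M1 = np.sqrt(0.3) * np.array([[0, 1], [1, 0]], dtype=complex)  # bit-flip part

completeness = M0.conj().T @ M0 + M1.conj().T @ M1
assert np.allclose(completeness, np.eye(2))  # sum_x M_x^dagger M_x = I

# Generalized Born rule: p(x) = <psi| M_x^dagger M_x |psi>
psi = np.array([1, 0], dtype=complex)
p = [np.vdot(psi, M.conj().T @ M @ psi).real for M in (M0, M1)]
print(p)  # [0.7, 0.3]; the probabilities sum to 1 by completeness
```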
The motivation for writing (a shorter version of) the present article was a debate between Khrennikov (2024) and two physicists who define themselves as QBists, arguing for a special interpretation of quantum probabilities called QBism, founded by Chris Fuchs; see Caves et al. (2002) and DeBrota et al. (2020a,b, 2021). Briefly, the QBists, at least in an important part of the literature, regard themselves as subjective Bayesians, a philosophy which in the statistical literature is associated with de Finetti (1972) and Savage (1972). There are very many variants of Bayesianism, as discussed by, for instance, Good (1983) and von Mises (1981). They are all related to Bayes' formula, which involves prior probabilities, but the basis of these priors is discussed by many authors. A modern account of Bayesianism is given by Bernardo and Smith (2009). Very few statisticians are complete subjective Bayesians today. For a diametrically opposite foundation of statistical inference, not involving priors at all, see Schweder and Hjort (2002).
Taking a theorem by Ozawa (2019) as his point of departure, Khrennikov (2024) recently criticised the QBist interpretation of quantum mechanics. Based on an earlier version of Khrennikov's paper, this critique has been countered by Stacey (2023) and Schack (2023). The discussion is centered around the probability concept. One main purpose of this article is to look at this discussion from my own standpoint. I should also mention that this discussion is taken up from a slightly different point of view in Zwirn (2024).

Quantum probabilities according to the QBists
As a new interpretation of quantum theory, QBism was founded by Chris Fuchs more than 10 years ago and has since been developed by a number of authors. Some of the most important articles on the theory are given in the references of Stacey (2023). As in that article, I will not go into details of the theory, but concentrate on the probability concept. Here is a citation from op. cit.: 'According to this school of thought, a probability for an event is nothing more or less than a gambling commitment, a valuation by a specific agent of how much that agent would stake on that event occurring.' He continues by referring to Khrennikov's introduction of certain mathematical entities to describe the situation where two remote observers measure the same variable: operators A, M_1 and M_2, a state vector |ψ⟩ for the system and another |ξ⟩ for the environment (Khrennikov uses |ξ_1⟩ and |ξ_2⟩ for the states of the measurement apparatuses), plus a one-parameter family of unitaries U(t) to represent time evolution.
'Any probability extracted from combining these quantities is necessarily, just like any other probability in personalist Bayesianism, the possession of the agent who commits to it. So there is no way to mix the ingredients A, M_1, M_2 and so forth to arrive at a conclusion that the personal experiences of two agents will always agree, or that they will always disagree, or anything in between.' I will approach these statements, and any other statements made by physicists, from the point of view of a statistician. The whole science of statistics is built upon probabilities. Bayesianism is one school within statistics, but there are also other schools. As mentioned in the introduction, there are many versions of Bayesianism. The QBists rely on a personalist or subjective version.
As a side remark, statistics is a science that can be explained to intelligent people using fairly everyday terms. One of my own goals is that some day we will be able to do the same with quantum physics. Helland (2024b) is a beginning.
So concentrate first on 'a probability for an event is nothing more or less than a gambling commitment'. Taken to its extreme, QBists seem to think in terms of gambling all the time; every time we make a decision, we make an internal bet. I will claim that ordinary people neither think nor act in this way. We go through life making decision after decision, and very rarely do we think in terms of gambling when we make these decisions. In my opinion, the same holds when we make statements of probabilities of events.
As a referee points out, QBists talk about gambling in some generalized sense, not necessarily expressible in monetary terms. This is true. I agree with much of the QBist philosophy, but not all. In particular, we agree that a quantum state is a state of knowledge. In my theory, this knowledge is attached to an observer/agent or to a communicating group of agents.
A major difference between my views and the QBist views on probability, see Caves et al. (2002), is that the QBists rely on a Dutch book argument connected to the agent himself, while I only assume that the agent has ideals that he thinks of as rational, as made precise by a Dutch book argument.
Classically, the concept of probability may have many foundations. Probabilities may be based upon symmetries, like when throwing a die or evaluating an opinion poll; they can be based upon subjective judgement; or they can be based on much data and long experience, like when a meteorologist makes a probability statement. Only in the subjective case is it possible at all to talk about some internal gambling procedure. I will claim that even in that case, people tend to make probability statements without having any bets in mind.
In Caves et al. (2002), arguments for quantum probabilities are given by using a Dutch book argument for an agent, de Finetti's representation theorem for mixed quantum states, and Gleason's theorem. The weak point, as I see it, is the assumption that the agent is rational, as expressed by a Dutch book argument.
We are in no way always rational when making our decisions. The decisions are often made by using an intuition that has been formed by a long history, based on experiences and contact with other people. During this process, we have our limitations, as described from a quantum theory background in Helland (2022b, 2023d).
So, in my view, even in the case of quantum probabilities, the QBist probability concept is unsatisfactory. For quantum probabilities, we have to take a closer look at the background for the Born theorem. The QBists' view on the Born rule will be discussed later. My own background for this rule is described below, so in a number of sections I will now give my arguments. They are based upon my point of view as a statistician. One basic goal of my research has been to try to build a bridge between the statistical culture and the quantum mechanical culture, and a part of this goal has been to find a foundation of quantum theory that can be explained to scientists like statisticians. This is very difficult with the existing formalism, but I think it is easier using my alternative foundation.
I will come back to the QBist views of probabilities in Sections 8 and 9.
The likelihood principle

The likelihood of the data is defined as L(θ|z) = p(z|θ), the probability (density) of the data seen as a function of the parameter θ.
Statistical inference builds upon several principles. One of these is the likelihood principle. The principle says roughly that all relevant information in some experiment is contained in this likelihood. The principle can be derived from other principles, see Helland (2021), but it can also be argued for independently. A version of the likelihood principle will be used below as part of the motivation behind Born's formula.
The Generalized Likelihood Principle. Consider two experiments with equivalent contexts τ, and assume that θ is the same full parameter in both experiments. Suppose that two observations z*_1 and z*_2 have proportional likelihoods in the two experiments, where the proportionality constant c is independent of θ. Then these two observations produce the same experimental evidence on θ in this context.
The term 'experimental evidence' is here left undefined, and can be specified in any desirable direction; see a closer discussion in Helland (2021).
Two contexts are said to be equivalent if one can establish a one-to-one function between all variables involved.It is important for my development that the context is kept fixed.This aspect makes the generalized likelihood principle weaker than the principle as formulated in the literature, in particular in Berger and Wolpert (1988).
This avoids paradoxes like the one that the ordinary likelihood principle seems to imply in the following situation.
Example. Suppose that s_1, s_2, ... are independent, identically distributed (iid) variables with P(s = 1) = θ and P(s = 0) = 1 − θ; in statistical language, iid Bernoulli variables with parameter θ. In experiment E_1, a fixed sample size of ten observations is decided upon, and the important summary observation (in statistical language, the sufficient statistic) t_1 = ∑_{i=1}^{10} s_i turns out to be t_1 = 8. In experiment E_2, it is decided to take observations until a total of 2 zeroes has been observed. Then assume that the sufficient statistic t_2 = ∑_i s_i also turns out to take the value 8. The two likelihoods are proportional, but the contexts are different, so the intuition that the two experiments may lead to different inference on θ is supported by my version of the likelihood principle. For further discussion of this example, see Berger and Wolpert (1988) and references there.
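The proportionality of the two likelihoods in this example can be verified directly. The following sketch computes the binomial likelihood of E_1 and the negative binomial likelihood of E_2 and checks that their ratio does not depend on θ:

```python
from math import comb

def binomial_lik(theta, n=10, t=8):
    # Experiment E1: fixed n = 10 trials, t = 8 ones observed
    return comb(n, t) * theta**t * (1 - theta)**(n - t)

def negbinomial_lik(theta, r=2, t=8):
    # Experiment E2: sample until r = 2 zeroes are seen; t = 8 ones observed,
    # so there are t + r = 10 trials in all and the last trial is a zero
    return comb(t + r - 1, t) * theta**t * (1 - theta)**r

# The ratio C(10,8)/C(9,8) = 45/9 = 5 is constant in theta,
# so the two likelihoods are proportional
for theta in (0.2, 0.5, 0.8):
    print(binomial_lik(theta) / negbinomial_lik(theta))  # 5.0 every time
```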
The introduction of a context makes my formulation of the likelihood principle far less controversial than the ordinary formulation. According to the ordinary principle, the way data are obtained is irrelevant to inference; all information is contained in the likelihood. Thus sampling plans, randomization procedures, and stopping rules are irrelevant according to a common interpretation of the ordinary principle. Furthermore, common frequentist concepts like bias, confidence coefficients, and levels and powers of statistical tests are irrelevant, as they depend on the sample space, not only on the actual observations. In my formulation, all these concepts are related to the context. Also Bayesian priors, if needed, are contained in the context. Maximum likelihood estimation cannot be derived from the likelihood principle, but is obviously permissible as a method of obtaining reasonable proposals for estimates in general.
An important special case of the generalized likelihood principle is when the proportionality constant c is equal to 1. Then the two observations z*_1 and z*_2 are assumed to have equal likelihoods. Again, an important special case is when the two experiments are identical. A consequence of the generalized likelihood principle is then that all experimental evidence, given the context, is a function of the likelihood of the experiment, i.e., is contained in the likelihood function.
From this point of view, the situation in quantum mechanics is similar to that in ordinary statistics. Here, in a given situation, we may have a model for the data z depending upon the context τ and the theoretical variable of interest θ, expressed by a point probability or probability density p(z|τ, θ). Thus, even though θ may be discrete, from a statistical point of view it acts as a parameter in the model. Any additional parameter η in such a model will be assumed known from earlier experiments of the same type, and may be included in the context. This gives a unique likelihood L(θ|z, τ) = p(z|τ, θ). And also in this situation, the relevant discussion in Helland (2021) seems to imply that the generalized likelihood principle above holds true.

The focused likelihood principle
In this section I assume a discrete quantum formulation. One can for instance think of a spin component in a fixed direction to be determined. I assume a measurement situation where the data contain some noise, hence a likelihood for the discrete parameter θ, given data z, of the form L(θ|z) = p(z|θ), where p is the probability density or point probability of the data.
Assume now that the quantum mechanical system is prepared in some state and that we want to do an experiment related to the unknown theoretical variable θ^b. Given the focused question b, the theoretical variable θ^b plays a role similar to a parameter in statistical inference. Inference can be done by preparing many independent units in the same state. Inference is then made from data z^b. All inference theory that one finds in standard statistical texts like Lehmann and Casella (1998) applies. In particular, the concepts of unbiasedness, equivariance, average risk optimality, minimaxity and admissibility apply. None of these concepts are much discussed in the physical literature, first because measurements there are often considered perfect, at least in elementary texts, and secondly because, when measurements are considered in the physical literature, they are mostly discussed in other terms.
Whatever kind of inference we make on θ^b, we can take as a point of departure the statistical model and the generalized likelihood principle of the previous section. Hence, after an experiment is done, and given some context τ, all evidence on θ^b is contained in the likelihood p(z^b|τ, θ^b), where z^b is the data relevant for inference on θ^b, also assumed discrete. This is summarized in the likelihood effect

F^b(z^b, τ) = ∑_j p(z^b | τ, θ^b = u^b_j) |b; j⟩⟨b; j|,  (2)

where the pure state |b; j⟩ corresponds to the event θ^b = u^b_j.
Interpretation of the likelihood effect F^b(z^b, τ): (1) We have posed some inference question on the accessible theoretical variable θ^b. (2) We have specified the relevant likelihood for the data. The question itself and the likelihood for all possible answers of the question, formulated in terms of state vectors, can be recovered from the likelihood effect.
The likelihood effect is closely connected to the concept of a positive operator-valued measure (POVM); see a discussion in Helland (2021). Since the focused question assumes discrete data, each likelihood is in the range 0 ≤ p ≤ 1. In the quantum mechanical literature, an effect is any operator with eigenvalues in the range [0, 1]. Some qualifications must be made relative to the above interpretation, however, if we want to be precise. We have the freedom to redefine the theoretical variable in the case of coinciding eigenvalues in the likelihood effect, that is, if p(z^b|τ, θ^b = u^b_j) = p(z^b|τ, θ^b = u^b_l) for some j ≠ l. An extreme case is the likelihood effect F^b(z^b, τ) = I, where all the likelihoods are 1, that is, the probability of z is 1 under any considered model. One could have defined the likelihood effect from appropriate eigenvalue spaces, but for the following mathematical result, the definition (2) is convenient.
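A likelihood effect of the form (2) can be illustrated numerically. The sketch below, with arbitrary illustrative likelihood values, builds F^b for a three-valued variable and confirms that its eigenvalues, which are the likelihoods themselves, lie in [0, 1], so that F^b is an effect:

```python
import numpy as np

# Likelihoods p(z^b | tau, theta^b = u_j) for a three-valued variable,
# evaluated at one observed z^b (illustrative numbers only)
liks = np.array([0.9, 0.4, 0.1])

# Orthonormal eigenvectors |b; j> -- here the standard basis of C^3
basis = np.eye(3)

# Likelihood effect F^b(z^b, tau) = sum_j p_j |b; j><b; j|, as in Eq. (2)
F = sum(p * np.outer(v, v) for p, v in zip(liks, basis))

eigvals = np.linalg.eigvalsh(F)  # ascending order
print(eigvals)  # the likelihoods themselves, all in [0, 1]: an effect
```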
We have the following result on the likelihood effects:

Proposition 1. Let two experiments b and c be given together with two data points z^b and z^c of these experiments. Assume that b and c are such that

F^b(z^b, τ) = F^c(z^c, τ).  (3)

Then we can order the states such that

(1) p(z^b|τ, θ^b = u^b_j) = p(z^c|τ, θ^c = u^c_j) for all j;
(2) |b; j⟩⟨b; j| = |c; j⟩⟨c; j| for all j.

On the other hand, if (1) and (2) are satisfied, then (3) holds.
The last part is fairly trivial. The direct part is proved in Appendix 1. Return now to the generalized likelihood principle of the previous section. Recall that this principle is fairly reasonable in our setting, where we condition upon the context τ. In statistics, the likelihood principle says the following: if two experiments have proportional likelihoods, with constant of proportionality independent of the parameter, they produce the same experimental evidence about the parameter. Here experimental evidence is left undefined. In my approach to quantum mechanics, where one focuses on a specific question, we must in addition demand that this focused question is the same; that is, the set of corresponding projections must be the same.
The following principle follows:

The Focused Generalized Likelihood Principle (FGLP). Consider two potential experiments b and c in some setting with equivalent contexts τ, and assume that the inaccessible theoretical variable φ is the same in both experiments. Suppose that the two observations z^b_1 and z^c_2 have equal likelihood effects in the two experiments. Then

(A) The questions posed in the two experiments are equivalent in the sense that one can use the same Hilbert space H to describe the results of the experiments, and that the corresponding sets of eigenvector spaces (i.e., the orthogonal resolutions of the identity) are equal. This implies a one-to-one relation between the theoretical variables θ^b and θ^c.
(B) The two observations produce equivalent experimental evidence on the relevant theoretical variables in this context and given this question.
Proposition 2. The focused generalized likelihood principle follows from the generalized likelihood principle.
Proof. The Proposition is trivial if we know that the theoretical variables are the same in the two experiments. If not, we have a situation where Eq. (3) holds, and the equality of the projection operators after a suitable ordering follows from Proposition 1, (2). The eigenvalues u^b_k are all different, and similarly the eigenvalues u^c_k. These are the answers to the questions connected to θ^b and θ^c; the questions are equivalent since the sets of projection operators coincide. The conclusion (B) follows from Proposition 1, (1) and the ordinary (generalized) likelihood principle.
Remark: Strictly speaking, this proof uses the assumption that the likelihoods p(z^b|τ, θ^b = u^b_j) are different for different j, and similarly for z^c. This is in agreement with the arbitrariness of θ^b in relation to F^b discussed above. Below, in the proof of Born's formula, I will use the FGLP in the very special case of a perfect experiment, where observations can be taken to be equal to parameter values, and there this assumption is trivial.
Two contexts are considered equivalent if they are one-to-one functions of each other.The principle FGLP says that both the question posed and the experimental evidence are functions of the likelihood effect and the context of the experiment.

Rationality and experimental evidence
Throughout this section and the next one, I will consider a fixed context τ and a fixed epistemic setting in this context. The inaccessible theoretical variable is φ, and I assume that the accessible theoretical variables θ^b take a discrete set of values. Let the data behind the potential experiment connected to θ^b be z^b, also assumed to take a discrete set of values. I will assume throughout this section and the next one that an experimentalist B is in a context partly determined by the fact that he has previously performed a perfect experiment connected to a maximally accessible theoretical variable θ^a and obtained the answer θ^a = u_k, so that his state can be described by a Hilbert space H and a vector |a; k⟩ in H. For the background for such an assumption, see Helland (2024a,b), where the basic condition is that there, in connection to an agent or to a communicating group of agents in a given context, exist two really different maximal accessible (complementary) variables.
So let a single agent B be in this situation, and let all theoretical variables be attached to B, although he also has the possibility of receiving information from others through part of the context τ. He has the choice of doing different experiments b, and he also has the choice of choosing different models for his experiment through his likelihood p_B(z^b|τ, θ^b). The experiment and the model, hence the likelihood, should be chosen before the data are obtained. All these choices are summarized in the likelihood effect F^b, a function of the at present unknown data z^b. For use after the experiment, he should also choose a good estimator of θ^b, and he may also have to choose some loss function, but the principles behind these latter choices will be considered part of the context τ.
If B chooses to do a Bayesian analysis, the estimator should be based on a prior π_B(θ^b|τ). We assume that he is trying to be as rational as possible in all his choices, and that this rationality is connected to his loss function or to other criteria. What should be meant by experimental evidence, and how should it be measured? As a natural choice, let the experimental evidence that we are seeking be the posterior probability for some fixed value of θ^b, given the data. From the agent B's point of view this is given by

P(θ^b = u^b_j | z^b, τ) = π_B(u^b_j|τ) p_B(z^b|τ, θ^b = u^b_j) / ∑_i π_B(u^b_i|τ) p_B(z^b|τ, θ^b = u^b_i),

assuming the likelihood chosen by B and B's prior π_B for θ^b. Some Bayesians claim that their own philosophy is the only one which is consistent with the likelihood principle. For my own view on this, see Helland (2021). In a non-Bayesian analysis, we can let the concept of experimental evidence be tied to the confidence distribution, given the context; see Schweder and Hjort (2002). (Another non-Bayesian analysis is fiducial inference; see Hannig et al., 2016.) This may also give a conclusion in terms of a probability, in this case an epistemic probability, a concept that is further discussed in Helland (2021). Also in such an analysis, we must assume B to be as rational as possible.
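The Bayesian posterior computation that defines this experimental evidence can be sketched for a discrete θ^b; the prior and likelihood values below are purely illustrative:

```python
import numpy as np

def posterior(prior, lik):
    """Bayes: P(theta^b = u_j | z^b, tau) is proportional to prior_j * lik_j."""
    w = np.asarray(prior) * np.asarray(lik)
    return w / w.sum()

# Illustrative numbers: a flat prior over three values of theta^b,
# and likelihoods p_B(z^b | tau, u_j) evaluated at the observed data
prior = [1/3, 1/3, 1/3]
lik = [0.9, 0.4, 0.1]
q = posterior(prior, lik)
print(q)  # e.g. q_1 = 0.9 / 1.4 under the flat prior; entries sum to 1
```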
In any case, we fix b and j from now on, and take q = q^b_j = P(θ^b = u^b_j | data), an epistemic probability.
I have to make precise in some way what is meant by the rationality of the experimentalist B. He has to make many difficult choices on the basis of uncertain knowledge. His actions can be based partly on intuition, partly on experience from similar situations, partly on a common scientific culture and partly on advice from other persons. These other persons will in turn have their intuition, their experience and their scientific education. Often B will have certain explicitly formulated principles on which to base his decisions, but sometimes he may have to dispense with some of the principles. In the latter case, he has to rely on some 'inner voice', a conviction which tells him what to do.
So in the case where B cannot himself be seen as a perfectly rational Bayesian, a case that I will concentrate on below, I will formalize this by introducing a perfectly rational superior actor D, to which all these principles, experiences and convictions can be related. I will assume that B in his actions is inspired by D, so in this sense D has some influence on B's decisions. I may assume that D has priors, so that he can do a Bayesian analysis. These priors can be based on symmetry considerations, but also on other considerations. The experimental evidence will then be defined as the posterior probability of the variable θ^b from D's point of view, say the probability q that θ^b takes some fixed value u^b_j, given the data. By the FGLP this must again be a function of the likelihood effect F^b:

q = q(F^b(z^b, τ) | τ),  (4)

under the assumption that the experiment connected to some variable θ^b is to be done. Alternatively, I may also assume that D is a frequentist, and that he has epistemic probabilities connected to θ^b as found from a confidence distribution; see Schweder and Hjort (2002). Again, by the focused likelihood principle, these epistemic probabilities must be of the form (4).
In any case, for the derivation of Born's formula below, I will make one more crucial assumption: q as defined here gives the real probability that θ^b takes the fixed value u^b_j in the given context. The superior actor D represents the scientific ideals of the experimentalist B, and my main point is that D should be perfectly rational. In this article I have not tried to develop a theory of decisions. In Helland (2023a,c) I have argued that there is a close connection between the foundation of quantum mechanics and quantum decision theory. Here one must be careful, however. Quantum decision theory takes as its point of departure the ordinary mathematical formulation of quantum theory, in particular Born's rule for calculating probabilities. Using this theory here, where I am preparing to derive Born's rule, would lead to circular reasoning. Instead, I will now take decision as a primitive concept.
An important point is that decisions made by our minds are not the same as straightforward computer-like calculations. Human decisions are based on the functioning of and the interplay between conscious and subconscious processes in the brain.
As said, in a scientific connection we assume that D is perfectly rational. This can be formalized mathematically by considering a hypothetical betting situation for D against a bookie, nature N. A similar discussion was recently given, in a more abstract language, by Hammond (2011). But note: I do not see any human scientist, including myself, as being perfectly rational in all situations. We can try to be as rational as possible, but we have to rely on some underlying rational ideals that partly determine our actions.
So let the hypothetical odds of a given bet for D be (1 − q)/q to 1, where q is the probability as defined by (4). This odds specification is a way to make precise that, given the context τ and given the question b, the bettor's probability that the experimental result takes some value, say u_{bj}, is given by q: For a given utility measured by x, the bettor D pays in an amount qx, the stake, to the bookie. After the experiment the bookie pays out an amount x, the payoff, to the bettor if the result of the experiment takes the value θ_b = u_{bj}; otherwise nothing is paid. The rationality of D is formulated in terms of

The Dutch Book Principle. No choice of payoffs in a series of bets shall lead to a sure loss for the bettor.
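The fairness built into this odds specification can be sketched in a few lines. This is only an illustrative calculation, not part of the formal argument; the function name and numbers are my own:

```python
# Sketch of the betting scheme described above (illustrative values only):
# the bettor stakes q*x and receives the payoff x with probability q.
def expected_net_gain(q: float, x: float) -> float:
    """Expected net gain for the bettor: expected payoff minus stake."""
    stake = q * x
    expected_payoff = q * x  # the payoff x occurs with probability q
    return expected_payoff - stake

# At odds (1 - q)/q to 1 the bet is fair: the expected net gain is zero.
print(expected_net_gain(0.3, 100.0))  # → 0.0
```

Any systematic deviation of the stake from qx would give one side a positive expected gain; the Dutch Book Principle sharpens this into a no-sure-loss requirement over several bets.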
For a related use of the same principle, see Caves et al. (2002). It is very important that the principle is related to a fixed context; the hypothetical superior bettor D is also bounded by this context. This also has consequences for my derivation of Born's formula below.
It is also important that this whole discussion is limited to a context where the observer B during his decision has just two related maximal variables in his mind. The superior, hypothetical bettor D must then also be related to such a context. This is important in the derivation of Born's formula below, and it is also important when I later generalize to macroscopic decisions. Several recent articles discuss quantum cognition as modeled by quantum probabilities; for a recent review, see Pothos and Busemeyer (2022). In my opinion, such models could be based upon a derivation of Born's formula and a simple model for a person's decisions. But this model should then be limited to decisions between two related maximal variables. The superior, hypothetical actor D must also be seen in this light.
Assumption 1. Consider in the context τ an epistemic setting where the FGLP is satisfied, and where the whole situation is observed by an experimentalist B whose decisions are influenced by a superior actor D as described above. Assume that D's probabilities q given by (4) are taken as the experimental evidence, and that D can be seen to be rational in agreement with the Dutch Book Principle.
A situation where Assumption 1 holds will be called a rational epistemic setting. It will be assumed to hold in the essential situations of quantum mechanics. Later I will discuss whether or not it can also be coupled to certain macroscopic situations.
Theorem 1. Assume a rational epistemic setting, and assume a fixed context τ. Let F_1 and F_2 be two likelihood effects in this setting, and assume that F_1 + F_2 is also an effect. Then the experimental evidences, taken as the epistemic probabilities related to the data of the performed experiments, satisfy

q(F_1 + F_2 | τ) = q(F_1 | τ) + q(F_2 | τ).

Proof. The result of the theorem is obvious, without making Assumption 1, if F_1 and F_2 are likelihood effects connected to experiments on the same variable θ_b. We will prove it in general. Consider then two experiments 1 and 2: experiment 1 with variable θ_b and likelihood effect F_1, and experiment 2 with theoretical variable θ_c and likelihood effect F_2, and let B choose between these two experiments. Let z be the data of the chosen experiment, and let the corresponding posterior probabilities / confidence probabilities connected to D be q_1 = P(θ_b = u_{bj} | z, τ) = q(F_1|τ) and q_2 = P(θ_c = u_{ck} | z, τ) = q(F_2|τ) for some j and k.
Assume that B makes a randomized choice: With an unbiased coin he chooses between experiments 1 and 2. The likelihood effect connected to the whole experiment, including the coin toss, is (F_1 + F_2)/2. Let q_0 be the posterior probability / confidence probability for D connected to this full experiment, including the randomization. One might perhaps argue at once that q_0 must be equal to (q_1 + q_2)/2, but I will show in detail that this follows from the Dutch Book Principle.
Let D make his bets: one for experiment 1, one for experiment 2, and one for the full randomized experiment. Let the corresponding payoffs chosen by Nature be x_1, x_2 and x_0. Imagine that all this, including the randomization, is repeated a large number of times.
If experiment 1 occurs in the randomization, the payoff for the randomized experiment is replaced by the expected payoff x_0/2, and similarly if experiment 2 occurs. The net expected amount the bettor receives is then linear in the payoffs (x_1, x_2, x_0); this conclusion may be drawn from many repeated experiments. The payoffs (x_1, x_2, x_0) can be chosen by nature N in such a way that this leads to a sure loss for the bettor D, unless the determinant of this linear system is zero. Thus we must have

q_0 = (q_1 + q_2)/2, that is, q((F_1 + F_2)/2 | τ) = q(F_1|τ)/2 + q(F_2|τ)/2.

If F_1 + F_2 is an effect, the common factor 1/2 can be removed by changing the likelihoods, and the result follows.
Corollary 1. Assume a rational epistemic setting in the context τ. Let F_1, F_2, . . . be likelihood effects in this setting, and assume that F_1 + F_2 + · · · is also an effect. Then

q(F_1 + F_2 + · · · | τ) = q(F_1|τ) + q(F_2|τ) + · · · .

Proof. The finite case follows immediately from Theorem 1. The infinite case then follows from monotone convergence.
The result of this Section is quite general. In particular, the loss function and any other criterion for the success of the experiments are arbitrary. So far I have assumed that the choice of experiment b is given, which implies that it is the same for B and for D. However, the result also applies to the following different situation: Let B have some definite purpose for his experiment, and to achieve that purpose, he has to choose the question b in a clever manner, as rationally as he can. Assume that this rationality is formalized through the actor D, who has the ideal likelihood effect F and the experimental evidence q(F|τ). If two such questions can be chosen, the result of Theorem 1 holds, with essentially the same proof.
5 The Born formula

The basic formula
Born's formula is the basis for all probability calculations in quantum mechanics. In textbooks it is usually stated as a separate axiom, but it has also been argued for by using various sets of assumptions; see Helland (2008) and Campanella et al. (2020) for some references. In fact, the first argument for the Born formula, assuming that there is an affine mapping from the set of density operators to the corresponding probability functions, is due to von Neumann (1927); see Busch et al. (2016). In Helland (2006), Helland (2008) and Helland (2010) the formula was proved under rather strong assumptions. Here I will use assumptions which are as weak as possible; I will base the discussion upon the results of the previous Sections.
I begin with a very elegant recent theorem by Busch (2003). For completeness I reproduce the proof for the finite-dimensional case in Appendix 2.
Let in general H be any separable Hilbert space. Recall that an effect F is any self-adjoint operator on the Hilbert space with eigenvalues in the range [0, 1]. A generalized probability measure µ is a function on the effects with the properties:

(1) 0 ≤ µ(F) ≤ 1 for all effects F;
(2) µ(I) = 1;
(3) µ(F_1 + F_2 + · · ·) = µ(F_1) + µ(F_2) + · · · whenever F_1 + F_2 + · · · is an effect.

Theorem 2 (Busch, 2003). Any generalized probability measure µ is of the form µ(F) = trace(ρF) for some density operator ρ.
It is now easy to see that q(F|τ) on the likelihood effects of the previous Section is a generalized probability measure if Assumption 1 holds: (1) follows since q is a probability; (2) since F = I implies that the likelihood is 1 for all values of the theoretical variable; finally, (3) is a consequence of the Corollary of Theorem 1. Hence there is a density operator ρ = ρ(τ) such that p(z|τ) = trace(ρ(τ)F) for all ideal likelihood effects F = F(z). This is a result which is valid for all experiments.
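Conversely, Theorem 2's trace form can be illustrated numerically: for any density operator ρ, the map µ(F) = trace(ρF) satisfies properties (1)-(3). A minimal sketch (the dimension, the random construction, and the scaling of the effects are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # illustrative dimension

def random_density(n):
    # A @ A^dagger is positive semidefinite; normalize to unit trace
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

def random_effect(n, scale=0.5):
    # Self-adjoint with spectrum in [0, scale]; scale=0.5 keeps F1+F2 an effect
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    H = (A + A.conj().T) / 2
    lo, hi = np.linalg.eigvalsh(H)[[0, -1]]
    return scale * (H - lo * np.eye(n)) / (hi - lo)

rho = random_density(n)
mu = lambda F: np.trace(rho @ F).real

F1, F2 = random_effect(n), random_effect(n)
assert 0 <= mu(F1) <= 1                          # property (1)
assert np.isclose(mu(np.eye(n)), 1.0)            # property (2): mu(I) = 1
assert np.isclose(mu(F1 + F2), mu(F1) + mu(F2))  # property (3): additivity
```

Property (3) here is just linearity of the trace; the substance of Busch's theorem is the converse direction, that every generalized probability measure arises this way.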
The problem of defining a generalized probability on the set of effects is also discussed in Busch et al. (2016).
Define now a perfect experiment as one where the measurement uncertainty can be disregarded. The quantum mechanical literature operates very much with perfect experiments which result in well-defined states |j⟩. From the point of view of statistics, if, say, the 99% confidence or credibility region of θ_b is the single point u_{bj}, we can infer approximately that a perfect experiment has given the result θ_b = u_{bj}. In our epistemic setting then: We have asked the question 'What is the value of the accessible variable θ_b?', and are interested in finding the probability of the answer θ_b = u_{bj} through a perfect experiment. If u_{bj} is a non-degenerate eigenvalue of the operator corresponding to θ_b, this is the probability of a well-defined state |b; j⟩. Assume now that this probability is sought in a setting defined as follows: We have previous knowledge of the answer θ_a = u_{ak} of another maximal question: 'What is the value of θ_a?' That is, we know the state |a; k⟩. (u_{ak} is non-degenerate.) These two experiments, the one leading to |a; k⟩ and the one leading to |b; j⟩, are assumed to be performed in equivalent contexts τ.
Theorem 3 [Born's formula]. Assume a rational epistemic setting. In the above situation we have:

P(θ_b = u_{bj} | θ_a = u_{ak}) = |⟨a; k|b; j⟩|².

Proof. By the theory in Helland (2024a), both the variable θ_a and the variable θ_b have operators with non-degenerate eigenvalues. Fix j and k, let |v⟩ be either |a; k⟩ or |b; j⟩, and consider likelihood effects of the form F = |v⟩⟨v|. This corresponds in both cases to a perfect measurement of a maximally accessible parameter with a definite result. By Theorem 2 above there exists a density operator ρ_{a,k} = ∑_i π_i(τ_{a,k}) |i⟩⟨i| such that q(F|τ_{a,k}) = ⟨v|ρ_{a,k}|v⟩, where the π_i(τ_{a,k}) are non-negative constants adding to 1. Consider first |v⟩ = |a; k⟩. For this case one must have

∑_i π_i(τ_{a,k}) |⟨i|a; k⟩|² = 1.

This implies for each i that either π_i(τ_{a,k}) = 0 or |⟨i|a; k⟩| = 1. Since the last condition implies |i⟩ = |a; k⟩ (modulo an irrelevant phase factor), and this is a condition which can only be true for one i, it follows that π_i(τ_{a,k}) = 0 for all other i than this one, and that π_i(τ_{a,k}) = 1 for this particular i. Summarizing this, we get ρ_{a,k} = |a; k⟩⟨a; k|, and setting |v⟩ = |b; j⟩, Born's formula follows, since q(F|τ_{a,k}) in this case is equal to the probability of the perfect result θ_b = u_{bj}.
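The formula of Theorem 3 is easy to evaluate in concrete cases. A minimal qubit sketch (the choice of states is mine, purely for illustration): prior knowledge θ_a = +1 for spin along z, question about θ_b = +1 for spin along x.

```python
import numpy as np

# |a; k>: spin-up eigenvector of sigma_z; |b; j>: spin-up eigenvector of sigma_x
up_z = np.array([1.0, 0.0])
up_x = np.array([1.0, 1.0]) / np.sqrt(2.0)

# Born's formula: P(theta_b = u_bj | theta_a = u_ak) = |<a;k|b;j>|^2
born = abs(np.vdot(up_z, up_x)) ** 2
print(born)  # → 0.5
```

For these two maximally accessible spin variables the answer to the second question is maximally uncertain given a sharp answer to the first, as expected for complementary questions.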

Consequences
Here are three easy consequences of Born's formula:

1. If the context of the system is given by the state |a; k⟩, and A_b is the operator corresponding to the variable θ_b, then the expected value of a perfect measurement of θ_b is ⟨a; k|A_b|a; k⟩.

2. If the context is given by a density operator ρ, and A is the operator corresponding to the variable θ, then the expected value of a perfect measurement of θ is trace(ρA).

3. In the same situation the expected value of a perfect measurement of f(θ) is trace(ρ f(A)).
Proof of 1. By the spectral theorem, A_b = ∑_j u_{bj} |b; j⟩⟨b; j|, so by Born's formula the expected value is ∑_j u_{bj} |⟨a; k|b; j⟩|² = ∑_j u_{bj} ⟨a; k|b; j⟩⟨b; j|a; k⟩ = ⟨a; k|A_b|a; k⟩.
A consequence of 3. above is that θ = θ_b does not need to be maximal in order for a Born formula to be valid; see also below.
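Consequences 2 and 3 can be checked numerically for a random pure state and a random self-adjoint operator (a sketch with arbitrary dimension and seed; f = exp is just one convenient choice):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random pure state and its density operator (illustrative, dimension 3)
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

# Random self-adjoint operator A for the variable theta
H = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (H + H.conj().T) / 2
vals, vecs = np.linalg.eigh(A)

# Born probabilities for a perfect measurement of theta
probs = np.array([abs(np.vdot(vecs[:, j], psi)) ** 2 for j in range(3)])

# Consequence 2: E[theta] = trace(rho A)
assert np.isclose(probs @ vals, np.trace(rho @ A).real)

# Consequence 3: E[f(theta)] = trace(rho f(A)), with f(A) via the spectral theorem
f = np.exp
fA = vecs @ np.diag(f(vals)) @ vecs.conj().T
assert np.isclose(probs @ f(vals), np.trace(rho @ fA).real)
```

The point of the check is that computing moments through the eigenvalue distribution and through the trace formula gives the same answer, which is exactly what consequences 2 and 3 assert.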
As an application of Born's formula, we give the transition probabilities for electron spin. For a given direction a, define the variable θ_a as +1 if the spin component measured by a perfect measurement for the electron is +ħ/2 in this direction, and θ_a = −1 if the component is −ħ/2. Assume that a and b are two directions in which the spin component can be measured.
Proposition 3. For the qubit spin components we have

P(θ_b = +1 | θ_a = +1) = cos²(α/2),   P(θ_b = −1 | θ_a = +1) = sin²(α/2),

where α is the angle between the directions a and b. This is proved in several textbooks, for instance Holevo (2001), from Born's formula. A similar proof using the Pauli spin matrices is also given in Helland (2010).
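The proposition can be verified numerically from Born's formula and the Pauli matrices. A sketch with illustrative directions; the helper `up_state` is mine, not notation from the text:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def up_state(direction):
    """+1 eigenvector of the spin component along a unit vector."""
    n = np.asarray(direction, dtype=float)
    n /= np.linalg.norm(n)
    op = n[0] * sx + n[1] * sy + n[2] * sz
    vals, vecs = np.linalg.eigh(op)
    return vecs[:, np.argmax(vals)]  # eigenvalue +1

a = np.array([0.0, 0.0, 1.0])                    # z-direction
b = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)     # 45 degrees from z

p = abs(np.vdot(up_state(a), up_state(b))) ** 2  # Born probability
alpha = np.arccos(np.clip(a @ b, -1.0, 1.0))     # angle between a and b
assert np.isclose(p, np.cos(alpha / 2) ** 2)     # Proposition 3
```

The arbitrary phases of the numerically computed eigenvectors drop out because only the absolute value of the inner product enters Born's formula.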

Perfect measurements
Measurements of theoretical variables are discussed in Helland (2021); here I will look at the case of a perfect measurement. Assume that we know the state |ψ⟩ of a system, and that we want to measure a new variable θ_b. This can be discussed by means of the projection operators Π_{bj} = |b; j⟩⟨b; j|. First observe that, by a simple calculation from Born's formula,

P(θ_b = u_{bj} | ψ) = ⟨ψ|Π_{bj}|ψ⟩ = ‖Π_{bj}|ψ⟩‖².   (6)

Both the Born rule and the well-known collapse rule were recently derived simultaneously from a knowledge-based perspective (2017). I say more about the collapse rule in Helland (2021), but in this article I will just assume this derivation as given. Then, after a perfect measurement θ_b = u_{bj} has been obtained, the state changes to

|ψ⟩ → Π_{bj}|ψ⟩ / ‖Π_{bj}|ψ⟩‖.

Successive measurements are often of interest. We find

P(θ_b = u_{bj} and then θ_c = u_{ci} | ψ) = ‖Π_{ci} Π_{bj} |ψ⟩‖².

In the case with multiple eigenvalues, the formulae above are still valid, but the projectors Π_{bj} above must be replaced by projectors upon eigenspaces. One can show that (6) then gives a precise version of Born's rule for this case.
Proof. Look first at the case with unique eigenvalues. Then Born's rule says

P(θ_b = u_{bj} | ψ) = |⟨b; j|ψ⟩|² = ⟨ψ|Π_{bj}|ψ⟩.

Let then the eigenvalues move towards coincidence. Let S be the set of indices j for which the eigenvalues coincide in the limit, with common value u, and let Π_u = ∑_{j∈S} Π_{bj} be the projector upon the corresponding eigenspace. Then by continuity from the previous equation we get

P(θ_b = u | ψ) = ∑_{j∈S} |⟨b; j|ψ⟩|² = ⟨ψ|Π_u|ψ⟩.

Note that in general P(θ_b = u_{bj} and then θ_c = u_{ci} | ψ) ≠ P(θ_c = u_{ci} and then θ_b = u_{bj} | ψ). Measurements do not necessarily commute.
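That successive measurements need not commute is easy to exhibit numerically. A minimal sketch (the state and the two questions, spin along z and spin along x, are illustrative choices of mine):

```python
import numpy as np

# A generic qubit state (hypothetical example)
psi = np.array([np.cos(0.3), np.sin(0.3)], dtype=complex)

# Pi_b: projector for theta_b = +1 (spin along z)
Pb = np.array([[1, 0], [0, 0]], dtype=complex)
# Pi_c: projector for theta_c = +1 (spin along x)
x_up = np.array([1.0, 1.0]) / np.sqrt(2.0)
Pc = np.outer(x_up, x_up)

def prob_then(P1, P2, psi):
    """P(first result, then second result | psi) = ||P2 P1 psi||^2."""
    return np.linalg.norm(P2 @ (P1 @ psi)) ** 2

p_bc = prob_then(Pb, Pc, psi)  # theta_b first, then theta_c
p_cb = prob_then(Pc, Pb, psi)  # theta_c first, then theta_b
assert not np.isclose(p_bc, p_cb)  # order matters: the projectors do not commute
```

For commuting projectors the two orderings would give identical probabilities; here they differ because Π_b Π_c ≠ Π_c Π_b.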

Generalizations
Using a suitable projection, the formula can be generalized to the case where also the accessible variable θ_a is not necessarily maximal. There is also a variant for a mixed state involving θ_a. First, define the mixed state associated with any accessible variable θ. We need the assumption that there exists a maximal accessible variable η such that θ = f(η) and such that each distribution of η, given some θ = u, is uniform. Furthermore, some probability distribution of θ is assumed. Let Π_u be the projection of the operator of θ upon the eigenspace associated with θ = u. Then define the mixed state operator

ρ = ∑_i π_i |ψ_i⟩⟨ψ_i|,

where |ψ_i⟩ is the state vector associated with the event η = v_i for the maximal variable η, and the probabilities π_i = P(η = v_i) are induced by the distribution of θ together with the uniform conditional distributions.
From this, we can easily show from (1) (assuming that the maximal η_a corresponding to θ_a is also a function of φ) that in general

P(θ_b = v | ρ_a) = trace(ρ_a Π_{bv}),

with an obvious meaning given to the projection Π_{bv}.
An important observation is that this result is not necessarily associated with a microscopic situation. The result can also be generalized to continuous theoretical variables by first approximating them by discrete ones. For continuous variables, Born's formula is most easily stated in the form

P(θ_b ∈ B | ρ_a) = trace(ρ_a Π_b(B)),   (9)

where Π_b(B) is the spectral projection of the operator of θ_b associated with the Borel set B. Note again that we in this formula do not assume that the accessible variable θ_b is maximal. Hence a corresponding formula is also valid for any function of θ_b, for instance exp(iθ_b x) for some fixed x. The operator corresponding to a function of θ_b can be found from the spectral theorem. From this, the probability distribution of θ_b, given the information in ρ_a, can be recovered.
I can also generalize to the case where the final measurement is not necessarily perfect. Let us assume future data z_b instead of a perfect theoretical variable θ_b. Strictly speaking, for this case the focused likelihood principle is still valid under the following condition: p(z_b | θ_b = u_j) = p(z_b | θ_b = u_k) implies u_j = u_k. This will not be needed here. We can define an operator corresponding to z_b by

B_{z_b} = ∑_j p(z_b | θ_b = u_j) Π_{bj},

and, conditioning upon the events θ_b = u_j and following version (9) of the Born formula, we obtain

p(z_b | ρ_a) = trace(ρ_a B_{z_b}).   (12)

8 Intersubjectivity and QBism

Consider two remote observers O_1 and O_2 who perform joint measurements on a system S. Let their observations at time t be θ_1(t) and θ_2(t), and let these correspond to operators M_1(t) and M_2(t). Khrennikov (2024) considers this situation, and assumes that [M_1(t), M_2(t)] = 0. Is this possible?
In my terminology this is only possible if θ_1(t) and θ_2(t) are both accessible, in the sense that they can be given meaning at the same time. They can then not each be maximal, but one can imagine a situation where the vector (θ_1(t), θ_2(t)) is maximal. At least it has to be accessible to some agent. This agent can be a third observer, observing both O_1 and O_2.
Khrennikov then refers to a theorem due to Ozawa (2019): Two observers performing joint local and probability-reproducible measurements of the same observable A on the system S should get the same outcome with probability 1. He says that this challenges QBism.
This last challenge is met by Schack (2023). His arguments are based on the quantum formalism and the QBist interpretation of this formalism. I will not here go into his detailed mathematics, only his interpretation of this mathematics. Here are two citations: 'The quantum formalism is a tool that any agent can use to optimize their choice of actions.' 'The quantum formalism does not describe nature in absence of agent, but instead is normative, i.e., answers the question of how one should act.' I agree completely that quantum probabilities should be attached to an agent (or to a communicating group of agents), but here the agreement stops. First, I will allow any agent, not only one that is familiar with the quantum formalism. Next, I look upon quantum probabilities as descriptive, not normative. This is also my background for interpreting Ozawa's theorem.
Here is a citation from Section 3 in Schack's article: 'As a mathematical result, Ozawa's theorem says nothing about intersubjectivity or different observers. To arrive at their interpretation, both Ozawa and Khrennikov have to make the additional assumption that their scenario - two different observers interacting with a system followed by measurements on the meters - describes two different observers measuring the same system observable.' He then goes on to argue that this assumption is incompatible with QBism. He says that from a QBist perspective, Ozawa's theorem is about measurements that a single agent, say, Eve, contemplates performing on a system and two meters. The assumption that the theorem is about measurement results of two different observers violates QBism's key tenet that the quantum formalism should be viewed as a single-agent theory.
If this is the case, I disagree with the main basis of QBism. In my view, one can well imagine two observers O_1 and O_2 measuring the same system. But then it must be done in such a way that the vector of results (θ_1, θ_2) is accessible to some agent. What does this mean? As I see it, it means that a third observer, you may well call her Eve, may be able to observe O_1 and O_2 during their measurements, and then be able to record their results all the time. So one can well regard quantum mechanics as a single-agent theory. From the point of view of Eve here, it can be taken to describe what she observes.
However, note that my basic mathematical theory (Helland, 2024a,b) can be interpreted in two directions. It can be seen as a single-agent theory, but it can also be seen as a theory of the joint minds of a group of communicating actors, where their communication involves the relevant theoretical variables. Zwirn (2024), in his discussion of the Khrennikov/QBist articles, considers intersubjectivity from the point of view of Convivial Solipsism. ConSol describes the interaction of two observers that are not initially communicating. What one observer then learns about the other observer's values is seen as a measurement. Inside ConSol, everything is relative to one unique observer. I can agree with this if the 'unique observer' also includes groups of observers that have initially communicated on everything that is relevant. Each such 'unique observer' will have his or her own perspectival reality. In this article I will not go into this in detail. I will only say that I largely agree with both ConSol and QBism concerning how an agent learns a new result and then updates his or her own state. To me, a pure state always belongs to a 'unique observer', and it can always be seen as the result of asking a maximal question to nature or to another agent, and then obtaining a definite answer. Depending on the situation, this answer may be known, or it can just be the expectation of an agent. I see the state concept as useful also in the last case.
So, answering a question raised by a referee, I agree with the QBist statement that different measurers may assign different (pure) states to a system. However, I see the whole of quantum theory as relative to a fixed inaccessible variable φ, varying on a space Ω_φ. In simple physical systems, such a φ may easily be found, but, depending on our philosophy, the general interpretation of φ may vary.
Taking for simplicity φ to be discrete, and assuming that for some higher being called God, φ takes some value u, this value describes a very abstract pure state |ψ⟩, which could be called 'the wave function of the world'. Personally, I see this concept as being less fruitful. Neither u nor |ψ⟩ could be observed by any human observer. I look upon φ as a variable, an inaccessible one, and the values that it may take are just hypothetical.

9 Interpretation and foundation of quantum mechanics
Unfortunately, there are many different, mutually incompatible interpretations of quantum mechanics. The relevant Wikipedia article mentions 16 different interpretations. QBism is one of them. There is a large literature on QBism, some of it referred to in Schack's article. I agree with much of what is written in this literature, but, as stated in the previous sections, I disagree with their views on quantum probabilities.
The question of when one shall use classical probabilities and when one shall use quantum probabilities is crucial; see Pothos and Busemeyer (2023) and references there, and, from a statistical point of view, an example in Subsection 5.6.4 in Helland (2021). From a QBist point of view, this question is taken up in DeBrota et al. (2020a,b). The crucial argument is at the beginning of DeBrota et al. (2020a): Assume a POVM {D_j} and a mixed quantum state ρ belonging to some agent. Then the Born rule gives probabilities Q(D_j) = tr(ρD_j) for the outcomes of the agent's measurement. The main concept of the papers is that of minimal informationally complete POVMs (MICs). These sets of operators form bases for the vector space of Hermitian operators and lead to probability distributions with the fewest entries necessary for reconstructing the quantum state.
The argument runs as follows: In addition to the actual measurements, assume 'measurements in the sky' {H_i}. Let P(H_i) be their probabilities, and P(D_j|H_i) the conditional probabilities, given H_i, for subsequent measurements of {D_j}. Furthermore, let {σ_j} be a basis in the state space, and define the matrix Φ by its inverse [Φ^{−1}]_{ij} = tr(H_i σ_j). Then in op. cit. the following basic quantum law is shown, in matrix notation:

Q(D) = P(D|H) Φ P(H).

This is seen as a quantum version of the law of total probability. Note that while P(H), P(D|H), and Q(D) are probabilities, ΦP(H) often is not. This is the basis for studying MICs defined by {H_i}. In particular, the symmetric IC POVM (SIC) is a MIC for which all the H_i are of rank 1. The existence of SICs in all dimensions is an open mathematical problem. In DeBrota et al. (2020b), the MICs are studied in detail, and it is shown how these MICs illuminate the structure of quantum theory and how it departs from the classical.
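As a sanity check, this quantum law of total probability can be verified numerically for a qubit SIC. This is a sketch under the assumption that, for a SIC in dimension d, the action of Φ on P(H) reduces to the coefficients (d+1)P(H_i) − 1/d; the tetrahedral Bloch vectors and all variable names are illustrative choices of mine:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)
d = 2

# Qubit SIC: four Bloch vectors forming a regular tetrahedron
ns = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
Pis = [(I + n[0] * sx + n[1] * sy + n[2] * sz) / 2 for n in ns]  # projectors
Hs = [Pi / d for Pi in Pis]                                      # SIC POVM, sums to I

# A random pure state as the agent's state rho
rng = np.random.default_rng(2)
v = rng.normal(size=2) + 1j * rng.normal(size=2)
v /= np.linalg.norm(v)
rho = np.outer(v, v.conj())

# A subsequent two-outcome measurement {D_0, D_1} (here: spin along z)
Ds = [np.array([[1, 0], [0, 0]], dtype=complex),
      np.array([[0, 0], [0, 1]], dtype=complex)]

for D in Ds:
    Q = np.trace(rho @ D).real                       # Born probability
    pH = [np.trace(rho @ H).real for H in Hs]        # P(H_i)
    pDH = [np.trace(Pi @ D).real for Pi in Pis]      # P(D_j | H_i)
    total = sum(((d + 1) * p - 1 / d) * c for p, c in zip(pH, pDH))
    assert np.isclose(Q, total)                      # quantum law of total probability
```

The coefficients (d+1)P(H_i) − 1/d can be negative, which is exactly the sense in which ΦP(H) fails to be a probability vector while the relation as a whole still reproduces the Born probabilities.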
All these are interesting investigations. There is a SIC version of the Born rule, and this SIC solution is shown to be optimal in at least two different ways. But note that this all depends on the 'measurements in the sky' giving H_i's as described above. One can also refer to DeBrota et al. (2021), where the QBist version of the Born rule is given explicitly, and where detailed arguments are given.
My claim is that this does not give the most general version of the Born rule. For this, I refer to the derivation given above, where no such H_i's are assumed. The QBist construction is connected to subjective beliefs, and the P(H_i)'s are subjective probabilities.
To me, the basic concepts are not only belief or knowledge; the basic concept is that of decisions, decisions whose theory can be derived from a fundamental theory based on theoretical variables. You can decide to measure the spin of a silver atom in the x-direction or in the z-direction. As the result of a (precise) measurement you will get different state vectors in the two cases. To me, it is less important whether this is a state of belief or a state of knowledge. The main thing is that you have decided upon a measurement direction and then measured.
This way of thinking carries over to daily life situations. We go through life taking decision after decision, making choice after choice. Each decision is intuitive; it can be based upon our beliefs, or our knowledge, or both. It is interesting that quantum-like models are now beginning to be applied in decision contexts and in related macroscopic contexts, say in psychology and economics; see the work of Andrei Khrennikov and collaborators, and also independent works in quantum decision theory. In Helland (2023a) I try to connect this to my basic approach to quantum foundation.
As I see it, any quantum interpretation should be coupled to a quantum foundation. My views on the quantum foundation are now described in Helland (2024a,b). This naturally leads to what I call a general epistemic interpretation of the theory. It is based upon theoretical variables that are connected to an agent or to a group of communicating agents in some fixed context. Some of these variables are accessible to the agent, others are inaccessible. My first main theorem states that in a situation with two different accessible variables that in this sense are maximal to the agent, there can be defined a Hilbert space H such that all accessible variables are associated with self-adjoint operators in H. The eigenvalues of an operator A coincide with the possible values of the associated variable. An accessible variable is maximal if and only if the associated operator has only one-dimensional eigenspaces.
In general, these results require some symmetry assumptions, but in the discrete case it seems as if these symmetry assumptions can be dispensed with; see Helland (2024a). In the discrete case, a pure state may be identified with a question involving a maximal accessible variable together with a sharp answer to this question. Of course this variable may be a vector, implying several (commuting) partial questions.
In addition to these basic results, I need arguments for the Born formula and for the Schrödinger equation. Both issues are addressed in Helland (2021). My assumptions behind several versions of Born's formula are given above.
What is the price paid for all this? First, some simple axioms are to be assumed. Most of them are rather obvious, but one should be mentioned: There exists an inaccessible variable φ such that all accessible variables are functions of φ. In several physical examples, φ can easily be constructed. One can also discuss purely statistical applications. As a very general axiom, valid for all agents in all possible situations, one can take several points of view; one option is to argue for a religious perspective, see Helland (2023c).
A second price should be mentioned. The theory starts by constructing operators associated with all accessible variables. Pure state vectors are then only introduced as eigenvectors of some physically meaningful operator. This seems to impose a limitation on the superposition principle. This can and should be discussed, but it should be remarked that this theory also includes some entangled state vectors (Helland, 2024b). On the good side, this version of quantum theory leads to a simple understanding of so-called quantum paradoxes, like Schrödinger's cat, the two-slit experiment and Wigner's friend, and there are links towards relativity theory and quantum field theory; see Helland (2023b) and Helland and Parthasarathy (2024).
10 Conclusions

The discussion of quantum foundation and quantum interpretation will probably continue. I have presented my own views in several articles. This leads to a consistent theory, and a theory that can also be explained to outsiders. I see that as a great advantage. In particular, the theory can easily be explained in the discrete case, which has many applications and is treated in very many textbooks. The continuous case can be approached by taking limits from a discrete construction; see again Helland (2021), but it is important also to have an independent basis for this case (Helland, 2024a,b).
The introduction of quantum probabilities requires extra assumptions, as described above. One of these assumptions, the likelihood principle, is related to statistical theory. This opens up for possible communication between statisticians and quantum physicists, a communication that up to now has been very scarce. With the rapid progress of artificial intelligence, which is closely connected to statistics (Hastie et al., 2017), and the fact that several articles connecting artificial intelligence to quantum mechanics are now appearing, see Bharti et al. (2020), Dunjko and Briegel (2019), Dunjko et al. (2016), Rupp (2015), and Zhu et al. (2023), such communication should be treated as being of some importance. It has been a major goal of my approach.
The other assumption behind the Born rule relates to the ideals of the relevant agent. These are assumed to be of a kind that can be modeled by an abstract or concrete higher being, considered by the agent to be perfectly rational. In Helland (2023a), such ideals are discussed in connection with decision processes.
My theory can be taken as a basis for reviewing discussions within the quantum community. In the present article I have considered the recent discussion between Khrennikov (2024) and a couple of QBists. From my point of view, I have stated some arguments against a pure QBist interpretation as the only solution. My own approach leads to a general epistemic interpretation, containing QBism as a special case.
For many years there have been discussions in the statistical community between Bayesians and frequentists, but these discussions have now calmed down. The best statisticians, see e.g. Efron (2015), have used tools from both schools. The discussions have also involved a third school, fiducial inference, founded by Fisher (1930, 1956), then discarded by most statisticians, but in recent years revived by statisticians like Hannig et al. (2016) and Taraldsen and Lindquist (2024). A forum for such discussions has been the international BFF conferences (Bayesian, Frequentist, Fiducial; also informally named Best Friends Forever).
In the future one could hope for extended discussions of this kind on the foundation of empirical science, involving scientists from several cultures, including quantum physicists and statisticians. This will require a common language, a language that has been sought in my recent articles.
(1) p(z_b | τ, θ_b = u_{bj}) = p(z_c | τ, θ_c = u_{cj}) for each j.

(2) Introduce the classes of indices C_i such that p(z_b | τ, θ_b = u_{bk}) = p(z_b | τ, θ_b = u_{bl}) whenever k, l ∈ C_i, and these likelihoods are different when k and l belong to different C_i-classes; similarly D_i for p(z_c | τ, θ_c = u_{ck}). Then we have ∑_{k∈C_i} |b; k⟩⟨b; k| = ∑_{k∈D_i} |c; k⟩⟨c; k| for all i.