How (Not) to Understand Weak Measurements of Velocities

To date, the most elaborated attempt to complete quantum mechanics by the addition of hidden variables is the de Broglie-Bohm (pilot wave) theory (dBBT). It endows particles with definite positions at all times. Their evolution is governed by a deterministic dynamics. By construction, however, the individual particle trajectories generically defy detectability in principle. Of late, this lore might seem to have been called into question in light of so-called weak measurements. Due to their characteristically weak coupling between the measurement device and the system under study, they permit the experimental probing of quantum systems without essentially disturbing them. It's natural therefore to think that weak measurements of velocity in particular make it possible to actually observe the particle trajectories. If true, such a claim would not only experimentally demonstrate the incompleteness of quantum mechanics: it would provide support for dBBT in its standard form, singling it out from an infinitude of empirically equivalent alternative choices for the particle dynamics. Here we examine this possibility. Our result is deflationary: weak velocity measurements constitute no new arguments, let alone empirical evidence, in favour of standard dBBT; in particular, one mustn't naïvely identify weak and actual positions. Weak velocity measurements admit of a straightforward standard quantum mechanical interpretation, independent of any commitment to particle trajectories and velocities. This is revealed by a careful reconstruction of the physical arguments on which the description of weak velocity measurements rests. It turns out that for weak velocity measurements to be reliable, one must already presuppose dBBT in its standard form: in this sense, they can provide no new argument, empirical or otherwise, for dBBT and its standard guidance equation.


Introduction
Since its inception, Quantum Mechanics (QM) has faced three major interpretative conundrums (see e.g. Lewis 2016; Myrvold 2018). The first is the so-called Measurement Problem (see e.g. Maudlin 1995): how are we to make sense of the superpositions of states which the formalism of QM (if assumed to be universally valid) appears to attribute to objects? The second pertains to the interpretation of Heisenberg's uncertainty relations (see e.g. Hilgevoord and Uffink 2016): do they circumscribe an absolute limit on simultaneous knowledge of, say, a particle's momentum and position? Or do they reflect an ontological indeterminacy? Finally, how should one understand entanglement (see e.g. Ney and Albert 2013) - the fact that, generically, composite systems appear to defy an unambiguous description of their individual constituent parts?
These three puzzles culminate in the so-called EPR paradox (see e.g. Redhead 1987, Chapter 3 or Fine 2017). Suppose one widely separates the partners of an entangled pair of particles. They can then no longer interact. Hence we may, according to Einstein, Podolsky and Rosen, "without in any way disturbing the system" perform (and expect a well-defined outcome of) a position measurement on one partner, and a simultaneous momentum measurement on the other (Einstein et al., 1935, p. 777). Prima facie, it looks as if we can thereby bypass the uncertainty relations. This raises the question whether QM in its current form is complete: does every element of physical reality have a counterpart in the description of the QM formalism?
Famously, Einstein thought otherwise (see e.g. Lehner 2014). He was "[...] firmly convinced that the essentially statistical character of contemporary quantum theory is solely to be ascribed to the fact that this [theory] operates with an incomplete description of physical systems" (Einstein, 1949, p. 666). To date, the most elaborated attempt to thus "complete" (cf. Goldstein 2017, Section 4) QM dates back to Bohm (1952a,b) - "Bohmian Mechanics" or, in recognition of de Broglie's earlier proposal, "de Broglie-Bohm theory" (dBBT). (We'll stick to the latter term throughout.) It supplements the QM formalism by a deterministic, but manifestly non-local dynamics for particles. At all times, they occupy determinate positions, evolving continuously in time. Only the particles' exact initial distribution (and the initial wave function) is unknown. Due to this fact, QM emerges from dBBT in a manner "approximately analogous [...] to the statistical mechanics within the framework of classical mechanics" - as Einstein (ibid.) had hoped.
But dBBT isn't free of problems. From its early days on, a principal objection to it targets the unobservability of its particle dynamics. By construction, in dBBT the individual particle trajectories seem to be undetectable in principle. Only their statistical averages are observable. They coincide with the standard quantum mechanical predictions. Thereby, standard dBBT achieves empirical equivalence with QM. Recently, this lore seems to have been called into question in light of a novel type of measurement - so-called weak measurements (Aharonov et al., 1988). These denote setups in which some observable is measured without significantly disturbing the state.
Inspired by Wiseman (2007), eminent advocates of standard dBBT seem to have touted such weak measurements as a means of actually observing individual trajectories in standard dBBT (e.g. Goldstein 2017, Section 4). Moreover, they point to already performed experiments (e.g. Kocsis et al. 2011; Mahler et al. 2016) that appear to corroborate dBBT's predictions and claim to show the particle trajectories.
The present paper will critically examine those claims. Should they hold up to scrutiny, they would not only establish the incompleteness of QM. Almost more spectacularly, they would also furnish the remedy: they would vindicate dBBT in its standard form.
Those claims, we contend, are mistaken: weak measurements constitute no new arguments, let alone empirical evidence, in favour of dBBT's guidance equation. To show this, we'll carefully reconstruct the physical arguments on which the description of weak measurements rests. dBBT is entirely dispensable for a coherent treatment and interpretation of weak measurements; they receive a natural interpretation within standard QM as observational manifestations of the gradient of the wave function's phase. For weak velocity measurements to disclose the particles' actual velocities, one must not only presuppose the prior existence of deterministic (and differentiable) trajectories, but also the specific form of standard dBBT's particle dynamics. We contest Dürr et al.'s suggestion of a legitimate sense in which weak velocity measurements allow a genuine measurement of particle trajectories.
We'll proceed as follows. §2 will revisit de Broglie-Bohm theory - its basics (§2.1), and one of its principal challenges, its empirical underdetermination (§2.2). In §3, we'll turn to weak velocity values. §3.1 will introduce Wiseman's measurement protocol for so-called weak velocity measurements. We'll subsequently illustrate it in the double-slit experiment (§3.2). Our main analysis of the significance of weak measurements for velocities in de Broglie-Bohm theory will form the subject of §4. We'll first elaborate when actual velocities and weak ones (as ascertained in Wiseman's measurement protocol) coincide (§4.1). This will enable a critical evaluation both of Dürr et al.'s claim that weak velocity measurements are in some sense genuine (§4.2) and of the idea that they provide non-empirical support for standard dBBT (§4.3). Our findings will be summarised in §5. A mathematical appendix (§6) contains a concise review of weak interactions within the von Neumann measurement scheme (§6.1), as well as of post-selection and the two-vector formalism (§6.2).

Basics
dBBT is best conceived of as an example of what Popper (1967) dubbed a "quantum theory without observer" (cf. Goldstein 1998; Allori et al. 2008, esp. Section 8): it aspires to provide an understanding of quantum phenomena without fundamental recourse to non-objective (i.e. subjective or epistemic) notions. Such endeavours grew out of the dissatisfaction with influential presentations of QM, notably by von Neumann, Heisenberg and (common readings of) Bohr (see e.g. Jammer 1974; Scheibe 2006, Ch. VIII, IX; Cushing 1996).
In its non-relativistic form, dBBT is a theory about (massive, charged, etc.) particles. At all times, they occupy definite positions. Across time, the particles follow deterministic trajectories. Like a "pilot wave", the quantum mechanical wave function guides them along those paths. Assuming a particular initial distribution of the particles, one recovers the empirical content of QM.
More precisely, for an N-particle system, dBBT can be taken to consist of three postulates. (We closely follow Dürr and Teufel (2009), to whom we refer for all details.)

(SEQ) The wave function Ψ : R^{3N} × R → C satisfies the standard N-particle Schrödinger Equation (SEQ) in the position representation:

iℏ ∂Ψ/∂t = [ − Σ_i (ℏ²/2m_i) ∇_i² + V ] Ψ.  (2.1)

(GEQ) The continuous evolution of the i-th particle's position Q_i(t) : R → R³ in 3-dimensional Euclidean space is generated by the flow of the velocity field

v_i^Ψ = (ℏ/m_i) Im(∇_i Ψ/Ψ).  (2.2)

That is, the particle position Q_i obeys the so-called guidance equation (GEQ)

dQ_i/dt = v_i^Ψ(Q_1, ..., Q_N).  (2.3)

For all relevant types of potentials, unique solutions (up to sets of initial conditions of measure zero) have been shown to exist (Teufel and Tumulka, 2005). Notice that v_i^Ψ depends on all particle positions simultaneously. This is the source of dBBT's manifest action-at-a-distance in the form of an instantaneous non-locality (see e.g. Goldstein 2017, Section 13).
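The guidance equation is easy to make concrete numerically. The following minimal sketch (not from the paper; the units ℏ = m = 1 and the particular wave packet are illustrative assumptions) evaluates the Bohmian velocity field v = (ℏ/m) Im(∇ψ/ψ) for a one-particle, one-dimensional state on a grid:

```python
import numpy as np

hbar = m = 1.0  # illustrative units: hbar = m = 1

def bohm_velocity(psi, dx):
    """Bohmian velocity field v = (hbar/m) * Im(psi'/psi) on a 1-D grid."""
    dpsi = np.gradient(psi, dx)
    return (hbar / m) * np.imag(dpsi / psi)

# Gaussian wave packet modulated by a plane wave: psi ~ exp(-x^2/4) * exp(i k x)
x = np.linspace(-5, 5, 2001)
dx = x[1] - x[0]
k = 2.0
psi = np.exp(-x**2 / 4.0) * np.exp(1j * k * x)

v = bohm_velocity(psi, dx)
# The phase here is S(x) = k x, so the velocity field is uniform: v = hbar*k/m
print(np.allclose(v[1:-1], k, atol=1e-3))  # True
```

For superpositions (as in the double-slit setup of §3.2), the same function yields the familiar non-classical, position-dependent velocity fields.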
(QEH) The wave function induces a natural (and, under suitable assumptions, unique, see Goldstein and Struyve (2007)) measure on configuration space, the so-called Born measure:

P^Ψ(A) = ∫_A |Ψ(q)|² dq.  (2.4)

It quantifies which (measurable) sets of particle configurations Q ⊆ R^{3N} count as large ("typical"). That is: Q counts as typical iff P^Ψ(Q) ≥ 1 − ε, for some small ε > 0 (see Maudlin 2011; Dürr and Struyve 2019; Lazarovici and Reichert 2015 for details; cf. Frigg 2009, 2011). This definition of typicality respects a generalised sense of time-independence. A universe typical in this sense is said to be in quantum equilibrium (see Dürr et al. 1992 for further details). The continuity equation for |Ψ|², obtained from the Schrödinger Equation, implies that a system is in quantum equilibrium at all times if and only if it is in equilibrium at some point in time. This is called the Quantum Equilibrium Hypothesis (QEH).
Consider now a de Broglie-Bohmian N-particle universe, satisfying these three axioms. An M-particle subsystem is said to possess an "effective" wave function Φ if the universal wave function (i.e. the wave function of the universe) Ψ : X × Y → C, with X and Y denoting the configuration space of the subsystem and its environment, respectively, can be decomposed as

Ψ(x, y) = Φ(x) χ(y) + Ψ^⊥(x, y),

where χ and Ψ^⊥ have macroscopically disjoint y-supports and the actual environment configuration lies in supp(χ). That is, the y-configurations on which χ and Ψ^⊥ do not vanish are macroscopically distinct (e.g. correspond to distinct pointer positions). For negligible interaction with their environment, the effective wave function Φ of subsystems can be shown to satisfy the Schrödinger Equation itself.

Underdetermination
The guidance equation 2.3 isn't the only empirically adequate option.
More precisely, for empirical equivalence with QM, the specific guidance equation 2.3 isn't necessary. Infinitely many different choices

v_i^Ψ,alt = v_i^Ψ + j_i/|Ψ|²  (2.5)

are equally possible, for otherwise arbitrary vector fields j whose divergence vanishes, ∇ · j = 0. They yield coherent alternative dynamics with distinct particle trajectories, whilst leaving the predictive-statistical content unaltered (Deotto and Ghirardi, 1998).
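The divergence-free freedom is easy to exhibit numerically. In two dimensions, any smooth scalar function f generates a divergence-free field (∂f/∂y, −∂f/∂x) that may be added to the probability current without affecting the continuity equation for |Ψ|². A minimal sketch (the particular choice of f is arbitrary and purely illustrative):

```python
import numpy as np

# 2-D grid, equal spacing in both directions
n = 200
x = np.linspace(-3, 3, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")

# Any smooth scalar f yields a divergence-free field j_alt = (df/dy, -df/dx)
f = np.exp(-(X**2 + Y**2)) * np.sin(X * Y)
j_alt_x = np.gradient(f, dx, axis=1)    # df/dy  (axis 1 is y)
j_alt_y = -np.gradient(f, dx, axis=0)   # -df/dx (axis 0 is x)

# Divergence of the added current: d(j_x)/dx + d(j_y)/dy = 0 identically
div = np.gradient(j_alt_x, dx, axis=0) + np.gradient(j_alt_y, dx, axis=1)
print(np.max(np.abs(div)))  # ~0, up to floating-point rounding
```

Since the added current is divergence-free, |Ψ|² is transported identically under the modified velocity field: the statistics are untouched, while the individual trajectories differ.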
One needn't even restrict oneself to a deterministic dynamics (an option expressly countenanced by e.g. Dürr and Teufel 2009, Chapter 1.2): a stochastic dynamics, with |Ψ|^{−2} j corresponding to a suitable random variable, can also be introduced. As a result, the particles would perform random walks, with the r.h.s. of the integral equation containing a diffusion term. A proposal of this type is Nelson Stochastics (see e.g. Goldstein 1987; Bacciagaluppi 2005). In short: by construction, dBBT's individual particle trajectories are observationally inaccessible.
In consequence, dBBT is vastly underdetermined by empirical data: all versions of dBBT with guidance equations of the type 2.5 are experimentally indistinguishable. Yet, the worlds described by them clearly differ. (We illustrate this in Figure 1.) This underdetermination poses a challenge to a realist understanding of dBBT (cf. for example Stanford 2017, Chapter 3.2). For the purposes of this paper, we'll confine the class of considered choices to the family of de Broglie-Bohmian-like theories (cf. Dürr and Ehmann 2020, Section 3.4) - i.e. particle theories within the Primitive Ontology framework (see e.g. Allori et al. 2008; Allori 2013, 2015). It encompasses e.g. the "identity-based Bohmian Mechanics" (Goldstein et al., 2005) or "Newtonian QM" (Sebens, 2015). Let's even further whittle down the list of candidate theories to deterministic variants of dBBT with differentiable paths, i.e. to variants of dBBT that differ only with respect to their vector field of the type in Equation 2.5. Still, the underdetermination persists; its severity is scarcely diminished: how is one to justify the particular choice of the standard guidance equation amongst the uncountably infinite alternatives?
An argument frequently cited in response is a result by Dürr et al. (1992, p. 852): "The standard guidance equation is the simplest first-order equation that respects Galilei covariance and time-reversal invariance." But this is not decisive. First, individually neither desideratum of Dürr et al.'s theorem seems compelling - unless one is already guided by intuitions shaped by either classical physics or by standard dBBT itself. In particular, one may reject the ab initio requirement of Galilei covariance as implausible: the Galilei group is the symmetry group of Classical Mechanics. Why impose it on a more fundamental theory - dBBT - which is supposed to supersede Classical Mechanics? Secondly, Dürr et al.'s argument rests on an assumption about how the Galilei group acts on the wave function. As Skow (2010) has argued, such an assumption is essentially unwarranted.
Thirdly, let's grant that a satisfactory answer can be given to the preceding two questions. Dürr et al.'s argument pivotally turns on mathematical simplicity. We confess we'd be hard-pressed to pinpoint what mathematical (rather than, say, ontological) simplicity precisely means. Whatever it may be, as a super-empirical criterion it may well be felt a dubious indicator of truth (see e.g. Van Fraassen 1980, Chapter 4.4; Norton 2000; Norton 2018, Chapter 5-7; Ivanova 2014, 2020). At best we are inclined to regard it as a pragmatic criterion for theory acceptance, at worst an aesthetic one. Can a realist legitimately invoke it to argue that one theory is more likely to be true than an otherwise equally good alternative?
This context - underdetermination - renders weak value measurements particularly interesting. By (prima facie) allowing measurements of individual particle trajectories, they appear to directly overcome dBBT's underdetermination. But wouldn't that contradict the empirical inaccessibility of the trajectories? Let us see.

Weak velocity values
This section will offer a concise review of so-called weak values. We'll first outline how they are harnessed in Wiseman's measurement protocol for weak velocity measurements (§3.1). An application to the double-slit experiment will further illustrate the salient points (§3.2). This will pave the way for our subsequent discussion in §4.

Wiseman's measurement protocol for weak velocity measurements
Following Aharonov et al. (1988), weak measurements are measurement processes (modelled via the von Neumann scheme, see §6.1) in which the interaction between the measurement apparatus ("pointer device") and the particle ("system") is weak: it disturbs the wave function only slightly. As a result, one can extract still further information about the particles (say, regarding their initial momenta) via a subsequent ordinary "strong" (or projective) measurement (say, regarding their positions).
More precisely: after a weak interaction (say, at t = 0), the pointer states aren't unambiguously correlated with eigenstates of the system under investigation. In contradistinction to strong measurements, the system doesn't (effectively) "collapse" onto eigenstates; the particles can't (say) be located very precisely in a single run of an experiment. This apparent shortcoming is compensated for when the weak interaction is combined with a strong measurement a tiny bit later: the experimenter is then able not only to ascertain the individual particle's precise location (via the strong measurement); for a sufficiently large ensemble of identically prepared particles with initial state ψ_in (viz. Gaussian wave packets with a large spread), she can also gain statistical access to the quantity

⟨x⟩_w := Re [ ⟨ψ_fin| x̂ |ψ_in⟩ / ⟨ψ_fin|ψ_in⟩ ],  (3.1)

for all subensembles whose final state - the so-called "post-selected" state - has been detected (in the strong measurement) to be ψ_fin. This quantity is called the "weak position value" (for the position operator x̂). (The concept is straightforwardly applied also to other operators, mutatis mutandis.) It can be shown (see §6.2) that after many runs, the pointer's average position will have shifted by ⟨x⟩_w. Specifically, if we characterise the final/post-selected state via position eigenstates |x⟩, determined in a strong position measurement, together with the unitary evolution of the initial state, we obtain

⟨x(τ)⟩_w = Re [ ⟨x| Û(τ) x̂ |ψ_in⟩ / ⟨x| Û(τ) |ψ_in⟩ ],  (3.2)

where Û(τ) denotes the unitary time evolution operator during the time interval [0, τ]. Following Wiseman (2007), it's suggestive to construe ⟨x(τ)⟩_w as the mean displacement of particles whose position was found (in a strong position measurement at t = τ) to be at x.
From this displacement, a natural definition of a velocity field ensues:

v_w(x) := (x − ⟨x(τ)⟩_w)/τ  (for small τ).  (3.3)

Note that all three quantities entering this velocity field - τ, x and ⟨x(τ)⟩_w - are experimentally accessible. In this sense, the velocity field is "defined operationally" (Wiseman). In what follows, we'll refer to the application of this measurement scheme - a strong position measurement in short succession upon a particle's weak interaction with the pointer - for the associated "operationally defined" velocity field as "Wiseman's measurement protocol for weak velocity measurements", or simply "weak velocity measurements".
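The weak-value structure A_w = ⟨ψ_fin|Â|ψ_in⟩/⟨ψ_fin|ψ_in⟩ (with the real part giving the pointer shift) is easy to explore numerically. The following sketch uses a two-level system of our own choosing (an illustrative assumption, not a setup from the paper) to show the characteristic dependence on post-selection: for nearly orthogonal pre- and post-selected states, the weak value of σ_z lies far outside its eigenvalue range [−1, 1]:

```python
import numpy as np

sigma_z = np.diag([1.0, -1.0]).astype(complex)

def weak_value(pre, post, A):
    """A_w = <post|A|pre> / <post|pre> (complex in general)."""
    return (post.conj() @ (A @ pre)) / (post.conj() @ pre)

pre = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)        # spin along +x
alpha = 3 * np.pi / 4 - 0.05                                  # nearly orthogonal to pre
post = np.array([np.cos(alpha), np.sin(alpha)], dtype=complex)

w = weak_value(pre, post, sigma_z)
print(w.real)  # ~ -20: an "anomalous" weak value, far outside [-1, 1]
```

The eigenvalues of σ_z are ±1, yet the post-selected pointer shift corresponds to roughly −20. This already illustrates why weak values cannot be naïvely read off as pre-existing properties of the system.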
For a better grasp of its salient points, let's now spell out such weak velocity measurements in the context of the double-slit experiment. In §4, it will prove useful to occasionally refer back to this concrete setup.

Weak measurements in the double-slit experiment
Consider the standard double-slit experiment with, say, electrons hitting a screen. The screen enables a detection of the electrons' positions. This constitutes a strong position measurement. Accordingly, we'll dub this screen the strong screen. Between the strong screen and the two slits from which the particles emerge, let a weak measurement of position be performed; call the corresponding device the weak screen. The two screens can be moved to perform measurements at various distances from the double-slit. Suppose that it takes the particles some time τ > 0 to travel from the weak to the strong screen.
After passing through the slits, the electron will be described by the wave function |ψ⟩ = ∫ ψ(x, t) |x⟩ dx. This leads to the familiar double-slit interference fringes. We assume that the weak screen, i.e. the pointer variable, is in a Gaussian ready state with width σ, peaked around some initial position. After the particles have interacted with the measurement device (at time t = 0), the composite wave function |Ψ(0)⟩ of particle-cum-weak screen is

|Ψ(0)⟩ = ∫∫ ψ(x, 0) φ(y − x) |x⟩|y⟩ dx dy.  (3.4)

Here, |φ⟩ denotes the wave function of the weak screen, and y its free variable (e.g. the position of some pointer device). The wave function then evolves unitarily for some time τ, according to the particle Hamiltonian Ĥ:

|Ψ(τ)⟩ = (Û(τ) ⊗ 1) |Ψ(0)⟩.

After weakly interacting, the particle and pointer are entangled. Hence, only the composite wave function - not the reduced state of the pointer - evolves unitarily during time τ. The unitary operator Û(τ) := e^{−(i/ℏ)Ĥτ} only acts on x (not on y). Due to this evolution, the post-selected position x on the strong screen will in general differ from the weak value ⟨x⟩_w, obtained from averaging the conditional distribution of the pointer of the weak screen. The procedure is depicted in Figure 3. On both screens the wave function is slightly washed out. It evidently differs from an undisturbed state (i.e. in the absence of the weak screen). To obtain the two position values - the weak and the strong one - strong measurements are now performed both at the weak and the strong screen (i.e. on the pointer variable and on the target system). For each position outcome x at the strong screen, let's select a subensemble. For any such subensemble, we then read out the statistical distribution of the position measurement outcomes at the weak screen.
We have thus assembled all three observable quantities needed for Wiseman's operationally defined velocity 3.3: the time τ that elapsed between the two measurements, the positions x (obtained as values at the strong screen), and the average value of all positions of the subensemble associated with (i.e. post-selected for) x. This may now be done for different positions x on the strong screen. To that end, move the screens to different locations and there repeat the measurements.
With this method, for a sufficiently large number of measurements, one can eventually map out the velocity field. We'll defer the discussion of how to construe this result to the next section. For now, let's rest content with stating it as a calculational fact, suspending any further conclusions.
Kocsis et al. have indeed performed an experiment of a similar kind, using weak measurements of momentum. Their result, depicted in Figure 2, qualitatively reproduces the trajectories of standard dBBT. (We'll return to this experiment and how to understand it in §4. Here, we mention it primarily to convey an impression of the qualitative features of Wiseman's operational velocity, when experimentally realised.) Moreover, it can be shown (cf. §6.3) that weak velocity measurements are measurements of the gradient of the phase of the wave function. Thus, they coincide with the definition of standard Bohmian velocities in the guidance equation

v(x) = (ℏ/m) ∇S(x),

where S is the phase of the wave function, ψ(x) = |ψ(x)| e^{iS(x)}. Notice that for this, only the standard quantum-mechanical formalism has been utilised. Therefore, we may conclude that - based solely on standard QM - weak velocity measurements permit experimental access to the gradient of the wave function's phase.
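The claim that weak velocity measurements track the gradient of the wave function's phase can be checked numerically. The sketch below (with ℏ = m = 1, free evolution, and a test state of our own choosing, all illustrative assumptions) computes Wiseman's operational velocity (x − ⟨x(τ)⟩_w)/τ for small τ and compares it with (ℏ/m) ∂S/∂x:

```python
import numpy as np

hbar = m = 1.0                      # illustrative units
n, L = 4096, 40.0
x = np.linspace(-L/2, L/2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L/n)

def free_evolve(phi, tau):
    """Exact free-particle time evolution U(tau) via the Fourier transform."""
    return np.fft.ifft(np.exp(-1j * hbar * k**2 * tau / (2*m)) * np.fft.fft(phi))

# Test state with non-trivial phase S(x) = sin(x): psi = |psi| e^{iS}
psi = np.exp(-x**2 / 8.0) * np.exp(1j * np.sin(x))

tau = 1e-4
num = free_evolve(x * psi, tau)     # <x_f| U(tau) x |psi_in>
den = free_evolve(psi, tau)         # <x_f| U(tau) |psi_in>
x_weak = np.real(num / den)         # weak position value, post-selected at x_f = x
v_weak = (x - x_weak) / tau         # Wiseman's operational velocity

v_phase = (hbar / m) * np.cos(x)    # (hbar/m) dS/dx for S = sin x
mask = np.abs(psi) > 1e-2           # compare only where the state has support
print(np.max(np.abs(v_weak[mask] - v_phase[mask])))  # small
```

For small τ, the operational velocity agrees with (ℏ/m) ∂S/∂x wherever the wave function has appreciable support, in line with the calculation cited above; note that nothing de Broglie-Bohmian enters this computation.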
Next, we'll ponder whether commitment to further, generic and supposedly mild interpretative choices (viz. the adoption of a de Broglie-Bohmian framework) might grant us a peep into an allegedly deeper reality, veiled under this standard quantum mechanical interpretation.

Why weak velocity measurements do not measure velocities
Suggestive as these results are, we will now show that such measurements could not provide direct experimental evidence displaying the shape of particle trajectories, even if it is assumed that some deterministic particle trajectories exist. They cannot, that is, go any way towards experimentally resolving the underdetermination in putative dBBT guidance equations mentioned previously. First (§4.1), we'll analyse the relation between Wiseman's operationally defined velocity, Equation 3.3, and the particle's actual velocity. In particular, we'll show that a strong assumption is required, which would render it question-begging to employ weak velocity measurements in order to infer the particles' actual velocities. This analysis will subsequently allow us to critically evaluate two stances regarding the significance of weak velocity values for dBBT - Dürr et al.'s portrayal of weak velocity measurements as allegedly "genuine" measurements (§4.2), and a view of weak velocity measurements as non-empirical support for standard dBBT (§4.3).

When do weak and actual velocities coincide?
Here, we'll address the question of whether - or rather: when - weak velocities coincide with the particles' actual velocities, assuming that the latter exist. That is, we'll explicate the conditions under which weak velocity measurements count as reliable. This, we'll argue, turns out to presuppose standard dBBT.
In the following, x and y will denote the position variables of the individual particle to be measured and the measurement apparatus, respectively. For simplicity, we'll work in one dimension only. Let the particles be prepared in the initial state

|ψ_in⟩ = ∫ ψ(x) |x⟩ dx.  (4.1)

Furthermore, let the pointer device (i.e. the weak screen of the double-slit version of weak measurements in §3.2) be prepared in the initial state given by a Gaussian with large spread σ, centred around 0:

φ(y) = N e^{−y²/(4σ²)},  (4.2)

where N is a suitable normalization factor. Together, the particle and the pointer form the compound system with the joint initial state

Ψ(x, y) = ψ(x) φ(y).  (4.3)

Now consider the first - the weak - measurement process. It consists of an interaction between the particle and the pointer. Upon completion of this process (say at t = 0), the compound system ends up in the entangled state

Ψ(x, y, 0) = ψ(x) φ(y − x).  (4.4)

The probability distribution for the pointer variable y, given some position X of the particle, is therefore

P(y | x = X) ∝ |φ(y − X)|² ∝ e^{−(y−X)²/(2σ²)}.  (4.5)

This probability density determines the expectation value

E(y | x = X) = X.  (4.6)

That is, the mean value of the pointer distribution, conditional on the particle occupying position X, coincides with that position. This underwrites the following counterfactual:

(C_0) If one were to perform an ordinary (strong) position measurement on the particles immediately after the weak interaction, the expectation value would yield the actual position of the particle.
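The statement E(y | x = X) = X is a one-line Gaussian fact, but it can be sanity-checked numerically. A minimal sketch (the grid, σ, and the particular position X are illustrative choices; we take the conditional pointer density ∝ exp(−(y − X)²/2σ²), i.e. a Gaussian ready state of width σ shifted by the particle position):

```python
import numpy as np

sigma = 5.0                       # wide pointer state: weak coupling
y = np.linspace(-40, 40, 8001)    # pointer variable grid
X = 1.7                           # some particle position (arbitrary choice)

# Conditional pointer density |phi(y - X)|^2 for a Gaussian ready state centred at 0
p = np.exp(-(y - X)**2 / (2 * sigma**2))
p /= p.sum()                      # normalise on the grid

mean = (y * p).sum()              # E(y | x = X)
print(mean)  # ~ 1.7 = X
```

Despite the large spread σ (each single run tells us almost nothing), the conditional mean of the pointer recovers the particle position exactly, which is what makes the ensemble statistics of the protocol informative.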
Via E(y|x = X), the particle position thus is empirically accessible through the statistics of large ensembles of identically prepared particles from which we cull post-selected outcomes x = X.
This thought is further exploited in the final steps of Wiseman's procedure. In the foregoing considerations, the strong measurement was performed immediately upon the weak one. Instead, we'll now allow for a small delay. That is, after the particle and the pointer have (weakly) interacted, the total system evolves freely for some small but finite time τ. Its state then is

Ψ(x, y, τ) = e^{−(i/ℏ)Ĥ_0 τ} Ψ(x, y, 0),  (4.7)

where Ĥ_0 denotes the system's free Hamiltonian. Eventually, we perform a strong measurement of the particle's position X_τ at t = τ. (The strong coupling between the measurement device and the particle enables a precise detection of the latter's actual position.) We thus get the expectation value for the pointer variable, conditional on the particle occupying the position X_τ at t = τ:

E(y | x(τ) = X_τ).  (4.8)

Through the statistics of a sub-ensemble of particles whose strong position measurements yielded X_τ, this expectation value is empirically accessible.
In analogy to Equation 4.6, let's define the position

X_0 := E(y | x(τ) = X_τ).  (4.9)

Combined with the particle position X_τ, obtained from the strong measurement at t = τ, it thus appears as if we have access to particle positions at two successive moments. Using Equation 4.9, the associated displacement is

X_τ − X_0.  (4.10)

Let's grant one can make it plausible that the particles' trajectories are differentiable. Then, the displacement (Equation 4.10) gives rise to the velocity field

v(X_τ) := (X_τ − X_0)/τ = (X_τ − E(y | x(τ) = X_τ))/τ.  (4.11)

Note that all terms on the r.h.s. of Equation 4.11 are observable. (Hence, presumably, Wiseman's labelling of 4.11 as an "operational definition".) In conclusion, it seems, via the statistics of an experimental setup implementing Wiseman's procedure, we are able to empirically probe this velocity field.
But what does this velocity field signify? It's tempting to identify it with the particles' actual velocities. Should this be true, the flow of Equation 4.11 generates the particles' trajectories (assumed to be deterministic and differentiable). Is this identification justified?
By defining X_0 := E(y | x(τ) = X_τ) via Equation 4.9, our notation certainly suggests so. Let's indeed assume that this is correct. We'll dub this the "Correspondence Assumption" (COR). That is, suppose that the actual particle position X_τ at t = τ is connected with the earlier particle position x(0) = X_0 = T_{−τ} X_τ at t = 0, where T_{−τ} denotes the shift operator that backwards-evolves particle positions by τ. (In other words: for arbitrary initial positions, T_τ supplies the full trajectory.) Then, according to (COR), the expectation value (4.9) corresponds to the particles' position at t = 0. For post-selection of subensembles with x(τ) = X_τ, (COR) thus takes the form (in the limit of large spread σ):

lim_{σ→∞} E(y | x(τ) = X_τ) = T_{−τ} X_τ.  (4.12)

In other words, (COR) implies the counterfactual:

(C_t) If one were to perform a strong position measurement at t = τ (with the weak interaction taking place at t = 0), yielding the particles' position x(τ) = X_τ, the weak value would be directly correlated with the particles' earlier position T_{−τ} X_τ. That is, upon a strong measurement at t = τ, the expectation value would reveal the particles' true positions:

E(y | x(τ) = X_τ) = T_{−τ} X_τ.  (4.13)

On (COR), the weak value thus gives the particle's actual position at the weak screen: the expectation value on the l.h.s. is reliably correlated with the particle's earlier positions. Most importantly, this is an if-and-only-if condition: if (COR) is satisfied, we recover the actual position; if it is not, we don't. As a result, one has to assume that (COR) is true for weak position measurements to yield actual particle positions.
Thereby, any set of data compatible with QM appears to corroborate standard dBBT: given (COR), weak velocity measurements yield standard dBBT's velocity field. It thus seems as if standard dBBT's empirical underdetermination has been overcome.
Such an apparent possibility of confirming standard dBBT would be remarkable. It crucially hinges, however, on the soundness of (COR). Why believe that it's true? We'll first refute a prima facie cogent argument for (COR). We'll then give a more general argument why (COR) is generically false. This will eventually be illustrated with a simple counterexample. Prima facie, (COR) looks like a plausible extrapolation of a strong measurement immediately after the weak interaction (i.e. at t = 0). This idea may be developed in three steps. First, (COR) indeed holds in the limit τ → 0+. Next, in a deterministic world, it would seem that

E(y | x(τ) = T_τ κ) = E(y | x(0) = κ),  (4.14)

where κ ∈ R denotes a position. By appeal to (C_0), this would then yield

E(y | x(τ) = T_τ κ) = κ,  (4.15)

as desired.
At first blush, this argument looks watertight. Its first step ensues from the standard rules of QM (see Equation 4.6). Its third step, too, seems innocuous: only a few lines earlier, we derived (C_0) from the standard QM formalism. Let's therefore throw a closer glance at the second step. It's convenient to cast it in terms of the probability densities associated with the expectation values:

P(y | x(τ) = T_τ κ) = P(y | x(0) = κ).  (4.16)

Prima facie, given determinism, this identity stands to reason: all else being equal, the probability of craving a biscuit around 5 pm, given our momentary glucose levels, isn't altered by conditioning on our glucose levels a few minutes earlier (provided that they evolve deterministically). Determinism ensures that those earlier physiological states (and only they) evolve into the physiological states considered initially. By the same token, one might think, the events {(x(τ), y) ∈ R×R : x(τ) = T_τ κ} and {(x(0), y) ∈ R×R : x(0) = κ} refer to the same events of our probability space (i.e. the same diachronically identical configurations, as it were, merely pointed to via (inessential) different time indices) and are therefore assigned the same probability measure.
In classical Statistical Mechanics, one may take this for granted. In a quantum context, entanglement complicates the situation: it compromises the ascription of probability measures to certain events. One must heed the time with respect to which the assigned probability measure is defined. This is the case with weak velocity measurements. Recall that in Wiseman's measurement protocol, the strong measurement is only performed at t = τ. This precludes defining the conditional probability

P(y | x(0) = κ) = P(y & x(0) = κ)/P(x(0) = κ).  (4.17)

That is, no strong measurement is performed - and no attendant "effective collapse" of the wave function occurs - at the earlier time (viz. at t = 0). As a result, at the time of the weak interaction (t = 0), the wave function of the pointer and that of the particles are entangled. That means, however, that we can't naïvely assign the event of any particular particle position at t = 0 an objective, individual probability measure;[9] that would require post-selection at t = 0. Only the entangled pointer-cum-particle system as a whole has a physically grounded, objective probability measure.
This follows from the fact that P(x(0) = κ) is obtained from the pointer-cum-particle system's reduced density matrix (i.e. by partially tracing out the pointer's degrees of freedom). But this transition from the density matrix of a pure state to the reduced density matrix of an "improper mixture" (d'Espagnat 2018, Chapter 7) lacks objective-physical justification (see e.g. Mittelstaedt 2004, Chapters 3-4). Contrast that with the situation of P(y & x(τ) = X_τ)/P(x(τ) = X_τ): this is well-defined via post-selection. That is, due to the "effective collapse" (see e.g. Dürr and Teufel 2009, Chapter 9.2) induced by the strong measurement at t = τ, the event x(τ) = X_τ can be assigned a well-defined probability measure. In d'Espagnat's terminology, we are dealing with a "proper mixture". In short: owing to the pointer's entanglement with the particle, determinism doesn't imply E(y | x(τ) = T_τκ) = E(y | x(0) = κ). The initially auspicious argument for (COR) therefore fails.
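The contrast can be illustrated numerically (our sketch, with qubit stand-ins for particle and pointer): tracing the pointer out of an entangled pure state yields a mixed reduced density matrix, an "improper mixture" whose probabilities are not grounded in any post-selected event:

```python
import numpy as np

# Particle and pointer modelled as qubits in the entangled state (|0>|0> + |1>|1>)/sqrt(2)
ket0, ket1 = np.eye(2)
psi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

rho = np.outer(psi, psi.conj())   # pure-state density matrix of the composite system

# Partial trace over the pointer: rho[i,j,k,l] -> sum_j rho[i,j,k,j]
rho_particle = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

purity_total = np.trace(rho @ rho).real             # 1.0: the composite state is pure
purity_particle = np.trace(rho_particle @ rho_particle).real  # 0.5: the reduced state is mixed
```

The reduced state comes out maximally mixed (the identity matrix divided by 2): its probabilities arise from the partial-trace operation alone, not from any collapse event, which is the formal counterpart of the point about improper mixtures above.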
From its failure, we also gain a wider-reaching insight: unless the strong measurement is actually performed at t = 0 (unlike in Wiseman's measurement protocol), the conditional probabilities P(y|x(0) = κ) (or, equivalently, their associated expectation values) aren't objectively defined, if one adopts their usual definition in terms of post-selection. Strictly speaking, the unrealised measurement renders P(y|x(0) = κ), thus defined, meaningless.10 No independent reasons have been given so far for believing that (COR) is true, though. (Conversely, the lack of independent reasons for standard dBBT (rather than any of its non-standard variants), especially in light of its empirical underdetermination, was our major motivation for examining weak velocities in the context of de Broglie-Bohmian theories.)

9 In this regard, one should bear in mind that, on the mainstream view of dBBT (espoused by DGZ), probabilistic statements about subsystems (construed in terms of typicality) should be derived from dBBT's axioms of §2 (cf. Teufel and Tumulka 2005, Chapters 4 and 9; Lazarovici and Reichert 2015; Lazarovici et al. 2018, Section 3). In the present context in particular, we can't simply assign the particles a probability measure (that of the reduced density matrix) per stipulation: we must deduce it from the probability measure of the composite particle-pointer system, using only the other axioms. The quantum operation of the partial trace, implementing the transition to the reduced density matrix, transcends those fundamental axioms (see e.g. Dürr et al. 2004, Section 6).

10 In this proviso, one might descry a possible loophole. Why not define the prerequisite probability measures P(y & x(0) = κ) and P(y|x(0) = κ) indirectly, via their respective later states? That is, instead of the direct definition via post-selection at t = 0, one might stipulate suitable probability measures (Definitions 4.18 and 4.19); the conditional probability densities then are definitionally identical with the weak value.

Consequently, counterexamples to (COR) abound, and are perfectly familiar: any non-standard variant of dBBT of the type of Equation 2.5 (i.e.
with non-vanishing, divergence-free vector field j). In them, the particle's trajectory generically crosses the weak screen at a point distinct from what the weak velocity measurements would make us believe. Figure 3 illustrates this. Nothing compels us, even if sympathetic to the overall de Broglie-Bohmian framework, to regard the outcome as truly representing the actual position of the particle at time t. The latter is simply unknown: the particle could have traversed any location within the support of the Gaussian wave function centred around the weak value. Still, the operationally defined velocity (obtained from averaging) wouldn't change: as long as the Born Rule and the Schrödinger Equation (the only prerequisites for deriving the result Equation 6.14) hold, its value remains the same. (In this sense, any guidance equation of the type of Equation 2.5 is compatible with Wiseman's operationally defined velocity.) Absent an independent argument for the correspondence between weakly measured and actual positions (i.e. (COR)), it remains unclear what, if anything, Wiseman's operational velocity 3.3 signifies ontologically.
By naïvely generalising C_0 to C_t, one neglects the relevance of time in the present setup: it matters both when the weak measurement interaction occurs and when one post-selects. If both happen at the same time, the weak position value indeed corresponds to the particle's actual position at time t = 0. If, however, some time τ elapses between interaction and post-selection, this is generically no longer the case.
It's instructive to rephrase this result: the assumption C_t, necessary for the correspondence of weak and actual velocities, is in fact equivalent, in virtue solely of the quantum mechanical formalism and the supposition of deterministic, differentiable particle trajectories, to standard dBBT. (First, suppose that C_t is true. Then, weak velocity measurements yield the actual particle velocities. Wiseman's operationally defined velocity 3.3 uniquely picks out a guidance equation: that of standard dBBT. Conversely, suppose standard dBBT to be true. A weak velocity measurement then discloses the actual particle velocities. Thus, C_t holds true.)

With the indirect definitions 4.18 and 4.19 in place, the calculation in 4.10 uniquely determines the shift operator to be that of standard dBBT; for this shift operator, and only for it, (COR) holds.

It may now look as though Definitions 4.18 and 4.19 are indeed the most natural alternatives to the (unavailable) direct definition of the conditional probabilities. Still, nothing invariably forces us to resort to any such alternative indirect definition: one may just decline to content oneself with anything but the bona fide direct ones. In light of the comments on entanglement, such an option is in fact quite plausible: one just ought to swallow the fact that the conditional probability fails to be definable.

If one opts for this stance, one will dismiss the operational velocity 3.3 as an arbitrary (nonfactual) stipulation, rather than a representation of an objective, physical feature (viz. the particles' real velocity). By the same token, the fact that it coincides with that of standard dBBT thereby becomes a definitional artefact, denuded of factual content.

Suppose, however, that one accepts the alternative indirect definitions 4.18 and 4.19. Then it would still be true that, by the standard laws of QM, only one shift operator is consistent with this definition: the shift operator of standard dBBT. That is, in order to employ these definitions to describe a deterministic world for which the standard QM formalism is empirically adequate, one must presuppose that standard dBBT (rather than one of its observationally equivalent alternatives) is true. Put differently: given the (operational) Definition 3.3 (and the empirical adequacy of QM), (COR) is true if and only if standard dBBT is true. With regard to Definitions 4.18 and 4.19, we can say: they can only be adequate if standard dBBT is true. That is, they commit us to the presumption of the latter. But that is precisely what renders them problematic stipulations in the present context (cf. Norton 2004).

Figure 3: The weak value is obtained from the distribution on the weak screen. When the velocity field is that of standard dBBT (j = 0), the actual position of the particle x(0) matches the weak value x_w. For an alternative guidance equation (j ≠ 0), it doesn't: the particle crosses the weak screen at a point x′(0) other than the weak value. This shows that, depending on which guidance equation one chooses, the weak value needn't yield the actual position of the particle at time 0.
In conclusion: we argued that a particle's weak velocity coincides with its actual velocity (provided one is willing to attribute deterministic, differentiable paths to the particles) if and only if standard dBBT is true. But this coincidence is a sine qua non for deploying weak velocity measurements in support of standard dBBT. To attempt to do so, absent independent arguments for the reliability of weak velocity measurements, would thus incur circularity.
This analysis permits us to evaluate two verdicts on the significance of weak velocity measurements for standard dBBT, found in the literature. Let's start with Dürr, Goldstein and Zanghì's claim that they enable genuine measurements.

Weak measurements as genuine measurements?
The foregoing analysis sheds light on a recent claim by Dürr, Goldstein, and Zanghì (2009). These authors (henceforth abbreviated as "DGZ") aver that Wiseman's measurement protocol for weak velocities allows "in a reasonable sense, a genuine measurement of velocity" in standard dBBT (ibid., p. 1025, DGZ's emphasis). Such a statement, we maintain, is misleading. DGZ themselves identify a condition as crucial for their claim. This identification, too, we deem a source of further potential confusion. The crucial, but in DGZ's account tacit, condition for weak velocity measurements to be reliable, as we saw in the previous sections, is (COR). But (COR) is equivalent to assuming the standard form of the guidance equation. This essential equivalence between (COR) and dBBT's standard guidance equation impinges upon the significance of weak measurements for dBBT: whether we regard weak velocity measurements as enabling genuine measurements of the particle's actual velocity is essentially equivalent to an antecedent commitment to standard dBBT. Pace DGZ, this curtails the significance of weak velocity measurements as genuine ones. Yet, albeit misplaced in the context of weak measurements, DGZ's (misleadingly) identified crucial condition might open up a potentially illuminating perspective on standard dBBT.
DGZ assert that weak velocity measurements, as realised by Wiseman's measurement protocol, constitute real measurements in standard dBBT (cf. Dürr et al. 2004, Section 3.7). What is more, in his authoritative review of dBBT, Goldstein (2017, Section 4) writes: "In fact, quite recently Kocsis et al. (2011) have used weak measurements to reconstruct the trajectories for single photons 'as they undergo two-slit interference, finding those predicted in the Bohm-de Broglie interpretation of quantum mechanics'" (cf. Dürr and Lazarovici 2018, p. 142 for a similar statement). DGZ are aware of the fact that such a claim needs an additional assumption; they (as we'll show) misidentify that "crucial condition" (Dürr et al. 2009, pp. 1026, 1030).
Before adverting to DGZ's declaration of weak velocity measurements as genuine, it's apposite to repudiate the claim that such weak velocity measurements have actually been performed, in accordance with dBBT's predictions.
Figure 2 displays the weak velocity measurements ascertained in Kocsis et al.'s double-slit experiment.11 Indeed, they qualitatively tally with the trajectories of standard dBBT (cf., for instance, Figure 5.7 in Holland 1995, p. 184). Still, nothing immediately follows from that regarding the status of standard dBBT (see also Flack and Hiley 2014; Flack and Hiley 2016; Bricmont 2016, p. 181).12

Now to DGZ's main claim, as we understand it: for a coherent application of weak velocity measurements within the Bohmian framework as reliable velocity measurements, an assumption concerning the disturbance of the actual velocities is needed. Only standard dBBT, so the story goes, has this feature. In turn, it appears that weak velocity measurements can constitute genuine measurements of the particle's actual velocities only in standard dBBT. DGZ's considerations seem to start from the reliability of weak velocity measurements; they are predicated on (COR). DGZ (correctly) state that only standard dBBT is consistent with that. As the "crucial condition" responsible for that result, they identify a characteristic feature of standard dBBT's velocity field.
11 The treatment of photons within field-theoretic extensions of dBBT, capable of dealing with photons (or bosons, more generally), is a delicate matter, outside the present paper's ambit. We refer the interested reader to e.g. Holland 1995, Chapter 11 and Dürr et al. 2012, Chapter 10 (also for further references).
12 Rather than the trajectories of individual photons, what such experiments reveal is at most a statistical, ensemble-level feature. This interpretation (as we saw in Equation 6.14) has a counterpart in weak velocity measurements of the electrons of the present setup: per se, the weak velocity measurements only allow experimental access to the gradient of the wave function's phase. (This view on weak velocity values remains neutral, though, vis-à-vis any interpretation of the wave function. In particular, it's not necessarily committed to a statistical/ensemble interpretation.)

(SPE) Whenever the particle-cum-pointer compound system has the form ψ(x) ⊗ ϕ(y − x), the particle's velocity field v (conceived of as a function of the compound system's wave function ψ ⊗ ϕ) is supposed to depend only on the particle's wave function ψ.

We'll dub this condition "separability of particle evolution" (SPE). It uniquely singles out standard dBBT (Dürr et al. 2009, Section 4).
DGZ's mathematical proof of this latter claim is beyond dispute. Their identification of (SPE) as a physically essential condition, however, is wrong-headed: (SPE) in fact plays no obvious role in the attempt to exploit weak velocity measurements for standard dBBT (see §3 and §4.1): nowhere is it invoked explicitly. Moreover, it remains elusive how (SPE) could enter that analysis: (SPE) is an exact equality, postulated to hold whenever the composite particle-pointer wave function is factorisable. By contrast, DGZ's decisive equations (viz. (21) and (22) in their paper) are only approximations, valid at t = τ. Their terms linear in τ don't take a factorisable form (nor do they vanish). Not even at t = 0 is the pointer-particle wave function factorisable. Hence, (SPE) doesn't seem to be applicable from the outset. To call (SPE) "crucial", understood as directly responsible, for the reliability of weak velocity measurements in dBBT muddies the waters: it's solely in virtue of (SPE)'s essential equivalence with standard dBBT that (SPE) is relevant at all. That (SPE) singles out standard dBBT stems from the (mathematical) form of the standard guidance equation: the latter is uniquely characterised by the factorisation of velocities at t = 0, as asserted by (SPE).
As a result, only because (COR) presupposes standard dBBT, and because the latter is essentially equivalent to (SPE) (recall our remark at the end of §4.1), is (SPE) "crucial" in the sense of being necessarily satisfied for (COR) to hold. In short: (COR), (SPE) and standard dBBT's guidance equation are essentially equivalent. That is:

(COR) ∧ (DIF) ∧ (DET) ⟺ dBBT's standard guidance equation,

where (DET) and (DIF) denote the assumptions of deterministic and differentiable particle trajectories, respectively.
For weak velocity measurements to reveal the particles' actual trajectories (assuming determinism and differentiability, that is), i.e. for weak velocity measurements to be reliable, (COR), not (SPE), is the crucial condition that must be satisfied: without it, the counterfactual C_t no longer holds (recall §4.1); the particle's later positions can't be inferred from the weak measurements. In particular, given (COR)'s essential equivalence with standard dBBT or (SPE), this means that if weak velocity measurements are reliable, (SPE) needn't be assumed separately: it's implied by (COR). We thus reject DGZ's identification of (SPE) as the crucial condition for the reliability of weak measurements. Pace DGZ, one might hence baulk at calling them genuine in a sufficiently robust sense. Unless independent reasons for (SPE), (COR) or standard dBBT are forthcoming, weak velocity measurements lack epistemic significance for gauging the status of dBBT. The analysis of weak measurements in a de Broglie-Bohmian framework doesn't rely on (SPE). DGZ are right, however, in observing that if standard dBBT is true, weak measurements are reliable (i.e. weak position values and actual position values coincide).
DGZ's purely mathematical result, the equivalence of (SPE) and standard dBBT, hints at an alluring possibility (completely independently of weak measurements): it might serve as a prima facie interesting avenue for justifying (or at least motivating) standard dBBT. Underlying (SPE) seems to be the hunch that for particle-pointer systems with separable (factorisable) quantum states, the particle is supposed to be guided exclusively by the particle's wave function, not by that of the pointer. More generally, due to (SPE), whenever a quantum system is prepared as separable, the dynamics for the particles of one subsystem doesn't depend on the quantum state of the other subsystem(s).13 As a desideratum, (SPE) implements the expectation that the statistical independence at the quantum level percolates to the level of the behaviour (i.e. dynamics) of the hidden variables: whenever the quantum states of a composite system A&B are independent, the dynamics of the particles constituting A shouldn't be affected by B's quantum state. One may deem this a plausible (albeit defeasible) heuristic principle for the construction of hidden-variable theories: it aligns the statistical independence of the known (empirically accessible) realm of the quantum formalism (for separable quantum states) and the independence of the unknown (empirically inaccessible) realm of the (putatively) more fundamental hidden-variable dynamics. A dynamics respecting this alignment, one might feel, "naturally" explains the statistical independence at the coarse-grained quantum level.
On the other hand, one may well query the status of (SPE). The separability of quantum states is arguably related to their individuation (see e.g. Howard 1985, 1989; Brown 2005, Appendix B3): for composite systems with separable quantum states, subsystems have distinct quantum states. But why deem the individuation of quantum states, usually construed in this context as encoding statistical, coarse-grained properties to which our empirical knowledge seems limited, relevant for a constraint on the (putatively more) fundamental particle dynamics? Even if the particle and the pointer possess distinct (individual) quantum states, why should it follow that the particle's dynamics should depend only on the particle's wave function?
What might seem to suggest this is that (SPE) encodes a form of locality. (Standard (Bell-)locality forbids action-at-a-distance. The kind of locality enshrined in (SPE) forbids that a particle's dynamics depend on the pointer's quantum state, even if the joint quantum state of the particle and the pointer is separable.) But standard non-locality is a manifest, distinctive feature of dBBT. The type of locality that (SPE) asserts doesn't restore standard locality. What, then, is it supposed to achieve? We leave the prospects of (SPE) as a potentially promising motivation for standard dBBT to future inquiry.
This section afforded two main lessons. 1. Standard dBBT is mathematically uniquely characterised by a factorisation condition on the velocity field. We argued that DGZ's identification of that condition as "crucial" for the reliability of weak measurements was misleading. 2. Weak velocities coincide with the particle's actual velocities, if and only if standard dBBT is true. It thus remains questionable what argument (if any) weak velocity measurements provide in support of standard Bohmian trajectories or any other Bohmian theory.

13 This is somewhat reminiscent of so-called Preparation Independence, a key assumption in the Pusey-Barrett-Rudolph Theorem (see e.g. Leifer 2014, Section 7; specifically for the theorem in the context of standard dBBT, see Drezet 2014). Roughly speaking, Preparation Independence asserts that in hidden-variable theories, the so-called "ontic states" (i.e. the states represented by the two systems' "hidden" variables) should be statistically independent, if their joint quantum state is separable. For hidden-variable theories, this looks like a natural desideratum: it expresses how the separable systems' independence at the quantum level (cf. e.g. Howard 1989, 1992) percolates to (i.e. constrains) the more fundamental level of the hidden variables (cf. Leifer 2014, Section 7.3). There exists a critical difference between Preparation Independence and (SPE): the former makes claims about the hidden-variable states (in the present case: the particles' positions) and their statistically independent distribution; (SPE), by contradistinction, makes a claim about their dynamics, viz. its independence of other subsystems' quantum states (see main text). Consequently, one ought to expect justifications for either to differ. In fact, Preparation Independence doesn't imply (SPE): all variants of dBBT respect the former (due to the Born Rule giving the distribution of the particles' actual positions, see e.g. Gao 2019), but only standard dBBT satisfies (SPE).
On their own, weak velocity measurements thus don't provide any empirical support for standard dBBT. What about non-empirical inferential support, though?

4.3 Non-empirical support for dBBT?
The main result of Wiseman's original paper can be read as a conditional claim: if one adopts his operationally defined velocity and assumes deterministic, differentiable particle trajectories, the particle dynamics is uniquely determined as that of standard dBBT; on this reading, Wiseman remains neutral vis-à-vis this claim's premises, i.e. whether they are plausibly satisfied (or not). Stated thus, Wiseman's stance is impeccable. More exciting, however, would be the prospect of learning something novel about the status of standard dBBT from weak measurements (granting certain background assumptions). We'll now examine such a stronger interpretation of Wiseman's result: as a non-empirical justification of standard dBBT.
We flesh out three possible variants of such an argument.14 The starting point of the envisioned reasoning will be two tenets, explicitly endorsed by Wiseman:

(1) One should construe the weak value in Wiseman's weak measurement protocol of §3.3 as the average velocity of a large ensemble of particles (Wiseman 2007, Section 3).
(2) Albeit not per se referring to individual particles, this statistical property provides a "justification for [standard dBBT's] law of motion [i.e. the standard guidance equation]" (ibid., p. 2).
According to tenet (1), the weak value, obtained in Wiseman's setup, by itself corresponds to a real property only of an ensemble of particles -rather than one naïvely ascribable to the individual particles: "Thus strictly the weak value [...] should be interpreted [...] only as the mean velocity in configuration space -this noise could be masking variations in the velocity between individual systems that have the same Bohmian configuration x at time t." (Wiseman 2007, p. 5).
One of the premises in the conditional claim is determinism. With that assumption in place, weak values within a de Broglie-Bohmian framework are plausibly interpreted as first and foremost statistical properties of ensembles, as asserted in (1): formally, weak values are (normalised) transition amplitudes (cf. Kastner 2017; pace Vaidman 1996). Hence, the usual interpretation of probability amplitudes within dBBT as statistical (ensemble) properties applies (see e.g. Holland 1995, Chapter 3.8; Bohm and Hiley 2006, Chapter 9.3).15

Tenet (2) purports that in virtue of this statistical (ensemble) property, dBBT's standard form "is preferred over all other on physical grounds" (Wiseman 2007, p. 12). That is, although other velocity fields generate the same (statistically-empirically accessible) mean velocity, we ought to believe that the standard velocity field is true, rather than any of its alternatives: for Wiseman, (2) serves as a non-empirical rule of inference,16 "justifying [dBBT's] foundations" (ibid., p. 12).
As Wiseman reiterates, no experiment can discriminate between dBBT's standard velocity field and alternative choices. How then is the envisaged non-empirical justification supposed to work? What undergirds (2)? Three strategies (intimated to some extent by Wiseman and his commentators) spring to mind: (A) some variant of operationalism, (B) simplicity and/or parsimony, and (C) some variant of inference to the best explanation.
(A) The first invokes some form of operationalism in the spirit of Bridgman 1927. In its crudest form, it demands that all theoretical quantities be operationalisable: there must exist suitable measurement instructions for them. Yet, operationalism "[...] is nowadays commonly regarded as an extreme and outmoded position" (Chang 2009, also for a compilation of the arguments against operationalism). We'll therefore not discuss it further.
Perhaps an attenuated form fares better -one according to which (ceteris paribus) it's merely desirable that theoretical quantities be operationalisable.Wiseman seems to cherish the desideratum that "the [Bohmian particle] dynamics are deterministic, and that the velocity-field of the [hidden variable, i.e. the particle positions] should be naïvely observable [...]".But what would buttress such a desideratum?In particular, why believe that a theory that satisfies it is more likely to be true than empirically equivalent rival theories that don't?
(B) A second strategy (expressly disavowed by Wiseman) might turn on simplicity. Wiseman's operational definition, on this line of thought, should be regarded as distinguished because particularly simple. Even if we set aside both Wiseman's concern that "simplicity is not a property that can be rigorously defined" (Wiseman 2007, p. 9) and the problematic assumption that simplicity is truth-conducive, an appeal to simplicity isn't promising: simplicity and the postulate that individual particle trajectories coincide with their statistical averages are unrelated. Although it may prima facie appear simple to choose the individual trajectories so as to coincide with their statistical averages, the precise sense of simplicity turns out to be elusive: neither the theory's qualitative nor its quantitative parsimony is affected by that choice. That is, neither are new or additional kinds/types of entities introduced into (or eliminated from) the theory's ontology, nor is the overall number of entities multiplied or reduced.
To appeal to parsimony would likewise be of no avail: neither in terms of quantitative (i.e. with respect to numbers of individual entities postulated) nor qualitative (i.e. with respect to numbers of types or kinds postulated) parsimony does such a postulate seem privileged.
(C) A third attempt to defend (2) might appeal to an Inference to the Best Explanation (IBE) (see e.g. Lipton 2003; Bartelborth 2012, Chapter 4): standard dBBT, on this view, provides the best explanation for the observational facts in Wiseman's protocol.
Again, let's grant that IBEs are generically justifiable (pace e.g. Van Fraassen 1980, Chapter 2; Van Fraassen 1989, Part II). Yet, in light of the foregoing comments on parsimony and simplicity, it's opaque in which sense standard dBBT could explain (or help us understand) the empirical phenomena any better than versions with non-standard velocity fields; both are equally capable of accommodating the empirical phenomena.
A variant of this appeal to an IBE,17 found in the literature, fixates on Wiseman's emphasis on the allegedly natural character of his proposal to operationally define velocities via weak values: "(Standard dBBT) delivers thus the most natural explanation of the experiments described" (Dürr and Lazarovici 2018, p. 145, our translation).
Three reasons militate also against this view. First, the intended notion of a natural explanation is, to our minds, vague. Hence, it's difficult to fathom its argumentative force. At best, it seems an aesthetic criterion. As such, its suitability for assessing theories is suspect (cf. Ivanova 2017, 2020; Hossenfelder 2018).
Secondly, in light of the highly unnatural consequences of the same reasoning in other contexts, one may well debate whether Wiseman's operationally defined velocity is indeed natural after all. Aharonov and Rohrlich (2005, p. 223), presumably against the authors' intentions, summarise the generic "unnaturalness" of weak values: "weak values offer intuition about a quantum world that is freer than we imagined - a world in which particles travel faster than light, carry unbounded spin, and have negative kinetic energy."

Thirdly, and quite generally, in §4.1 and §4.2 we have seen that in the present case the allegedly natural explanation would at any rate be deceitful: one mustn't naïvely take it for granted that weak values reveal the actual particle positions. Leavens (2005) draws attention to the fact that under certain experimental circumstances "[...] there is no possibility of the weak value [...] reliably corresponding in general, even on average, to a successfully post-selected particle being found near (the weak value) at time t = 0 when the impulsive weak position measurement begins and being found near (the post-selected value) an instant after it ends" (p. 477).
The perils of naïve (i.e. literal) realism about weak position values are drastically demonstrated by the so-called Three-Box Paradox (Aharonov and Vaidman 1991; Aharonov and Rohrlich 2005, Chapter 16.5; Maroney 2017). Imagine a particle and three boxes, labelled A, B and C. Let the particle's initial state be

|ψ_in⟩ = (|A⟩ + |B⟩ + |C⟩)/√3,

where |A⟩ denotes the state in which the particle is in box A, and similarly for |B⟩ and |C⟩. For its final state, on which we'll post-select, choose

|ψ_fin⟩ = (|A⟩ + |B⟩ - |C⟩)/√3.

Via the definition of weak values (see 6.2), one then obtains the resulting weak values for the projectors onto state i ∈ {A, B, C}, P_i := |i⟩⟨i|:

(P_A)_w = 1, (P_B)_w = 1, (P_C)_w = -1.

If one were to believe that weak values invariably reveal the real positions of particles, one would have to conclude that box C contains -1 particles! Within the ontology of dBBT (in any of its variants), this is an absurd conclusion: particles in dBBT either occupy a position or they don't; the respective position projectors take values only in {0, 1}.
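These weak values can be checked numerically in a few lines (our sketch; `weak_value` implements the textbook formula A_w = ⟨ψ_fin|Â|ψ_in⟩/⟨ψ_fin|ψ_in⟩ for pre- and post-selected states):

```python
import numpy as np

# Orthonormal box states |A>, |B>, |C> as the standard basis of C^3
A, B, C = np.eye(3)

psi_in = (A + B + C) / np.sqrt(3)    # pre-selected (initial) state
psi_fin = (A + B - C) / np.sqrt(3)   # post-selected (final) state

def weak_value(op, pre, post):
    """Weak value <post|op|pre> / <post|pre> (real here; complex in general)."""
    return (post.conj() @ op @ pre) / (post.conj() @ pre)

projectors = {label: np.outer(v, v) for label, v in zip("ABC", (A, B, C))}
weak = {label: weak_value(P, psi_in, psi_fin) for label, P in projectors.items()}
# A -> 1, B -> 1, C -> -1: taken literally, box C "contains" -1 particles
```

Note that the three weak values sum to 1, as they must: the projectors sum to the identity, and weak values are linear in the operator.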
Consequently, it's imperative that adherents of dBBT be wary of interpreting weak values as real position values without qualification.Our analyses in §4.1 and §4.2 underscore this: the reliability of weak position (or velocity) measurements is a non-trivial (and generically false) assumption.
In conclusion, our hopes were dashed that the velocity measurement in Wiseman's protocol supports dBBT in any robust, non-empirical sense.Neither the alleged merits of operationalisability per se nor considerations of simplicity or parsimony warrant it.An IBE proved implausible.Unqualified realism about weak position values inevitably conflicts with dBBT's default ontology.
We are thus left with an appreciably weaker position, one close to Bricmont's (2016, p. 136): "[Weak velocity measurements via Wiseman's protocol] (are) not meant to 'prove' that the de-Broglie-Bohm theory is correct', because other theories will make the same predictions, but the result is nevertheless suggestive, because the predictions made here by the de Broglie-Bohm theory is [sic] very natural within that theory [...]." On the understanding that suggestiveness and "naturalness" possess scant epistemic, or even non-subjective, import, we concur. With such a verdict, however, one has relinquished the initial hope that weak measurements per se have a fundamental bearing on whether standard dBBT or one of its alternative versions is true.

Conclusion
Let's recapitulate the findings of this paper.
We started from the empirical underdetermination of dBBT's guidance equation. It poses an impediment to insouciant realism about the particle trajectories postulated by standard dBBT. We scrutinised whether Wiseman's measurement protocol for weak velocities is able to remedy this underdetermination by empirical or non-empirical means. Our result was negative. We elaborated that the reliability of weak velocities (the fact that they coincide with the particles' real velocities) presupposes standard dBBT. For non-standard versions of dBBT, this presumption is generically false. Hence, weak velocity measurements don't qualify as evidence or confirmation in favour of the velocity field postulated by standard dBBT. Weak velocity measurements thus don't allow for genuine measurements in any robust sense (at least given present knowledge). Finally, we critiqued an interpretation of Wiseman's measurement protocol as a non-empirical argument for standard dBBT in terms of alleged theoretical virtues. Even if one grants the questionable appeal to some popular virtues, it remains equivocal whether, in the context of weak velocity measurements, standard dBBT actually exemplifies them. Most importantly, the Three-Box Paradox demonstrated the dangers of any naïve realism about weak position values.
In conclusion, our paper has, we hope, elucidated the status of weak velocity measurements in two regards. On the one hand, they are indubitably an interesting application of QM in a novel experimental regime (viz. that of weak pointer-system couplings). They allow us to empirically probe the gradient of the system's wave function, irrespective of any particular interpretation of the quantum formalism. On the other hand, with respect to the significance of weak velocity measurements, we proffered a deflationary account: weak velocity measurements per se shed no light on the status of standard dBBT. In particular, on their own, they don't provide any convincing support, empirical or non-empirical, for standard dBBT over any of its alternative versions.
6 Appendix: Weak measurements and weak values
Methods of weak measurement have opened up a flourishing new field of theoretical and experimental developments (see e.g. Aharonov and Rohrlich 2005; Tamir and Cohen 2013; Svensson 2013; Dressel et al. 2014). Broadly speaking, weak measurements generalise strong measurements in that the final states of measured systems need no longer be eigenstates. In this appendix, we'll first provide a concise overview of weak measurements (§6.1). In particular, we'll expound how they differ from the more familiar strong ones. In §6.2, we'll introduce the notion of a weak value.

Strong versus weak
Strong or ideal measurements are closely related to the conventional interpretation of the Born Rule. Consider a quantum system S and a measuring device M with Hilbert spaces H_S and H_M, respectively. The Hilbert space of the total system is H = H_S ⊗ H_M. Let the system be in a normalized state |ψ⟩ before the measurement. We are interested in measuring an observable A represented by the self-adjoint operator Â, which has a complete and orthonormal eigenbasis {|c_i⟩}. In that basis the system's state reads |ψ⟩ = Σ_i α_i |c_i⟩ for some coefficients α_i. Furthermore, we assume for simplicity that the eigenstates are non-degenerate, i.e. have distinct eigenvalues. The only possible outcome of a strong measurement on this system is one of the eigenstates |c_i⟩. The corresponding probability to observe |c_i⟩ is p_i = |α_i|² = |⟨c_i|ψ⟩|². (6.1) After the measurement is performed, the system ends up in the final state |c_i⟩. This procedure is known as the von Neumann measurement (see, e.g., the reprint Von Neumann 2018).
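The Born-rule probabilities of Equation 6.1 are easy to verify numerically. The following minimal Python sketch (our illustration, with an arbitrarily chosen two-level state, not an example from the literature) computes them:

```python
# Born-rule probabilities p_i = |<c_i|psi>|^2 = |alpha_i|^2 (Eq. 6.1).
# Illustrative two-level state |psi> = 0.6|c_0> + 0.8i|c_1>.
alpha = [0.6, 0.8j]  # expansion coefficients in the eigenbasis {|c_i>}

probs = [abs(a) ** 2 for a in alpha]
print(probs)  # approximately [0.36, 0.64]

# Normalization of |psi> guarantees the probabilities sum to one.
assert abs(sum(probs) - 1.0) < 1e-12
```

After a strong measurement, exactly one of these outcomes obtains, with the corresponding |c_i⟩ as the post-measurement state.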
In a weak measurement, the interaction of system and measurement device is modelled quantum mechanically, with the pointer device as an ancillary system on which a strong measurement is performed after the interaction. That is, assume that system and pointer interact via a von Neumann Hamiltonian Ĥ = g(t) Â ⊗ P̂_M, (6.2) where P̂_M is conjugate to the pointer variable X̂_M, i.e. [X̂_M, P̂_M] = iℏ. As before, Â is the quantum operator of the observable to be measured, and g(t) a coupling function that is non-zero only during the measurement interaction. Recall that the momentum operator acts as a shift operator (e^(−(i/ℏ) a P̂_M) φ(x) = φ(x − a)). For a qubit in the initial state |ψ⟩ = α|0⟩ + β|1⟩, with eigenvalues +1 and −1 of Â, and a pointer prepared in a Gaussian state φ(x), the interaction thus shifts the pointer according to the eigenvalues: |Ψ⟩ = α |0⟩ φ(x − 1) + β |1⟩ φ(x + 1). (6.5) If the Gaussian peaks are narrowly localized and (to a good approximation) non-overlapping, one can infer the state of the system from the pointer measurement. However, for weak measurements the Gaussians are assumed to spread widely over the pointer variable. The measurement outcome of the pointer is therefore consistent with the system being in states that are not eigenstates of the operator. This is read off from Equation 6.5: if the pointer ends up at position 0, for example, we recover the initial state |ψ⟩ up to an overall factor, since the two Gaussian amplitudes reduce to the same value.
For arbitrary systems with a finite-dimensional Hilbert space, the interaction generalises to |Ψ⟩ = Σ_i α_i |c_i⟩ φ(x − a_i), (6.6) where a_i are the eigenvalues of the measurement operator Â. For simplicity, the free evolution Hamiltonian of system and pointer has been omitted; it would only give rise to additional total phases. So far, the measurement scheme is standard; in Equation 6.6 no weakness is involved. It becomes a weak measurement if the initial state of the pointer variable X_M has a large spread σ. In that case, the result of a (strong) measurement on the pointer does not project the system onto one of its eigenstates.
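The difference between the strong and the weak regime can be made vivid by evaluating the pointer distribution implied by Equation 6.5. The following sketch (our illustration; the widths σ = 0.1 and σ = 10 are arbitrary choices) contrasts a narrow pointer, which resolves the two eigenvalues, with a broad one, which doesn't:

```python
import math

# Pointer probability density after the interaction of Eq. (6.5):
# |Psi(x)|^2 = |alpha|^2 G(x-1) + |beta|^2 G(x+1), with eigenvalues +1, -1.
# (Cross terms vanish because <0|1> = 0.) G is a normalized Gaussian of
# width sigma; we take |alpha|^2 = |beta|^2 = 1/2 for illustration.
def pointer_density(x, sigma, p_plus=0.5):
    g = lambda m: math.exp(-(x - m) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
    return p_plus * g(1.0) + (1 - p_plus) * g(-1.0)

# Strong regime: sigma much smaller than the eigenvalue gap,
# so the density at an eigenvalue dwarfs the density between the peaks.
strong = pointer_density(1.0, sigma=0.1) > 100 * pointer_density(0.0, sigma=0.1)
print(strong)  # True

# Weak regime: sigma much larger than the gap, so x = 0 is essentially
# as likely as x = 1; the pointer no longer reveals an eigenstate.
weak_ratio = pointer_density(0.0, sigma=10.0) / pointer_density(1.0, sigma=10.0)
print(weak_ratio)  # close to 1
```

The broad-σ case is precisely the regime in which a pointer reading is compatible with system states that are not eigenstates of Â.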

Post-selection and the two-vector formalism
We may now introduce the notion of a weak value. A weak value of an observable Â is the result of an effective interaction with the system in the limit of weak coupling, followed by a post-selection. Coming back to the simple case of the qubit: if the state in Equation 6.5 is post-selected on |0⟩, for instance, the pointer ends up in a Gaussian lump centered around 1. Similarly, conditioned on |1⟩, the pointer is centered around −1, as one would expect from a strong measurement as well. Depending on the choice of the post-selected state, however, the pointer states are "reshuffled" and can be concentrated around mean values that lie far away from the eigenvalues of the observable Â. In the limit of large standard deviation σ, the distribution is again Gaussian, though. Importantly, the measurements on the pointer, and the ones used to find a post-selected state, are strong measurements in the sense defined above. For arbitrary post-selection on a final state |ψ_f⟩, the (unnormalized) pointer state after post-selection is ⟨ψ_f |Ψ⟩ ≈ ⟨ψ_f |ψ_i⟩ φ(x − a_w), (6.11) where a_w = ⟨ψ_f |Â|ψ_i⟩ / ⟨ψ_f |ψ_i⟩, (6.12) the salient quantity, is the weak value of the observable operator Â. That is, after many runs, the pointer's average position is a_w. In other words, |φ⟩ experiences the shift φ(x) → φ(x − a_w). Note that the probability to obtain |ψ_f⟩ in the post-selection is p = |⟨ψ_f |ψ_i⟩|². If the initial and final states of S are nearly orthogonal, the measurement may require many runs to find a_w, as the post-selected state occurs only rarely. If there is time evolution of the target system between the weak interaction and the final measurement of ⟨ψ_f |, the expression includes the unitary evolution operator Û: a_w = ⟨ψ_f |Û Â|ψ_i⟩ / ⟨ψ_f |Û|ψ_i⟩. For a derivation, we refer the interested reader to the literature.
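Equation 6.12 is straightforward to evaluate numerically. The sketch below (our illustration; the states and the observable, a Pauli-z matrix, are arbitrary choices) shows both that post-selection on an eigenstate recovers the eigenvalue and that a nearly orthogonal post-selection produces an anomalous weak value far outside the spectrum:

```python
# Weak value a_w = <psi_f|A|psi_i> / <psi_f|psi_i> (Eq. 6.12) for a qubit.
# Illustrative observable with eigenvalues +1 and -1 (Pauli-z).

def weak_value(psi_i, psi_f, A):
    """psi_i, psi_f: 2-component vectors; A: 2x2 matrix as nested lists."""
    A_psi = [sum(A[r][c] * psi_i[c] for c in range(2)) for r in range(2)]
    num = sum(psi_f[r].conjugate() * A_psi[r] for r in range(2))
    den = sum(psi_f[r].conjugate() * psi_i[r] for r in range(2))
    return num / den

A = [[1, 0], [0, -1]]

# Post-selecting on an eigenstate of A reproduces the eigenvalue:
ev = weak_value([0.6, 0.8], [1, 0], A)
print(ev)  # 1.0

# A post-selected state nearly orthogonal to psi_i yields an "anomalous"
# weak value far outside the spectrum [-1, +1] of A:
s = 2 ** -0.5
aw = weak_value([s, s], [1.0, -0.98], A)
print(aw)  # approximately 99
```

The price of such anomalous values is the small post-selection probability p = |⟨ψ_f|ψ_i⟩|², i.e. many runs are needed to accumulate statistics.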

Weak velocity and the gradient of the phase
We can manipulate the definition of the operationally defined weak velocity to give us the velocity of the guidance equation of standard dBBT. That is, for the unitary evolution Û(τ) = e^(−iĤτ/ℏ) during time τ (with the non-relativistic Hamiltonian of a massive particle Ĥ = p̂²/2m + V(x̂)), the expression for Wiseman's operationally defined velocity reduces to (Wiseman, 2007, p. 5) v(x, t) = lim_{τ→0} (x − x_w(t))/τ = (1/m) ∇S(x, t), where ∇S(x, t) is the gradient of the phase of the wave function ψ(x, t) = |ψ(x, t)| e^(iS(x,t)/ℏ).
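The relation v = ∇S/m can be checked numerically via the equivalent expression v = (ℏ/m) Im[ψ′(x)/ψ(x)]. The following sketch (our illustration, in units ℏ = m = 1 and with an arbitrary plane-wave state) compares a finite-difference evaluation against the exact value ℏk/m:

```python
import cmath

# Bohmian velocity v = (1/m) dS/dx = (hbar/m) * Im[psi'(x)/psi(x)],
# checked on a plane wave psi(x) = exp(i k x), for which v = hbar*k/m.
hbar = m = 1.0  # illustrative units
k = 2.5         # illustrative wave number

psi = lambda x: cmath.exp(1j * k * x)

def bohm_velocity(x, h=1e-6):
    dpsi = (psi(x + h) - psi(x - h)) / (2 * h)  # central difference
    return (hbar / m) * (dpsi / psi(x)).imag

error = abs(bohm_velocity(0.3) - hbar * k / m)
print(error < 1e-6)  # True: numerical and exact velocity agree
```

For a plane wave S(x) = ℏkx, so the velocity field is constant, as the check confirms; for wave functions with spatially varying phase, the same formula recovers the position-dependent guidance field of standard dBBT.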

Figure 1 :
Figure 1: A particle follows different trajectories corresponding to different, non-standard guidance equations. (a) The familiar wiggly deterministic trajectories that lead to the interference pattern in a double-slit experiment, determined by the standard guidance equation. (b) Alternative trajectories obtained from adding a divergence-free vector field j := 1/(x² + y²)

Figure 2 :
Figure 2: A weak velocity measurement for photons allows the reconstruction of trajectories qualitatively identical to those of particles in standard dBBT. Particle trajectories in a double-slit experiment, performed by Kocsis et al. 2011.

Figure 3 :
Figure 3: The weak measurement procedure for a given post-selected state x(τ) = X_τ. The weak value is obtained from the distribution on the weak screen. When the velocity field is that of standard dBBT (j = 0), the actual position of the particle x(0) matches the weak value x_w. For an alternative guidance equation (j ≠ 0), it doesn't: the particle crosses the weak screen at a point x′(0) other than the weak value. This shows that, depending on which guidance equation one chooses, the weak value needn't yield the actual position of the particle at time 0.
Flack and Hiley 2014 and Flack and Hiley 2016 have argued that Kocsis et al.'s experiments measure mean momentum flow lines.