True precision limits in quantum metrology

We show that quantification of the performance of quantum-enhanced measurement schemes based on the concept of quantum Fisher information yields results asymptotically equivalent to those of the rigorous Bayesian approach, provided generic uncorrelated noise is present in the setup. At the same time, we show that for the problem of decoherence-free phase estimation this equivalence breaks down, and the achievable estimation uncertainty calculated within the Bayesian approach is larger by a factor of $\pi$ than that predicted by the QFI, even in the large prior knowledge (small parameter fluctuation) regime, where the QFI is conventionally regarded as a reliable figure of merit. We conjecture that an analogous discrepancy is present in an arbitrary decoherence-free unitary parameter estimation scheme and propose a general formula for the asymptotically achievable precision limit. We also discuss protocols utilizing states with an indefinite number of particles and show that within the Bayesian approach it is legitimate to replace the number of particles with the mean number of particles in the formulas for the asymptotic precision, which as a consequence provides another argument that proposals based on the properties of the QFI of indefinite-particle-number states leading to sub-Heisenberg precisions are not practically feasible.


Introduction
The capability of performing precise measurements is a cornerstone of modern physics. Unlike classical physics, quantum mechanics provides insight into the fundamental limits on the achievable measurement precision that cannot be beaten, irrespective of the extent of any future improvements in measurement technology. The paradigmatic example is that of optical phase measurement. Within the quantum optical framework the phase of a given state of light can only be defined up to a precision that scales as $1/N$, where N is the characteristic number of photons (proportional to the mean energy) of the state [1,2]. This fact has profound implications for the performance of any metrological scheme based on optical interferometry, where the difference of optical phase delays in the respective arms of an interferometer is being sensed. The achievable phase difference estimation precision is bounded by $\Delta\varphi \geqslant 1/N$, which is referred to as the Heisenberg limit [3][4][5], as it may be informally viewed as a version of the Heisenberg uncertainty relation adapted to the phase-photon-number case. The presence of decoherence, however, which may be due to noise or experimental imperfections, typically prevents quantum-enhanced measurement schemes from reaching the Heisenberg scaling, and it may be demonstrated that for generic uncorrelated noise processes, classically scaling bounds $\Delta\varphi \geqslant \mathrm{const}/\sqrt{N}$ hold, limiting the quantum enhancement to a constant factor precision improvement [6][7][8]. Many of the bounds derived in the field of quantum metrology, including the ones mentioned above, are applications of the celebrated quantum Cramér-Rao (C-R) bound [9,10], which is based on calculation of the quantum Fisher information (QFI). The quantum C-R bound, however, may not be saturable in general; hence, having derived the bounds, it is highly relevant to ask whether there are explicit quantum estimation schemes that lead precisely to the minimal uncertainty predicted by the bounds.
This issue was particularly important in the recent discussion on the possibility of reaching a sub-Heisenberg precision scaling, $\Delta\varphi \sim 1/N^{\alpha}$ with $\alpha > 1$, where C-R-based bounds apparently indicated such a possibility in certain settings involving indefinite-particle-number states [11][12][13], while at the same time it has been shown that the corresponding estimation scheme would require prior knowledge of the order of the estimation precision itself, limiting the practical usefulness of the proposals [2].
The main goal of the present paper is to systematically investigate the problem of saturability of the quantum C-R-based bounds discussed in the quantum metrology literature. For this purpose we take the Bayesian approach to quantum estimation problems and make a connection between the predictions of the Bayesian approach and those of the C-R bounds. Since the solution to the Bayesian quantum estimation problem carries within itself an explicit description of the estimation protocol reaching the optimal precision, it leaves no doubt on the issue of saturability. Admittedly, the Bayesian approach involves some degree of arbitrariness in defining the prior probability distribution describing the initial knowledge of the parameter value. However, for sufficiently regular priors, their exact form is not expected to affect the results valid in the asymptotic regime of large resources, $N \to \infty$, and hence one can draw conclusions on the asymptotic scaling which are independent of the form of the prior distribution.
Although C-R-based approaches dominate the quantum metrology literature, there are also notable examples of successful applications of the Bayesian paradigm. For example, it has been demonstrated within the rigorous Bayesian estimation framework that, assuming complete prior ignorance of the value of the estimated phase, one can at best reach the $\Delta\varphi = \pi/N$ asymptotic precision in decoherence-free phase estimation [14], which reveals a discrepancy by a factor of $\pi$ between this result and the C-R-based Heisenberg bound. On the other hand, if losses are taken into account, the Bayesian approach [15] yields the same asymptotic precision limit as is predicted by the C-R-based bounds [7,8].
In this paper we put these observations into a wider context. We prove that in decoherence-free phase estimation the asymptotic $\pi$ factor discrepancy is not due to a particular choice of prior distribution in the Bayesian setting, but holds also for arbitrarily narrow regular priors. We demonstrate this rigorously for Gaussian priors and support the conclusions with numerical evidence obtained for other priors as well. Going beyond the phase estimation scheme, we also conjecture a general formula for the optimal precision of arbitrary decoherence-free unitary parameter estimation that is expected to be asymptotically saturable. More importantly, we show that in the presence of generic uncorrelated noise, which yields the $\mathrm{const}/\sqrt{N}$ precision scaling, the C-R bounds coincide with the precisions achievable by the asymptotic Bayesian strategies and thus may be treated with full confidence. We finally prove that the above conclusions apply also to metrological models involving indefinite numbers of particles, where the N appearing in the formulas for the asymptotic precision may be confidently replaced with the mean number of particles $\langle N \rangle$, providing an alternative argument against the feasibility of sub-Heisenberg estimation procedures. This paper is organized as follows. In section 2 we introduce the local estimation approach based on the application of the C-R bound and the use of the QFI. In section 3 we describe Bayesian procedures which provide an alternative way of defining precision. Section 4 contains results about the asymptotic equivalence of the two approaches in the presence of uncorrelated decoherence, whereas in section 5 we discuss the differences that arise between them in the decoherence-free case. Section 6 contains an illustrative example showing that in some cases, such as in the presence of a correlated dephasing process, one cannot expect asymptotic irrelevance of the Bayesian prior and hence cannot compare the C-R and the Bayesian approaches in a meaningful way.
Section 7 contains a generalization of the results obtained to states with indefinite numbers of particles. Section 8 summarizes the paper.

Figure 1. The basic scheme of quantum metrology. The N-particle state $|\psi_N\rangle$ is sent through (a) a general quantum channel $\Lambda_\varphi^N$ and (b) N parallel quantum channels $\Lambda_\varphi^{\otimes N}$, inscribing the parameter value $\varphi$ as well as causing decoherence independently on each of the probes, resulting in the output state $\rho_\varphi^N$. Making the measurement $\Pi_x$ on the output state allows one to construct an estimate $\tilde\varphi(x)$ based on the measurement result x.

The Cramér-Rao bound approach
For the purpose of this paper we consider a general estimation scheme, depicted in figure 1, which is relevant for optical interferometry as well as for more general quantum metrological protocols. An N-particle probe state $|\psi_N\rangle$ undergoes an evolution described by the action of a quantum channel $\Lambda_\varphi^N$, resulting in the output state $\rho_\varphi^N = \Lambda_\varphi^N(|\psi_N\rangle\langle\psi_N|)$. A general quantum measurement $\Pi_x$ is performed on the output state, and on the basis of the measurement result x one estimates the value of the unknown parameter using an estimator function $\tilde\varphi(x)$. The main goal in quantum estimation theory is to find the minimal achievable estimation error $\Delta\varphi$ optimized over the choice of probe states, measurements and estimators. In general this is a very difficult task, and therefore lower bounds on the estimation error are often considered instead of exact precision limits. The effort to derive new useful quantum metrological lower bounds is an active field of research; see e.g. [16][17][18][19]. Still, the bound most commonly used in the literature is the long-serving quantum C-R bound [9,20]:

$\Delta\varphi \geqslant \frac{1}{\sqrt{k F_\varphi}}, \qquad F_\varphi = \mathrm{Tr}\left(\rho_\varphi^N L_\varphi^2\right), \quad (1)$

where k is the number of independent repetitions of the experiment and $F_\varphi$ is the QFI, while $L_\varphi$ is the symmetric logarithmic derivative (SLD) operator, defined implicitly by the equation $\frac{\mathrm{d}\rho_\varphi^N}{\mathrm{d}\varphi} = \frac{1}{2}\left(L_\varphi \rho_\varphi^N + \rho_\varphi^N L_\varphi\right)$. It is worth emphasizing that this bound is saturable under some general conditions, which will be stated at the end of this section. Deriving fundamental precision bounds using the QFI amounts to an optimization over input probe states $|\psi_N\rangle$ that yield the maximal $F_\varphi$. This task is relatively simple in the case of a decoherence-free unitary parameter encoding, $U_\varphi = \mathrm{e}^{-\mathrm{i}H\varphi}$, as the QFI may be related directly to the variance of the evolution generator H. In this case the optimal C-R bound takes the form [5]:

$\Delta\varphi \geqslant \frac{1}{\sqrt{k}\,(\lambda_+ - \lambda_-)}, \quad (2)$

with $\lambda_+$ and $\lambda_-$ being respectively the maximal and the minimal eigenvalues of H, while the optimal input state is the equally weighted superposition of the corresponding eigenstates, $|\psi\rangle = (|\lambda_+\rangle + |\lambda_-\rangle)/\sqrt{2}$.
In the special case of optical interferometry, where each photon is represented by a two-level system with levels corresponding to the photon traveling in one arm of the interferometer or the other, $H = \frac{1}{2}\sum_{i=1}^{N}\sigma_z^{(i)}$ and we recover the previously mentioned Heisenberg bound $\Delta\varphi \geqslant 1/N$, whereas the optimal input probe state is the so-called N00N state, $(|N,0\rangle + |0,N\rangle)/\sqrt{2}$, where the last form of the state is written in the mode occupation basis. This result is usually contrasted with the precision achievable with uncorrelated input probes $|\psi_N\rangle = |\psi\rangle^{\otimes N}$, for which the maximal QFI scales linearly with N, $F = N$, and hence bounds the achievable precision via a $1/\sqrt{N}$ scaling formula, characteristic of classical estimation problems, where N is the number of independent and identically distributed (i.i.d.) observations.
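For a pure state evolving under $\mathrm{e}^{-\mathrm{i}H\varphi}$ the QFI equals four times the variance of the generator H, which makes the $N^2$ versus $N$ comparison above easy to verify numerically. A minimal sketch (the function name `qfi_pure` is ours, not from the paper):

```python
import math
import numpy as np

def qfi_pure(psi, H):
    """QFI of a pure state evolving under U = exp(-i H phi): F = 4 Var_psi(H)."""
    h = (psi.conj() @ H @ psi).real
    h2 = (psi.conj() @ H @ H @ psi).real
    return 4 * (h2 - h**2)

N = 8
# Generator H = (1/2) sum_i sigma_z^(i), written in the two-mode occupation
# basis |n, N-n>, where it acts as H|n, N-n> = (n - N/2)|n, N-n>.
H = np.diag([n - N / 2 for n in range(N + 1)])

# N00N state (|N,0> + |0,N>)/sqrt(2)
noon = np.zeros(N + 1)
noon[0] = noon[N] = 1 / np.sqrt(2)
# Uncorrelated probes |+>^{otimes N}: binomial amplitudes in this basis
prod = np.array([math.sqrt(math.comb(N, n)) for n in range(N + 1)]) / 2 ** (N / 2)

print(qfi_pure(noon, H))  # N^2 = 64 -> Heisenberg scaling
print(qfi_pure(prod, H))  # N   = 8  -> classical 1/sqrt(N) scaling
```

The N00N state maximizes the variance of H by superposing its extremal eigenstates, in line with bound (2).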
The situation is much more involved when decoherence is taken into account. Even though there are various methods that allow one to tackle the problem numerically with reasonable efficiency [21][22][23][24][25], any numerical approach breaks down in the asymptotic regime of large N. Fortunately, in recent years powerful analytical methods have been developed that allow one to find the maximal achievable QFI in the regime of large N [7,8,26]. These techniques allowed the derivation of analytical precision bounds for a number of important models in quantum metrology, including ones for lossy optical interferometry and atomic interferometry in the presence of dephasing, setting useful benchmarks for the whole field of quantum-enhanced metrology. Of particular interest are uncorrelated noise models, where $\Lambda_\varphi^N = \Lambda_\varphi^{\otimes N}$, as in figure 1(b). In this case it can be shown [8,27-29] that generically the asymptotic scaling of the QFI is at most linear, $F_N \xrightarrow{N\to\infty} \alpha N$, and hence any quantum-enhanced benefits resulting from the use of entangled states are bounded by a constant factor gain over 'classical' protocols which utilize uncorrelated probes:

$\Delta\varphi \geqslant \frac{1}{\sqrt{k\,\alpha N}}. \quad (3)$

The predictive power of the QFI bounds (2) and (3) depends crucially on how tight they are and whether they can in principle be saturated. Since the bounds are obtained by maximization of the QFI over input states, this translates to the question of whether the QFI is indeed a proper measure quantifying the performance of quantum-enhanced measurement protocols.
In principle the C-R bound (1) can be saturated in the limit of many independent experiments, $k \to \infty$, by using the maximum likelihood estimator and performing the measurements in the eigenbasis of $L_\varphi$ [9,20,30]. The practical implications of this statement are far from obvious, however. The QFI is a point estimation concept that depends only on $\rho_\varphi^N$ and $\mathrm{d}\rho_\varphi^N/\mathrm{d}\varphi$, i.e. the local properties of the output state with respect to the parameter at a given parameter value $\varphi$. Saturating the C-R bound may therefore require unrealistically good prior knowledge of the value of the estimated parameter. This aspect is most pronounced when analyzing the behavior of phase estimation using the N00N states, which are invariant under $2\pi/N$ phase shifts and hence require the prior knowledge of the parameter value to be of the order of $1/N$ as well. Additionally, since $L_\varphi$ in general depends on $\varphi$, so may the optimal measurement, and again significant prior knowledge may be required to perform the optimal measurement. Last but not least, in order to quantify the performance in terms of the total resources consumed, i.e. $N_{\mathrm{tot}} = kN$, one needs to know the behavior of the number of repetitions k required to saturate the C-R bound as $N_{\mathrm{tot}}$ increases, which is nontrivial and in general does not lead to analytical formulas. Specifically, to claim the Heisenberg limit in terms of $N_{\mathrm{tot}}$, k must not grow indefinitely with $N_{\mathrm{tot}}$ [31].
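The many-repetition saturation mechanism can be illustrated with a deliberately simple toy model (our own construction, not the paper's numerics): a single-qubit phase probe, for which measuring $\sigma_x$ yields classical Fisher information equal to the QFI, $F = 1$, for $\varphi \in (0,\pi)$, so the maximum-likelihood estimator attains $1/(kF)$ for large k.

```python
import numpy as np

rng = np.random.default_rng(0)

phi_true = np.pi / 2          # assume prior knowledge already locates phi in (0, pi)
k = 1000                      # independent repetitions
trials = 2000                 # Monte Carlo runs used to estimate the MSE

# Probe (|0>+|1>)/sqrt(2) acquires phase phi; measuring sigma_x on the output
# gives outcome '+' with probability p = (1 + cos(phi))/2.
p = (1 + np.cos(phi_true)) / 2
plus_counts = rng.binomial(k, p, size=trials)
p_hat = plus_counts / k
# Maximum-likelihood inversion of the outcome statistics
phi_hat = np.arccos(np.clip(2 * p_hat - 1, -1, 1))

mse = np.mean((phi_hat - phi_true) ** 2)
print(mse, 1 / k)  # MSE approaches the C-R limit 1/(k F) = 1/k
```

Note how the inversion is only unambiguous because the parameter was assumed to lie in a known half-period, which is exactly the kind of prior knowledge discussed above.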

The Bayesian approach
An alternative analysis of the performance of quantum-enhanced measurement schemes that does not suffer from the above-mentioned deficiencies, and hence yields the practically achievable precision limits, is the Bayesian approach, where one explicitly takes into account the prior knowledge about the parameter value, represented by a probability distribution $p(\varphi)$ [14,15,32-34]. In this case, we define the average Bayesian error as

$\Delta^2\tilde\varphi = \int \mathrm{d}\varphi\, p(\varphi) \int \mathrm{d}x\, p(x|\varphi)\, \left[\tilde\varphi(x) - \varphi\right]^2.$

Here one averages the error (precision) for a particular value of the parameter with the prior probability over the whole range of possible values of $\varphi$. In the case of broad priors, one is therefore interested in finding strategies that work globally, for many different values of the parameter, rather than locally as in the approach based on the C-R bound. Finding the minimal $\Delta\tilde\varphi$ requires optimization over the input state, measurements and estimators, which in general is much more demanding than maximization of the QFI over input states. Yet, contrary to the situation in the QFI case, once the solution is found it yields an explicit estimation procedure that attains the minimal average Bayesian error.
Under certain regularity conditions, one can relate the Bayesian and C-R bound approaches through the so-called Bayesian C-R bound [35]:

$\Delta^2\tilde\varphi \geqslant \left(\int \mathrm{d}\varphi\, p(\varphi)\, F_\varphi + \mathcal{I}\right)^{-1}, \qquad \mathcal{I} = \int \mathrm{d}\varphi\, p(\varphi) \left[\frac{p'(\varphi)}{p(\varphi)}\right]^2. \quad (5)$

Provided the prior is smooth enough and vanishes on the boundary of the set of allowed values of $\varphi$, the prior-dependent term $\mathcal{I}$ is finite and in the asymptotic limit $N \to \infty$ becomes negligible compared with $F_\varphi$. Moreover, in unitary parameter estimation when the noise acts before the parameter encoding (or the two commute), i.e. when $\Lambda_\varphi = U_\varphi \circ \Lambda$, the QFI does not depend on the parameter value, $F_\varphi = F$, irrespective of the presence or absence of decoherence, and hence (5) takes the form

$\Delta^2\tilde\varphi \geqslant \frac{1}{F + \mathcal{I}} \xrightarrow{N\to\infty} \frac{1}{F},$

implying that the Bayesian error is asymptotically also bounded by the standard C-R bound. We now ask whether it is possible to achieve equality in the above bound and hence prove asymptotic saturability of the C-R bound. Intuitively, this should be the case, because for very narrow priors $p(\varphi)$ one should be close to the local regime, and the C-R bound and Bayesian approaches should give similar results. The same should hold in the asymptotic limit of a large number of probes fed into the setup, because then the amount of information gained from the experiments is much larger than that of any a priori knowledge available in advance, a result known in classical parameter estimation as the Bernstein-von Mises theorem [37].

Estimation in the presence of decoherence
Let us consider first the situation when the maximal QFI scales asymptotically at most linearly with N, which is the generic case for metrological models with uncorrelated noise [8,27,28]. Since the QFI is additive on product states, $F(\rho \otimes \sigma) = F(\rho) + F(\sigma)$, this implies that for a sufficiently large N, instead of taking a general entangled state of N particles $|\psi_N\rangle$, one could take a separable state of k copies ('groups') of an entangled state $|\psi_n\rangle$ with a smaller number of particles n = N/k and achieve almost the same QFI. More formally, let us expand the optimal asymptotic QFI in powers of N, taking into account the leading correction to the linear asymptotic scaling, which without loss of generality may be written as $F_N = \alpha N - \beta N^{1-\gamma}$ with $\beta, \gamma > 0$ (see [26,38]). The grouping procedure then changes the optimal QFI per particle by no more than $\epsilon$, which implies that for any $\epsilon > 0$ the size of the group n can be assumed to be finite in the asymptotic limit $N \to \infty$, while the number of groups k grows to infinity proportionally to N. Therefore the estimation problem in the asymptotic limit can be effectively viewed as a parameter estimation problem for a large number of independent and identical copies, $(\rho_\varphi^{(n)})^{\otimes k}$. In this case, however, under some regularity conditions for the Bayesian model [39,40], there is a Bayesian estimation strategy that is asymptotically efficient and saturates the C-R bound. For this purpose one can e.g. refer to the elegant quantum local asymptotic normality theorem [41,42], which states that in the asymptotic limit the estimation problem for uncorrelated copies may be equivalently viewed as an estimation problem for multimode quantum Gaussian states with the estimated parameter encoded in a displacement of the state. The optimal estimation strategy then amounts to a measurement of a particular quadrature operator, yielding a Gaussian probability distribution with the variance determined by the QFI.
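The grouping argument can be made concrete with a few lines of arithmetic. Taking the expansion $F_N = \alpha N - \beta N^{1-\gamma}$ (the coefficient values below are made-up illustration values, not from any model), a finite group size n keeps the per-particle QFI loss below any chosen $\epsilon$:

```python
# Grouping argument sketch: splitting N probes into k independent groups of
# finite size n = N/k costs less than eps in QFI per particle, provided
# beta * n^(-gamma) < eps. Coefficients are illustrative only.
alpha, beta, gamma = 2.0, 1.0, 0.5

def F(N):
    # assumed expansion of the optimal QFI: F_N = alpha*N - beta*N^(1-gamma)
    return alpha * N - beta * N ** (1 - gamma)

N, eps = 10**6, 0.01
n = int((beta / eps) ** (1 / gamma)) + 1   # finite group size, independent of N
k = N // n
print(F(N) / N)            # per-particle QFI of the optimal N-probe state
print(k * F(n) / (k * n))  # per-particle QFI of k groups of n: within eps of it
```

The key point is that n depends only on $\epsilon$, $\beta$ and $\gamma$, not on N, so the number of groups k grows proportionally to N, as stated above.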
This proves that the QFI-based bound (3) is indeed asymptotically saturable and allows us to rewrite it as an equality for the asymptotically achievable Bayesian cost, with the constant in the numerator unchanged:

$\Delta\tilde\varphi \xrightarrow{N\to\infty} \frac{1}{\sqrt{k\,\alpha N}}.$
We depict in figure 2, as an example, the precision limits for phase estimation for N two-level systems under two different decoherence models: (i) losses and (ii) uncorrelated dephasing, where it is clearly seen that for large N the respective Bayesian cost and bound given by the QFI do indeed converge. The effective numerical approach that allows one to obtain exact results for a large number of particles and the details of the models are discussed in the appendices.
We have mentioned before that linear scaling of the QFI, which is a crucial assumption in the above equivalence argument, is generic in models with uncorrelated noise. However, in problems where the decoherence strength may be tuned with the increase of N, as e.g. in frequency estimation schemes where one is allowed to optimize over the probe interrogation time, the situation may be different. This is the case in e.g. perpendicular dephasing [43] and non-Markovian evolution [44] models, where in the limit of increasing number of probes, a choice of properly decreasing interrogation times may effectively reduce the impact of decoherence and allow the QFI to scale better than linearly. In such cases a dedicated analysis, which is beyond the scope of the present paper, is required in order to relate the Bayesian and the C-R bound approaches.

Decoherence-free estimation
Let us now consider the decoherence-free case, $\Lambda_\varphi = U_\varphi$. Since the QFI scales quadratically with N, we can no longer apply the previous argument about the asymptotic 'group' structure of the optimal input state. Interestingly, for phase estimation, $U_\varphi = \mathrm{e}^{-\mathrm{i}\varphi\sigma_z/2}$, with the flat prior $p(\varphi) = 1/2\pi$, it is possible to derive the optimal Bayesian solution analytically, utilizing the concept of covariant measurements (see appendix A), which asymptotically yields $\Delta\tilde\varphi \xrightarrow{N\to\infty} \pi/N$ [14]. This asymptotic result is larger by a factor of $\pi$ than the value of the respective C-R bound; see (2). One might argue that this discrepancy arises due to the assumption of the flat prior in the Bayesian approach and that by narrowing the prior one might eventually achieve the exact $1/N$ scaling. We show below that this intuition is wrong, by considering arbitrarily narrow Gaussian priors and proving that the asymptotic scaling remains $\pi/N$, which demonstrates that the C-R bound is not achievable in this case.
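The flat-prior result of [14] can be reproduced numerically. With the sine cost and covariant measurements discussed in appendix A, the minimal cost for real nonnegative input amplitudes reduces to maximizing $\sum_n c_n c_{n+1}$, i.e. to the top eigenvalue of a tridiagonal matrix; a sketch under that assumption (`optimal_bayes_cost` is our naming):

```python
import numpy as np

def optimal_bayes_cost(N):
    """Minimal average sine cost for flat-prior phase estimation with N
    two-level probes and a covariant measurement. The problem reduces to
    maximizing sum_n c_n c_{n+1} over unit vectors of input amplitudes,
    i.e. to the largest eigenvalue of a tridiagonal matrix with 1/2 on the
    off-diagonals, which equals cos(pi/(N+2))."""
    A = np.diag(np.full(N, 0.5), 1) + np.diag(np.full(N, 0.5), -1)
    lam_max = np.linalg.eigvalsh(A)[-1]   # eigvalsh returns ascending order
    return 2 * (1 - lam_max)

for N in [10, 100, 1000]:
    cost = optimal_bayes_cost(N)
    print(N, np.sqrt(cost) * N)   # N * delta_phi -> pi, not 1
```

The product $N\,\Delta\tilde\varphi$ converges to $\pi \approx 3.14$, not to 1, reproducing the factor-of-$\pi$ gap with respect to the C-R bound.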
Consider a Gaussian prior $p(\varphi) = \frac{1}{\sqrt{2\pi}\Delta_0}\mathrm{e}^{-\varphi^2/2\Delta_0^2}$, narrow enough that we can neglect its tails outside the interval $(-\pi, \pi)$. For unitary parameter estimation with a Gaussian prior and quadratic cost, there is a close relation between the Bayesian cost and the QFI of the prior-averaged state $\bar\rho = \int \mathrm{d}\varphi\, p(\varphi)\, \rho_\varphi$ [23]:

$\Delta^2\tilde\varphi = \Delta_0^2 - \Delta_0^4\, F(\bar\rho). \quad (9)$

Looking for the minimal $\Delta\tilde\varphi$ is therefore equivalent to determining the input state for which $F(\bar\rho)$ is maximal. Since $\bar\rho$ may also be formally viewed as the input probe state subjected to collective dephasing, we can utilize the asymptotic formula for the optimal QFI for phase interferometry under collective dephasing derived in [26], which reads $F = \frac{1}{\Gamma + \pi^2/N^2}$, where the dephasing strength parameter $\Gamma$ needs to be replaced with the prior variance $\Delta_0^2$. Substituting this result into (9), we get

$\Delta^2\tilde\varphi \xrightarrow{N\to\infty} \frac{\pi^2}{N^2},$

irrespective of the width of the prior distribution. The assumption of Gaussianity of the prior was needed for the formal derivation of the above result, but we conjecture that it holds for general, sufficiently regular prior distributions. This is intuitively plausible, since we obtain the same result for the flat prior and for all Gaussian priors, including arbitrarily narrow ones; it is therefore natural to expect all intermediate cases to manifest the same behavior. This means that in the decoherence-free case the correct limit on the phase estimation error is given by $\pi/N$, and not $1/N$. Numerical results confirming this reasoning, obtained using the techniques of [34], are illustrated in figure 3.
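The prior-width independence of this limit follows directly from combining relation (9) with the collective-dephasing QFI formula quoted in this section; a few lines suffice to see the cancellation (the explicit formulas below are those of this section, with illustrative prior widths):

```python
import numpy as np

def bayes_cost(N, d0):
    # cost = d0^2 - d0^4 * F(rho_bar), with F(rho_bar) = 1/(d0^2 + pi^2/N^2)
    # (Gaussian prior of width d0 playing the role of the dephasing strength)
    F = 1 / (d0**2 + np.pi**2 / N**2)
    return d0**2 - d0**4 * F

N = 10**5
for d0 in [1.0, 0.1, 0.001]:
    print(d0, N * np.sqrt(bayes_cost(N, d0)))   # -> pi for every prior width
```

Algebraically, $\Delta^2\tilde\varphi = \Delta_0^2\,\frac{\pi^2/N^2}{\Delta_0^2 + \pi^2/N^2}$, which tends to $\pi^2/N^2$ as soon as $N \gg \pi/\Delta_0$, however narrow the prior.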
Moreover, on the basis of numerical calculations, we conjecture that the $\pi$ factor discrepancy between the C-R bound and the asymptotically saturable precision derived for the phase estimation problem holds in general for any decoherence-free unitary parameter estimation $U_\varphi = \mathrm{e}^{-\mathrm{i}H\varphi}$, and the correct form of the optimal asymptotically achievable uncertainty reads

$\Delta\tilde\varphi \xrightarrow{N\to\infty} \frac{\pi}{\lambda_+ - \lambda_-}, \quad (11)$

irrespective of the prior; for the N-probe phase estimation generator, $\lambda_+ - \lambda_- = N$, this reproduces the $\pi/N$ limit. Intuitively, by performing preliminary measurements on a negligible portion of the particles we can narrow the prior distribution to a width of the order of $2\pi/(\lambda_+ - \lambda_-)$, so we will not suffer estimation ambiguity due to using eigenstates with just the extremal eigenvalues. After this preliminary procedure, the optimal strategy is isomorphic to the Bayesian phase estimation strategy up to a rescaling of the phase evolution speed by $\lambda_+ - \lambda_-$. Formula (11) should thus be regarded as a refinement of the previously derived C-R-based bounds for unitary parameter estimation [5].

Estimation in the presence of global dephasing
In the above discussion we have considered models where channels Λ φ acting on different particles were uncorrelated, as in figure 1b. There are situations, however, where the setup cannot be decomposed into separate channels acting on each of the probes. This may be caused by the presence of the memory or some long distance correlations between channels. In such cases, making any general statement about the asymptotic value of the precision is nontrivial. In fact, as we will show, there are some cases in which one cannot define the asymptotic value of the precision without paying attention to the form of the a priori probability distribution, and hence one cannot make a meaningful connection between Bayesian approaches and ones based on C-R bounds.
As an illustrative example, consider a phase estimation problem in the presence of collective dephasing, such that the output state reads $\rho_\varphi^N = \int \mathrm{d}\phi\, g_\Gamma(\phi)\, U_{\varphi+\phi}\,|\psi_N\rangle\langle\psi_N|\,U_{\varphi+\phi}^\dagger$, where $g_\Gamma$ is a Gaussian distribution of variance $\Gamma$. For a Gaussian prior with variance $\Delta_0^2$, the averaged state $\bar\rho$ is then the input state subjected to collective dephasing of strength $\Gamma + \Delta_0^2$, as the convolution of two Gaussian distributions is again a Gaussian, with a variance that is the sum of those of the original two distributions. Plugging this into (9), we find the formula for the optimal Bayesian cost:

$\Delta^2\tilde\varphi = \Delta_0^2 - \frac{\Delta_0^4}{\Gamma + \Delta_0^2 + \pi^2/N^2} \xrightarrow{N\to\infty} \Delta_0^2\,\frac{\Gamma}{\Gamma + \Delta_0^2},$

showing a clear dependence on the prior knowledge, except in the case where $\Gamma \ll \Delta_0^2$, in which case the C-R-based and Bayesian approaches predict the same asymptotic value of the precision, equal to $\sqrt{\Gamma}$. This is due to the fact that the information on the estimated parameter does not increase indefinitely with N, and hence in the asymptotic limit the estimator will not approach the true value of the parameter. It should then be no surprise that for a peaked prior distribution, $\Delta_0^2 \ll \Gamma$, the prior will dominate the resulting optimal precision, and hence no prior-independent asymptotic formula for the precision exists; see figure 4. The same behavior will also be observed if collective dephasing is added on top of uncorrelated decoherence processes.
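The saturation of the extractable information with N can be seen directly from the cost formula of this section; a short sketch (the values of $\Gamma$ and $\Delta_0$ are arbitrary illustration choices):

```python
import numpy as np

def cost(N, d0, Gamma):
    # Optimal Bayesian cost under collective dephasing of strength Gamma
    # with a Gaussian prior of variance d0^2 (formula of this section).
    return d0**2 - d0**4 / (Gamma + d0**2 + np.pi**2 / N**2)

d0, Gamma = 0.05, 1e-4
plateau = d0**2 * Gamma / (Gamma + d0**2)   # prior-dependent asymptote
for N in [10, 100, 10**6]:
    print(N, cost(N, d0, Gamma))            # saturates at the plateau
print(plateau, Gamma)                       # close to Gamma only when Gamma << d0^2
```

Unlike in the uncorrelated-noise case, the cost does not decay to zero with N, and its plateau carries an explicit $\Delta_0$ dependence.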
The above reasoning clearly shows that the bound based on the QFI is of limited use in a 'single-shot' analysis of setups under collective decoherence, and a more suitable measure of precision is the Bayesian cost. It should be noted, however, that in practice one would avoid performing single-shot experiments employing states with a large number of particles N subject to collective decoherence. Instead, a preferable strategy would be to divide N into k groups with a smaller number of particles n = N/k and send them separately, making the collective decoherence act on each of the groups individually and thus restoring the precision scaling $\Delta\tilde\varphi \sim \Delta\tilde\varphi^{(n)}/\sqrt{k}$, where $\Delta\tilde\varphi^{(n)}$ is the precision obtained with a single group [26]. In this case the arguments presented in section 4 for problems with uncorrelated noise apply, and the Bayesian prior loses its significance in the limit of many experiment repetitions, $k \to \infty$.

States with an indefinite number of particles
States with an indefinite number of particles, such as coherent or squeezed states, are natural candidates for use in optical implementations of quantum metrological schemes, as they are relatively easy to prepare with state-of-the-art technology. In particular, interfering squeezed and coherent states is at the moment the only feasible technique allowing us to benefit from the quantum features of light in devices operating in the large light intensity regime, such as gravitational wave detectors [45]. Interestingly, such protocols, despite their conceptual simplicity, offer practically optimal performance from the point of view of quantum-enhancement effects [46]. However, the mathematical analysis of the ultimate performance of protocols utilizing states with an indefinite number of particles is more involved than that for states with a definite particle number. Considering such states, one first has to decide whether quantum coherences between sectors of Hilbert space representing different total photon numbers are observable. We take here the position that the observability of such coherences necessarily requires the presence of an additional phase reference beam, which should therefore be counted among the resources for any interferometric experiment [47][48][49]. If the reference beam is not explicitly included in the resources, it should be regarded as absent, and a state with an average photon number $\langle N\rangle$ should be effectively treated as an incoherent mixture (a direct sum in this case) of states with definite photon numbers:

$\rho_{\langle N\rangle} = \bigoplus_{N} p_N\, \rho_N, \qquad \sum_N p_N N = \langle N\rangle.$

Note that we may also consider such states in the case of protocols involving massive particles, for which coherent superposition of different particle number states is forbidden by superselection rules, as they may represent a probabilistic scheme with different definite-particle-number states prepared with different probabilities.
Importantly, most of the discussions that arise around the utility of states with indefinite particle number and, in particular, the feasibility of sub-Heisenberg strategies can be restricted to this class of states, as the essence of the problem lies in the possibility of mixing, not superposing, different particle number states.
The simplest example of reasoning based on the use of the QFI that can lead to claims of sub-Heisenberg precision in phase estimation involves a state which is a mixture of the vacuum state and the N-photon N00N state. As the terms in the mixture occupy orthogonal subspaces, the QFI of such a mixture is a weighted sum of the QFIs of its constituents, $F = (1-p)\cdot 0 + p N^2$, where p is the probability of sending the N00N state. The average photon number $\langle N\rangle = pN$ is treated as a fixed resource, and we can rewrite the QFI in the form $F = \langle N\rangle N$. Hence for fixed $\langle N\rangle$ we may increase N indefinitely, making the QFI arbitrarily large, which translates to a C-R bound with arbitrarily small estimation uncertainty. In practice, however, such strategies require an amount of prior knowledge that also increases with N, rendering them impractical [2,16]. Building on the techniques presented in this paper, we show below an alternative argument indicating that states with indefinite particle numbers indeed offer no asymptotic benefits over definite-particle-number strategies.
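The arithmetic behind the apparent sub-Heisenberg claim takes only a few lines (a sketch of the argument just given, with an arbitrary fixed $\langle N\rangle$):

```python
def qfi_mixture(p, N):
    # Vacuum (weight 1-p, QFI 0) mixed with an N-photon N00N state
    # (weight p, QFI N^2); orthogonal sectors, so QFIs add with weights.
    return (1 - p) * 0 + p * N**2

N_mean = 10                       # fixed mean photon number <N> = p*N
for N in [10, 100, 10000]:
    p = N_mean / N
    print(N, qfi_mixture(p, N))   # = <N>*N: unbounded at fixed <N>
```

The QFI grows without limit at fixed mean photon number, which is precisely why it cannot be taken at face value here; the Bayesian treatment below shows no corresponding gain in achievable precision.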
For optimal Bayesian phase estimation with a Gaussian prior, we may again utilize formula (9), as in its derivation no particle number definiteness was ever assumed. The optimal Bayesian cost for the $\rho_{\langle N\rangle}$ state thus reads $\Delta^2\tilde\varphi = \Delta_0^2 - \Delta_0^4\, F(\bar\rho_{\langle N\rangle})$. Since $\bar\rho_{\langle N\rangle}$, just like $\rho_{\langle N\rangle}$, is a mixture of states occupying orthogonal subspaces, we can write $F(\bar\rho_{\langle N\rangle}) = \sum_N p_N F(\bar\rho_N)$. As we will be interested in the large $\langle N\rangle$ regime, let us first assume that all of the relevant terms in the mixture correspond to large N, so that we may use the large-N approximation to the optimal QFI of the N-particle averaged state, $F(\bar\rho_N) = \frac{1}{\Delta_0^2 + \pi^2/N^2}$. This yields

$\Delta^2\tilde\varphi = \Delta_0^2 - \Delta_0^4 \sum_N \frac{p_N}{\Delta_0^2 + \pi^2/N^2} \geqslant \Delta_0^2 - \frac{\Delta_0^4}{\Delta_0^2 + \pi^2/\langle N\rangle^2}, \quad (16)$

where in the last inequality we have made use of the concavity of the function $x \mapsto \frac{1}{1 + 1/x^2}$ in the relevant regime. This clearly demonstrates that there is no benefit in using mixtures, as the cost will be higher than the cost corresponding to a definite-particle-number state with $N = \langle N\rangle$. The above reasoning was based on the simplifying assumption that all relevant constituents of the mixture correspond to large N. This was clearly not the case in the elementary example presented before, involving a mixture with the vacuum state. Assume then that there is a finite M such that states with particle numbers $N < M$ carry a finite total weight $\epsilon > 0$. Then the cost satisfies $\Delta^2\tilde\varphi_{\langle N\rangle} \geqslant \epsilon\, \Delta^2\tilde\varphi_M$, as the optimal cost for each of the $N < M$ terms cannot be smaller than that for an optimal M-particle state, and including the states with $N > M$ only increases the cost. Therefore, on increasing $\langle N\rangle$ and heading for better precision, we must necessarily decrease $\epsilon$ or increase M in order for the cost not to be bounded from below by a finite uncertainty. This implies that the only way to obtain an estimation uncertainty that asymptotically goes to zero is to deal only with mixtures in which effectively all weight is carried by states with increasing N, and asymptotically no finite weight may be kept below any finite M. This supports the reasoning leading to (16) and excludes the possibility of better-than-Heisenberg scaling of the precision within the Bayesian approach.
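Inequality (16) can be checked numerically for an arbitrary weight distribution, assuming, as in the text, that all weight sits at large N (the particular weights and prior width below are illustrative):

```python
import numpy as np

def F_bar(N, d0):
    # large-N optimal QFI of the averaged N-particle state (formula of this section)
    return 1 / (d0**2 + np.pi**2 / N**2)

d0 = 0.01
Ns = np.array([100, 1000, 10000])   # particle numbers in the mixture
ps = np.array([0.5, 0.3, 0.2])      # their weights
N_mean = ps @ Ns

cost_mix = d0**2 - d0**4 * (ps @ F_bar(Ns, d0))   # mixture cost
cost_def = d0**2 - d0**4 * F_bar(N_mean, d0)      # definite N = <N> cost
print(cost_mix, cost_def)   # mixture cost is the larger of the two
```

The definite-particle-number state with $N = \langle N\rangle$ outperforms the mixture, in agreement with (16).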

Conclusions
In summary, we have proven that in the presence of uncorrelated decoherence the asymptotic limits on the precision of quantum metrological schemes may be credibly calculated using the approach based on the C-R bound, whereas in decoherence-free unitary parameter estimation a $\pi$ factor correction needs to be included, irrespective of the extent of the prior knowledge. These observations provide firm grounds for the use of the QFI as a sensible figure of merit in analyzing the performance of quantum-enhanced metrological protocols based on definite-particle-number states. In the case of strategies employing states with an indefinite number of particles, the claims remain unchanged in the presence of uncorrelated noise. In the decoherence-free case, however, the Bayesian analysis shows that proposals of sub-Heisenberg estimation strategies motivated by C-R bounds are not of much practical use, and that the actual Bayesian cost cannot scale better than $\pi/\langle N\rangle$, where $\langle N\rangle$ is the average number of particles.
where $c(\tilde\varphi(x),\varphi)$ is called a cost function. Here we considered cost functions of two types: the quadratic cost $c(\tilde\varphi,\varphi) = (\tilde\varphi-\varphi)^2$ and the sine cost $c_s(\tilde\varphi,\varphi) = 4\sin^2\!\left(\frac{\tilde\varphi-\varphi}{2}\right)$, the latter naturally emerging for the problem of phase estimation due to the periodicity of the parameter (note that $c_s(\tilde\varphi,\varphi) \approx c(\tilde\varphi,\varphi)$ whenever $\tilde\varphi \approx \varphi$, so the asymptotic Bayesian cost for the sine cost function should be equal to that calculated with $c(\tilde\varphi,\varphi)$). For the problems that we have considered, dealing with the quadratic cost is hard and in general possible only numerically. The sine cost function, on the other hand, greatly simplifies the problem of phase estimation with flat a priori knowledge, since one can then restrict the measurements to the class of covariant POVMs [10,47,50], parameterized by the estimated value and given by
$$M_{\tilde\varphi} = U_{\tilde\varphi}\, \Xi\, U_{\tilde\varphi}^\dagger,$$
where $\Xi$ is a positive semidefinite operator called the seed operator. Using the above formula, the average cost simplifies to a single expression involving only the seed operator and the input state, with the help of which one may easily derive the average cost for decoherence-free estimation [14], losses [15], global dephasing [49] or local dephasing (see appendix C). In general, for prior probability distributions of other types and for more general unitary transformations, one has to use iterative algorithms similar to the one described above, which are described in detail in [34] for the sine cost function and in [23] for the quadratic cost function.
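The simplification brought by the covariant-POVM approach can be illustrated numerically. The sketch below assumes the standard result that for a definite-$N$ state $\sum_n c_n|n\rangle$ with a flat prior and the sine cost, the optimal average cost reduces to $2 - 2\sum_n c_n c_{n+1}$, so minimizing it is an eigenproblem for a tridiagonal matrix with unit off-diagonals; the function name is illustrative.

```python
import numpy as np

# Covariant-POVM phase estimation, flat prior, sine cost: the optimal
# input state is the top eigenvector of a tridiagonal matrix, and the
# minimal cost is 2 - 2*cos(pi/(N+2)) ~ pi^2/(N+2)^2 for large N.

def optimal_sine_cost(N):
    # (N+1)x(N+1) tridiagonal matrix with ones on the off-diagonals.
    A = np.diag(np.ones(N), 1) + np.diag(np.ones(N), -1)
    lam_max = np.linalg.eigvalsh(A)[-1]   # equals 2*cos(pi/(N+2))
    return 2.0 - lam_max                  # minimal average sine cost

N = 100
cost = optimal_sine_cost(N)
exact = 2.0 * (1.0 - np.cos(np.pi / (N + 2)))
assert abs(cost - exact) < 1e-10
print(cost, (np.pi / (N + 2)) ** 2)
```

The numerical minimum agrees with the analytic value and approaches $\pi^2/(N+2)^2$, the $\pi/N$-type scaling discussed in the main text.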
Appendix B. Derivation of the density matrix in the presence of dephasing
The iterative algorithms described above reduce the optimization problem to repeatedly solving a matrix eigenproblem. Still, in order to fully utilize them, one needs to efficiently describe the output density matrices. This is particularly challenging in the case of local dephasing, where the output density matrix lies outside the fully symmetric subspace and its dimension in principle scales exponentially with the number of probes. Here we derive a way to efficiently describe the output density matrix for interferometric models under dephasing or loss, for $N$ two-level input probes prepared initially in a symmetric state, which can in general be written in the bosonic mode-occupation notation as $|\psi\rangle = \sum_{n=0}^{N} c_n |n, N-n\rangle$. Local dephasing can be described using two single-particle Kraus operators of the form
$$K_0 = \sqrt{\tfrac{1+\eta}{2}}\,\mathbb{1}, \qquad K_1 = \sqrt{\tfrac{1-\eta}{2}}\,\sigma_z,$$
where $\eta$ denotes the strength of the decoherence and $\sigma_z$ is the Pauli $z$ operator. The $N$-particle output density matrix is equal to
$$\rho = \sum_{k=0}^{N} \sum_{\pi_k^N} \pi_k^N\!\left(K_1^{\otimes k} \otimes K_0^{\otimes (N-k)}\right) |\psi\rangle\langle\psi|\; \pi_k^N\!\left(K_1^{\otimes k} \otimes K_0^{\otimes (N-k)}\right)^\dagger,$$
where $\pi_k^N$ represents the different permutations of $k$ and $N-k$ copies of the $K_1$ and $K_0$ operators, respectively. To simplify the problem, we can treat our two-level probes as spin-$\tfrac{1}{2}$ particles and set up a notation in which the state space is decomposed into subspaces of total angular momentum $j$, with $\alpha_j$ denoting the multiplicity of the subspace with total angular momentum $j$; the symmetric input state itself has $j = N/2$. Eventually, we may express the output density matrix in a block form over these subspaces, whose description scales only quadratically with the number of probes, as against the exponential scaling of the 'brute force' description. A similar formula, utilizing spherical tensors, was also found in [25].
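The action of the Kraus pair above can be sanity-checked on a single qubit. The minimal sketch below uses the parameterization of $K_0$, $K_1$ given in the text; the function name is illustrative.

```python
import numpy as np

# Single-qubit local dephasing channel rho -> K0 rho K0^+ + K1 rho K1^+,
# with K0 = sqrt((1+eta)/2) * I and K1 = sqrt((1-eta)/2) * sigma_z.

def dephase(rho, eta):
    K0 = np.sqrt((1 + eta) / 2) * np.eye(2)
    K1 = np.sqrt((1 - eta) / 2) * np.diag([1.0, -1.0])  # sigma_z
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

eta = 0.7
plus = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+|
out = dephase(plus, eta)
# Populations are untouched; coherences are multiplied by eta.
assert np.allclose(np.diag(out), [0.5, 0.5])
assert np.isclose(out[0, 1], eta * 0.5)
print(out)
```

Applied locally to $N$ probes, the channel therefore suppresses a coherence between product-basis states differing on $d$ sites by a factor $\eta^d$, which is why the output leaves the fully symmetric subspace.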
The case of losses is simpler. We model the loss of probes by inserting two fictitious beam splitters, with transmissivities $\eta$, into the two arms of the interferometer, with vacuum states fed into the respective second input ports. By a standard beam-splitter transformation and tracing out the environment, one may easily derive the output density matrix as [21]
$$\rho = \sum_{l_0=0}^{N} \sum_{l_1=0}^{N-l_0} p_{l_0 l_1}\, \rho_{l_0 l_1},$$
with $p_{l_0 l_1}$ a normalization factor and $l_0, l_1$ representing the numbers of photons lost in the respective arms. Such a density matrix has dimension $(N+1)(N+2)/2$, which is again quadratic in the number of probes, and thus it is feasible to use it in iterative procedures.
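The dimension count quoted above is easy to verify: the lossy output state is supported on all two-mode sectors with $N' = N - l_0 - l_1$ remaining photons, and the loss sectors $(l_0, l_1)$ themselves are equinumerous with that support. A quick sketch (function names illustrative):

```python
# Check that summing the (N'+1)-dimensional two-mode photon-number
# sectors for N' = 0..N reproduces the quoted dimension (N+1)(N+2)/2,
# and that the loss sectors (l0, l1) with l0 + l1 <= N match it too.

def support_dimension(N):
    return sum(n_remaining + 1 for n_remaining in range(N + 1))

def loss_sectors(N):
    return [(l0, l1) for l0 in range(N + 1) for l1 in range(N + 1 - l0)]

N = 20
assert support_dimension(N) == (N + 1) * (N + 2) // 2
assert len(loss_sectors(N)) == (N + 1) * (N + 2) // 2
print(support_dimension(N))  # 231
```

The quadratic growth of this support is what keeps the lossy density matrix tractable inside the iterative optimization procedures.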