First-principles construction of symmetry-informed quantum metrologies

Combining quantum and Bayesian principles leads to optimality in metrology, but the optimisation equations involved are often hard to solve. This work mitigates this problem with a novel class of measurement strategies for quantities isomorphic to location parameters, which are shown to admit a closed-form optimisation. The resulting framework admits any parameter range, prior information, or state, and the associated estimators apply to finite samples. As an example, the metrology of relative weights is formulated from first principles and shown to require hyperbolic errors. The primary advantage of this approach lies in its simplifying power: it reduces the search for good strategies to identifying which symmetry leaves a state of maximum ignorance invariant. This will facilitate the application of quantum metrology to fundamental physics, where symmetries play a key role.

Two examples are quantum thermometry [14][15][16][17] and rate estimation in dissipative processes [18,19]. Since temperature and rate set energy and time scales [20,21], respectively, scale invariance becomes essential for consistent estimation, and this reveals a metrological framework for scales that is independent of phase estimation [22]. This strongly suggests that different parameter types require different metrologies.
Far from a mere formality, this idea is proving crucial in the presence of finite information [23][24][25][26]. The application of scale estimation to a thermometry experiment on 41K atoms confined in an optical tweezer at microkelvin temperatures [27], for instance, has demonstrated how individual measurements can be made substantially more informative by enforcing the correct invariance via Bayesian principles [28,29].
We have circular invariance for phases [30][31][32][33], translation invariance for locations [34], and scale invariance for scales [22]. But not every parameter falls under such categories. This is the case of, e.g., relative weights, that is, any η ∈ (0, 1) quantifying the relative importance of any two objects as η and 1 − η. Examples include the probability of success [20], blend parameters in mixed states [35], photon loss in an interferometer [36], and the Schmidt parameter characterising the class of two-qubit pure states and their entanglement [37].
A simple, yet effective way of discovering new metrologies is to exploit the class of parameters that can be mapped into locations. If Θ is such a parameter, and its value is completely unknown, there will exist a function f: θ → f(θ) such that our initial state of knowledge is invariant under transformations

f(θ′) = f(θ) + c,    (1)

for arbitrary c, where θ and θ′ denote different but equally valid hypotheses about Θ. Such hypotheses are related by a transformation θ′ = g(θ) that is determined by the physics at hand, and we say that f maps Θ into a location because Eq. (1) is a translation of f(θ) [28]. Good measurement strategies can then be found by optimising the family of quadratic errors on average, where x and y = (y_1, y_2, . . .) denote a measurement outcome and the control parameters, respectively, and the map θ_y: x → θ_y(x) processes x into an estimate for Θ.
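As an illustrative numerical check (not part of the original derivation; the grid and constants below are arbitrary), the f maps later collected in Table I can be verified to turn each parameter's natural transformation into a translation of the form of Eq. (1):

```python
import numpy as np

# Check that the f maps of Table I produce f(theta') = f(theta) + c.
z = np.linspace(0.1, 0.9, 9)
z0 = 1.0                                        # reference scale (arbitrary)

f_scale = lambda t: np.log(t / z0)              # f map for scales
f_weight = lambda t: 2 * np.arctanh(2 * t - 1)  # f map for weights

k, gamma = 2.5, 0.3                             # arbitrary transformation constants

# scales: theta' = k * theta gives a constant shift c = log(k)
scale_shift = f_scale(k * z) - f_scale(z)

# weights: the odds rescaling gives a constant shift c = log(gamma)
z_prime = gamma * z / (gamma * z + 1 - z)
weight_shift = f_weight(z_prime) - f_weight(z)
```

Locations are trivially covered by f(z) = z, for which θ′ = θ + c is already a translation.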
To demonstrate the power of this framework, a metrology of weights is derived from first principles.The symmetry (1) is shown to arise in this case from Möbius transformations, leading to a hyperbolic error and estimators based on the logistic function.Their application to the estimation of a blend parameter in a mixed state reveals a maximum precision gain of 75% relative to the prior uncertainty.This illustrates the key advantage of symmetry-informed estimation: if Θ is locationisomorphic, finding the best estimator and POM amounts to identifying the symmetry (1) and performing a single calculation of the optimal strategy using the rules reported here.
Elements of quantum metrology.-We wish to estimate Θ. A finite hypothesis range θ ∈ [θ_min, θ_max] is often available in practice, and other kinds of prior knowledge can be accounted for using principles such as maximum entropy [28,63]. If, on the other hand, we start from minimal assumptions, including the type of parameter Θ is (a scale, a weight, etc.) and its general support, then we are maximally ignorant about its value [20]. A prior probability p(θ) is used to encode any available (or the absence of) initial information.
The hypothesis θ is next encoded in a state ρ_y(θ), which is often characterised by some control parameters y. Examples include the preparation and readout times in magnetic field sensing [64], and the expansion time in release-recapture thermometry [27]. A POM M_y(x) is performed on this state, and the outcome x is used to update the information in p(θ). This procedure provides the desired estimate θ_y ± ∆θ_y, where ∆θ_y denotes an outcome-dependent error.
A central problem in this context is finding estimators and POMs leading to the least error. The next section provides an exact, analytical solution to this for quadratic errors [Eq. (2)].
Optimal strategy for quadratic errors.-We start by integrating Eq. (2) over θ and x, weighted by their probabilities [Eq. (3)]. We average over the hypothesis θ because Θ is unknown; this makes the error globally valid, i.e., for any parameter range. Similarly, we average over the outcome x because the search for optimal POMs takes place prior to recording a specific measurement outcome. Eq. (3) is a mean quadratic error.
To find the optimal strategy minimising this error, it is useful to rewrite it as in Eq. (5), with the quantities A_{y,f,l} defined in Eq. (6). By virtue of Jensen's inequality, A_{y,f,2} − A_{y,f,1}^2 ≥ 0, Eq. (5) is lower bounded as in Eq. (7). But projective measurements, i.e., those satisfying M_y(x)M_y(x′) → δ(x − x′)M_y(x′), saturate Jensen's inequality. Therefore, we can assume equality in Eq. (7) and restrict the search to projective strategies without loss of optimality [45].
Using variational calculus, and following the formally analogous derivations in Refs. [22,34], such an equality is found to achieve its minimum at Eq. (8), where S_{y,f} solves the Lyapunov equation (9). Crucially, S_{y,f} contains all the information about the optimal strategy, as follows. Given the eigendecomposition (10), where P_{y,f}(s)P_{y,f}(s′) → δ(s − s′)P_{y,f}(s′), and recalling the definition in Eq. (6c), Eq. (8) implies Eq. (11). The optimal estimator is thus found by transforming the spectrum of S_{y,f} via the inverse f map [Eq. (11a)], while the optimal measurement consists in projecting onto the eigenspace of S_{y,f} [Eq. (11b)]. Inserting Eq. (8) into Eq. (7) further renders the associated minimum error

ε_{y,f,min} = ε_{p,f} − G_{y,f},    (12)

as the difference between the initial uncertainty ε_{p,f}, given by the prior variance of f, and the average precision gain G_{y,f} [Eq. (13)]. Eq. (12) is useful, in addition, to assess the relative performance of suboptimal, but perhaps more practical, strategies via the trivial uncertainty relation ε_{y,f,MQE} ≥ ε_{y,f,min}. Eqs. (9), (11), and (12) are the main result of this work. They generalise Personick's framework [34] (as well as scale estimation [22]) and provide the optimal quantum strategy for any location-isomorphic parameter. This is next illustrated for weight parameters.
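The recipe in Eqs. (9)-(11) can be sketched numerically. The following minimal example, with a hypothetical pure-state qubit encoding and a uniform prior (all model choices here are illustrative, not taken from the text), builds the first two moments of the state, solves the Lyapunov equation with a standard Sylvester solver, and reads off the optimal projectors and estimates from the eigendecomposition of S_{y,f}:

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Hypothetical qubit encoding of a location parameter theta; uniform prior.
thetas = np.linspace(-1.0, 1.0, 401)
w = np.ones_like(thetas)
w /= w.sum()                                  # discrete uniform prior

def rho(theta):
    # illustrative pure-state encoding |psi> = cos(theta/2)|0> + sin(theta/2)|1>
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return np.outer(psi, psi)

f = lambda t: t                               # theta is already a location

# moments: rho_l = sum_theta p(theta) f(theta)^l rho(theta), for l = 0, 1
rho0 = sum(wi * rho(t) for wi, t in zip(w, thetas))
rho1 = sum(wi * f(t) * rho(t) for wi, t in zip(w, thetas))

# Lyapunov equation: S rho0 + rho0 S = 2 rho1
S = solve_sylvester(rho0, rho0, 2 * rho1)

# optimal POM: projectors onto eigenvectors of S; estimates: f^{-1}(spectrum)
s_vals, s_vecs = np.linalg.eigh(S)
estimates = s_vals                            # inverse of the identity f map
```

For a non-trivial f map, the last line would instead apply f⁻¹ to the eigenvalues, as done below for weights.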
Weight estimation.-Consider a set with two generic elements, e_0 and e_1, carrying weights η and 1 − η, respectively. Suppose η is unknown. To construct a quantum metrology for η, we first need a notion of maximum ignorance.
Let θ ∈ (0, 1) be a hypothesis about η. If we ask how likely it is that one would choose e_0 over e_1, our probability for this is p(e_0) = θ. If a new piece of information I is provided (I denotes a proposition), p(e_0) can be updated to p(e_0|I) via Bayes's theorem. But p(e_0|I) = θ′ can also be used as a hypothesis for η. This induces a Möbius transformation between hypotheses,

θ′ = γθ/(γθ + 1 − θ),    (14)

with γ = p(I|e_0)/p(I|e_1). By rewriting it as θ′/(1 − θ′) = γθ/(1 − θ), we see that it amounts to rescaling the 'odds'. But this rescaling does not inform the value of η. We then say that our initial state of knowledge is invariant under the odds transformations (14). This motivates the formal constraint p(θ)dθ = p(θ′)dθ′ on the prior probability, which renders a functional equation for p(θ). Its solution, p(θ) ∝ 1/[θ(1 − θ)], is sometimes referred to as Haldane's prior [65]. This derivation was suggested by Jaynes [20] for a probability of success, and it has here been extended to any weight parameter.
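The invariance that singles out Haldane's prior can be verified numerically: p(θ)dθ = p(θ′)dθ′ holds pointwise for p(θ) ∝ 1/[θ(1 − θ)] under the odds rescaling (14). A minimal sketch, with an arbitrary grid and an arbitrary value of γ:

```python
import numpy as np

# Check p(theta) = p(theta') |dtheta'/dtheta| for Haldane's prior.
p = lambda t: 1.0 / (t * (1.0 - t))           # unnormalised Haldane density

def odds_rescaling(t, gamma):
    # Mobius transformation induced by Bayesian updating, Eq. (14)
    return gamma * t / (gamma * t + 1.0 - t)

gamma = 3.7
thetas = np.linspace(0.05, 0.95, 19)

# Jacobian dtheta'/dtheta by central differences
eps = 1e-6
jac = (odds_rescaling(thetas + eps, gamma)
       - odds_rescaling(thetas - eps, gamma)) / (2 * eps)

lhs = p(thetas)                               # p(theta)
rhs = p(odds_rescaling(thetas, gamma)) * jac  # p(theta') |dtheta'/dtheta|
```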
We next apply symmetry-informed estimation. Let c_1 = 2 without loss of generality, and k = 2 for the error to be quadratic. This turns Eq. (3) into a mean hyperbolic error with f(z) = 2 artanh(2z − 1). Applying this f map to Eq. (14) reveals the translation symmetry f(θ′) = f(θ) + c, with c = log(γ). Weight parameters are thus location-isomorphic. This implies that the best estimation strategy can be found by solving Eq. (9), for which Eq. (6b) takes the form

ρ_{y,l} = 2^l ∫ dθ p(θ) ρ_y(θ) [artanh(2θ − 1)]^l.

Upon computing the eigendecomposition (10), the optimal strategy follows, with the optimal estimator given by the logistic function. Eqs. (16) and (18) are the second result of this work: a quantum metrology for optimal weight estimation. Its application is next illustrated.
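A minimal numerical sketch of this weight metrology, assuming a hypothetical qubit blend ρ_y(θ) = θρ_a + (1 − θ)ρ_b (the states ρ_a, ρ_b and the truncated grid are placeholders, not taken from the text): it builds the moments ρ_{y,l}, solves the Lyapunov equation (9), and maps the spectrum of S_{y,f} through the logistic function:

```python
import numpy as np
from scipy.linalg import solve_sylvester

thetas = np.linspace(0.01, 0.99, 491)
w = 1.0 / (thetas * (1.0 - thetas))           # truncated Haldane prior
w /= w.sum()

rho_a = np.array([[1.0, 0.0], [0.0, 0.0]])    # |0><0| (placeholder)
rho_b = np.array([[0.5, 0.5], [0.5, 0.5]])    # |+><+| (placeholder)
blend = lambda t: t * rho_a + (1 - t) * rho_b

f = lambda t: 2 * np.arctanh(2 * t - 1)       # hyperbolic f map for weights

# moments rho_{y,l} for l = 0, 1
rho0 = sum(wi * blend(t) for wi, t in zip(w, thetas))
rho1 = sum(wi * f(t) * blend(t) for wi, t in zip(w, thetas))

# Lyapunov equation and eigendecomposition
S = solve_sylvester(rho0, rho0, 2 * rho1)
s_vals, s_vecs = np.linalg.eigh(S)            # columns of s_vecs: optimal POM

# optimal estimates: logistic function (inverse of the hyperbolic f map)
estimates = 1.0 / (1.0 + np.exp(-s_vals))
```

The logistic map arises because f(z) = 2 artanh(2z − 1) = log[z/(1 − z)] is the logit function, whose inverse is 1/(1 + e^{−s}).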
We shall now address the optimal estimation of η.
These precision gains can be exploited in practice by optimising individual shots in a finite sequence of them. Imagine, for example, a protocol rendering the measurement outcomes s = (s_1, . . . , s_µ), where s_i = s_±. Following Refs. [16,22], the rule to simultaneously process s into an optimal blend parameter estimate can be written as Eq. (24), where p(θ|s, ŷ) ∝ p(θ) ∏_{i=1}^{µ} p(s_i|θ, ŷ) is Bayes's theorem, and p(s_i|θ, ŷ) = ⟨s_±| ρ_ŷ(θ) |s_±⟩. This a priori optimised approach has already proven useful in Mach-Zehnder interferometry [24], qubit sensing networks [47], and the aforementioned thermometry experiment on cold 41K atoms [27].
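The repeated-shot rule can be sketched on a grid. The two-outcome likelihood below is a hypothetical stand-in for p(s_i|θ, ŷ), and the estimate applies the logistic map to the posterior mean of f, consistent with the weight metrology derived above:

```python
import numpy as np

rng = np.random.default_rng(1)
thetas = np.linspace(0.01, 0.99, 981)
posterior = 1.0 / (thetas * (1.0 - thetas))   # truncated Haldane prior
posterior /= posterior.sum()

true_theta = 0.7                              # ground truth for the simulation

for _ in range(50):                           # mu = 50 shots
    s = int(rng.random() < true_theta)        # simulated outcome s_i
    lik = thetas if s == 1 else 1.0 - thetas  # hypothetical p(s_i|theta)
    posterior = posterior * lik
    posterior /= posterior.sum()              # Bayes's theorem, shot by shot

# estimate: logistic map applied to the posterior mean of f
f = lambda t: 2 * np.arctanh(2 * t - 1)
estimate = 1.0 / (1.0 + np.exp(-np.sum(posterior * f(thetas))))
```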
It is instructive to further compare the performance of the optimal projectors (23) with that of projecting onto the eigenspace of symmetric logarithmic derivatives (SLDs) L_ŷ(η_0), as is done in local estimation [1,37]. Here, η_0 is an initial 'hint' at η, needed because the SLD is parameter dependent (the SLD solves L_ŷ(z)ρ_ŷ(z) + ρ_ŷ(z)L_ŷ(z) = 2∂_z ρ_ŷ(z)). Fig. 1 shows the numerical mean hyperbolic error, as a function of η_0, for the estimator (22a) and three SLD POMs labelled by their azimuthal angle as α_1 = 0 (dot-dashed line), α_2 = π/4 (short-dashed line), and α_3 = π/2 (long-dashed line). For all of them, β = π/2 and a = 0.01. The prior error (ε_{p,f}, dotted line) and the global minimum (ε_{β,min}, solid line) are also shown. As can be seen, the POM α_1 saturates ε_{β,min} at η_0 = 1/2, but it is increasingly less informative as η_0 → 0, 1. The POMs α_2 and α_3 are always suboptimal, and uninformative for η_0 → 0 and η_0 = 1/2, respectively, since the corresponding errors evaluate to ε_{p,f}. Local estimation thus cannot always identify universally optimal measurements.
In summary, using symmetries in metrology can reduce the search for good strategies to finding the form of f and performing a single calculation of the kind in Eqs. (22). Metrological tasks such as identifying fundamental precision limits and informing the design of experimental protocols follow straightforwardly. This is the final result.
Table I. Symmetry-informed metrologies. Phase estimation applies to circular parameters (second column). For quantities isomorphic to location parameters, the prescription in the third column, together with Eqs. (9), (11), and (12), identifies the optimal strategies. It also unifies the metrologies of locations (f(z) = z), scales (f(z) = log(z/z_0), with constant z_0), and weights (f(z) = 2 artanh(2z − 1)), and it gives the theoretical support needed to discover new metrologies under minimal assumptions. Note that m ∈ Z and c ∈ R.
Concluding remarks.-Symmetry-informed estimation is put forward as a universally optimal framework for location-isomorphic metrology. Eqs. (9), (11), and (12) enable the direct calculation of the best estimator and POM, together with the corresponding minimum error. Having made minimal assumptions, these apply to any parameter range, prior information, or state, including multiple copies [40]. Furthermore, fixed-POM estimators such as Eq. (24) indicate that the notion of a location-isomorphic parameter is also relevant for classical measurements. Despite its single-shot formulation, this framework is straightforward to use in practice, either by repeating an a priori optimised strategy, as in Eq. (24), or by using adaptive schemes [17,67] where each shot is optimised by maximising the precision gain (13). In general, this will reduce the number of runs needed to achieve a good precision in experiments measuring location-isomorphic parameters, thus enabling a better allocation of resources.
Combining this framework with phase estimation (Table I) offers an unprecedented extension of the class of exactly solvable problems in Bayesian metrology. This covers ubiquitous quantities such as phases, locations, scales, and weights, but also any other parameter type for which invariance of our initial state of knowledge under Eq. (1) holds. Examples include correlation coefficients ranging from −1 to 1, or cases where invariance under reparametrisations of some statistical model is desired (using information geometry, this leads to f(z) = ∫^z dt F(t)^{1/2}, where F is the Fisher information [68]). Moreover, this capacity to accommodate physical symmetries enables the rigorous application of quantum metrology to fundamental problems such as the detection of dark matter [69,70]. Overall, symmetry-informed estimation is not unlike the use of symmetries to derive the correct Euler-Lagrange equations in theoretical mechanics.
Boeyens, S. Michaels, L. A. Correa, and M. Perarnau-Llobet for helpful comments, and the participants of the QUMINOS workshop for insightful discussions. Parts of this manuscript were written during a visit to the Open Quantum Systems Group at the University of La Laguna. This work was funded by the Surrey Future Fellowship Programme.

Figure 1. Mean hyperbolic errors for the estimation of η in Eq. (19) using local measurements, i.e., given by the eigenstates of the symmetric logarithmic derivative L_ŷ(η_0) [1,44]. Here, η_0 represents an initial guess at η, assumed to lie in the range [0.01, 0.99], and ŷ is a unit vector with azimuthal angle α and polar angle β. The latter is fixed as β = π/2. Three azimuthal angles are chosen: α_1 = 0 (dot-dashed line), α_2 = π/4 (short-dashed line), and α_3 = π/2 (long-dashed line). The prior error and the global minimum as per weight estimation correspond to the dotted and solid lines, respectively. Aside from the error for α_1 at η_0 = 1/2, which saturates the minimum, every other configuration is suboptimal. Worse, no information is sometimes retrieved, as illustrated by the errors for α_2 and α_3 when η_0 → 0 and η_0 = 1/2, respectively. This contrasts with symmetry-informed estimation, which readily identifies Eq. (23) as the globally optimal POM.