Energy-efficient quantum frequency estimation

The problem of estimating the frequency of a two-level atom in a noisy environment is studied. Our interest is to minimise both the energetic cost of the protocol and the statistical uncertainty of the estimate. In particular, we prepare a probe in a "GHZ-diagonal" state by means of a sequence of qubit gates applied on an ensemble of $ n $ atoms in thermal equilibrium. Noise is introduced via a phenomenological time-nonlocal quantum master equation, which gives rise to a phase-covariant dissipative dynamics. After an interval of free evolution, the $ n $-atom probe is globally measured at an interrogation time chosen to minimise the error bars of the final estimate. We model explicitly a measurement scheme which becomes optimal in a suitable parameter range, and are thus able to calculate the total energetic expenditure of the protocol. Interestingly, we observe that scaling up our multipartite entangled probes offers no precision enhancement when the total available energy $ \mathcal{E} $ is limited. This is in stark contrast with standard frequency estimation, where larger probes---more sensitive but also more "expensive" to prepare---are always preferred. Replacing $ \mathcal{E} $ by the resource that places the most stringent limitation on each specific experimental setup would thus help to formulate more realistic metrological prescriptions.


Introduction
While (classical) metrology is concerned with producing the most accurate estimate of some relevant parameter, quantum metrology is aimed at exploiting genuinely quantum traits to go beyond classical metrological limits [1,2,3]. Classically, there would be no difference between running some estimation protocol sequentially N times on one probe, and running the same protocol simultaneously on n (uncorrelated) copies of that probe for M = N/n rounds. Quantum-mechanically, however, such an n-partite probe can be prepared in an entangled state, so that its estimation efficiency grows super-extensively.‡ Here 'super-extensive' stands for faster-than-linear in the probe size, and the 'estimation efficiency' is proportional to the inverse of the mean squared error.
More precisely, under rather weak conditions, the statistical uncertainty of the estimate of some parameter $y = \bar{y} \pm \delta y$ may be tightly lower-bounded as $\delta y \geq 1/\sqrt{M F_y(O)}$ [10,11], where $F_y(O)$ denotes the Fisher information of a sufficiently large number M of measurements of the observable O on the n-partite probe. Importantly, although often disregarded, the length M of the dataset used to build the estimate will always be capped by the limited availability of some essential resource R; that is, if r is the amount of resource consumed per round, M = R/r and hence $\delta y \geq 1/\sqrt{R\,\eta_R}$, where $\eta_R \equiv F_y(O)/r$ is the estimation efficiency. A scaling such as $\eta_R \sim n^c$, with c > 1, would be the hallmark of quantum-enhanced sensing.
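The resource-capped bound above can be made concrete with a small, self-contained numerical sketch (toy numbers, not tied to any physical model; the helper `uncertainty_bound` is ours, introduced only to illustrate $\delta y \geq 1/\sqrt{R\,\eta_R}$):

```python
import numpy as np

# Toy illustration (not the paper's model): lower bound on the
# statistical uncertainty, delta_y >= 1/sqrt(R * eta_R), when a
# resource budget R caps the number of rounds M = R/r.
def uncertainty_bound(fisher_per_round, cost_per_round, budget):
    """delta_y >= 1/sqrt(M * F) with M = budget / cost_per_round."""
    eta = fisher_per_round / cost_per_round    # estimation efficiency eta_R
    return 1.0 / np.sqrt(budget * eta)

# Example: at fixed Fisher information, doubling the per-round resource
# cost worsens the achievable uncertainty by a factor sqrt(2).
b1 = uncertainty_bound(fisher_per_round=4.0, cost_per_round=1.0, budget=100.0)
b2 = uncertainty_bound(fisher_per_round=4.0, cost_per_round=2.0, budget=100.0)
print(b1, b2 / b1)  # 0.05 and sqrt(2) ≈ 1.414
```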
For instance, when it comes to frequency estimation, the total running time T is usually regarded as the resource to be optimally partitioned [12]. Note that, even if features such as the amount of entanglement, coherence [27], or squeezing [28] in the initial state of the probe, or the internal interaction range among its constituents [4,5,6,7,9] could all be regarded as legitimate metrological resources, these do not fit in our framework. That is, even if, e.g., the amount of entanglement in the preparation of an n-partite probe was severely limited in practice, this would not cap the number of rounds M of the estimation protocol-a fresh copy of the same entangled state would be supplied at the start of every iteration until either time, the overall number of probe constituents, or the available energy have been fully consumed.
In our case, we shall look precisely at the total energy consumed E, and show that the notion of optimality that follows from the maximisation of an energy efficiency differs fundamentally from the one based solely on the portioning of the available time. In particular, while the maximisation of a time efficiency encourages the use of multipartite entangled probes with n as large as possible, energetic considerations advise against it: the high costs associated with the creation and manipulation of large multipartite correlated states do not pay off from the metrological viewpoint. In this way, we put into quantitative terms the intuitive notion that multi-particle entanglement-enabled metrology may not always be practical [29].
In particular, as illustrated in figure 1, we consider an ensemble of n initially thermal two-level atoms that are brought, through a sequence of qubit gates, into a sensitive GHZ-diagonal state [30], and then left to evolve freely under the action of time-non-local covariant noise. Specifically, we resort to a phenomenological quantum master equation [31,32,33] which explicitly accounts for memory effects and gives rise to a non-divisible dissipative dynamics [33] (see section 2.2 for full details). We then devise a measurement protocol consisting of a sequence of qubit gates followed by an energy measurement (cf. section 2.3). We further provide the specific measurement setting for which this scheme becomes optimal for frequency estimation in a suitable parameter range (cf. section 2.4). By looking at the changes in the average energy of the probe during the preparation and measurement stages, we explicitly obtain the total energetic cost per round. We find that adjusting the free evolution time so as to maximise the time efficiency of the protocol does lead to a super-extensive scaling in the probe size; specifically $n^{3/2}$, or 'Zeno scaling' [18,19].

[Figure 1. (b) The system is left to evolve freely for a time t under a noisy environment according to a master equation with a memory kernel; this amounts to the action of the phase-covariant channel Λ, which imprints a phase φ = ωt on the qubits while inducing dissipation effects, overall transforming the state of the system into ρ_4. (c) A pre-measurement sequence of qubit rotations, CNOT gates, and a rotated Hadamard on the control qubit is applied, leading to the state ρ_6; each rounded rectangle (ζ) indicates a single-qubit rotation by an angle ζ, described by the unitary $e^{-i\zeta\sigma_z/2}$. The system is finally measured in the energy basis to estimate the frequency ω with optimal efficiency.]
In contrast, the energy efficiency of the very same probe decays monotonically with n, even when the interrogation time is chosen to maximise it (see section 3). Interestingly, note that the observed super-extensive growth of the time efficiency is attained while starting from thermal qubits that are prepared into a GHZ-diagonal state.
In an accompanying article [34], the same super-extensive growth of the time efficiency is found for an arbitrary set of qubits prepared in a GHZ-diagonal state for frequency estimation in a noisy environment. GHZ-diagonal states had been conjectured to be optimal for phase estimation with mixed probes in the absence of noise [30]. Here, we show that they lead to optimal scaling even in a noisy scenario. We also observe that, in our setting, memoryless 'Markovian' dissipative dynamics generally produces less efficient estimates, thus suggesting that memory effects might be beneficial for the energy efficiency of parameter estimation (cf. section 3).

Probe initialisation
The system of interest is an ensemble of n non-interacting two-level atoms thermalised at temperature T, whose frequency ω needs to be estimated. For simplicity of notation we shall set $\hbar$ and the Boltzmann constant $k_B$ to 1 in what follows. Each atom has a Hamiltonian $h = \frac{\omega}{2}\sigma_z$ and is initially in the thermal state $\varrho = \frac{1}{2}(\mathbb{1} - \epsilon\,\sigma_z)$, where the polarization bias $\epsilon = \tanh(\frac{\omega}{2T})$, so that $\varrho \propto \exp(-h/T)$, and $\sigma_z$ denotes the z Pauli matrix. The global Hamiltonian is $H = \frac{\omega}{2} J_z$, where $J_z = \sigma_z \otimes \mathbb{1}^{\otimes(n-1)} + \mathbb{1}\otimes\sigma_z\otimes\mathbb{1}^{\otimes(n-2)} + \cdots + \mathbb{1}^{\otimes(n-1)}\otimes\sigma_z$, and the total initial state is simply $\rho_0 = \varrho^{\otimes n}$, where we have labelled the first atom as c for 'control qubit' while the rest are tagged r, for 'register'.

We shall prepare our n-atom probe in a GHZ-diagonal state by means of a CNOT transformation, followed by a Hadamard gate and a further CNOT [see figure 1(a)] [30]. That is, we first apply the unitary $|0\rangle_c\langle 0| \otimes \mathbb{1}^{\otimes(n-1)} + |1\rangle_c\langle 1| \otimes \sigma_x^{\otimes(n-1)}$ on $\rho_0$. Introducing the notation $\bar{A} \equiv \sigma_x A\, \sigma_x$, this yields the state $\rho_1$. Then, the Hadamard transformation $U_H \equiv \frac{1}{\sqrt{2}}(\sigma_x + \sigma_z) \otimes \mathbb{1}^{\otimes(n-1)}$ acts solely on the control qubit, producing $\rho_2$, and finally the second CNOT transformation leads to the GHZ-diagonal state $\rho_3$, where the missing elements in the matrix representation are just Hermitian conjugates of the opposite corners. The resulting state will subsequently undergo dissipative evolution (cf. section 2.2) before being interrogated.

As we will see in section 2.2, our model of dissipation gives rise to phase-covariant dynamics. It is known that, under this type of noise, the mean squared error of frequency estimates admits a tight lower bound lying below the standard quantum limit [20,19]. It was further shown that this bound is asymptotically saturable by using (pure) GHZ input states. On the other hand, (mixed) GHZ-diagonal states such as $\rho_3$ were found to perform well---and conjectured to be optimal---in noiseless phase estimation with mixed probes [30].
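For concreteness, the preparation circuit can be sketched numerically; a minimal numpy toy for n = 3, assuming the fan-out CNOT convention $|0\rangle_c\langle 0| \otimes \mathbb{1}^{\otimes(n-1)} + |1\rangle_c\langle 1| \otimes \sigma_x^{\otimes(n-1)}$ described above (variable names are ours):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])

n, omega, T = 3, 1.0, 1.0
eps = np.tanh(omega / (2 * T))                  # polarization bias
varrho = 0.5 * (I2 - eps * sz)                  # thermal qubit, prop. to exp(-h/T)
rho0 = reduce(np.kron, [varrho] * n)

# Fan-out CNOT: |0><0|_c (x) 1 + |1><1|_c (x) sx^{(x)(n-1)}
CNOT = (reduce(np.kron, [P0] + [I2] * (n - 1))
        + reduce(np.kron, [P1] + [sx] * (n - 1)))
HAD = reduce(np.kron, [(sx + sz) / np.sqrt(2)] + [I2] * (n - 1))  # Hadamard on control

U = CNOT @ HAD @ CNOT            # first CNOT, then Hadamard, then second CNOT
rho3 = U @ rho0 @ U.conj().T

# rho3 is diagonal in the GHZ-type basis {U|x>}: its spectrum is just
# the set of thermal populations of rho0.
pops = np.sort(np.diag(rho0).real)
eigs = np.sort(np.linalg.eigvalsh(rho3))
print(np.allclose(pops, eigs))   # True
```

Since the circuit is unitary and $\rho_0$ is diagonal in the computational basis, $\rho_3$ is by construction diagonal in the rotated (GHZ-type) basis, which is what the final check confirms.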
In section 3 we will illustrate that the optimal 'Zeno scaling', introduced in references [18,19], can also be attained with such GHZ-diagonal states.
Even though in the present paper we will limit ourselves to GHZ-diagonal preparations, it seems interesting to compare the size scaling of the metrological performance of different preparations. One would certainly find that some preparations may allow for a more energy-efficient estimation than others at fixed probe size. Unfortunately, as we will see below, our calculations rely heavily on the simple analytical structure of GHZ-diagonal states undergoing phase-covariant dissipation. This makes it difficult to extrapolate our results to other initial states.
Finally, note that the energetic cost of this initialisation stage, $E_{\mathrm{init}} = \mathrm{tr}\,\{H(\rho_3 - \rho_0)\}$, is linear in the probe size [cf. equation (6)].

At this point, one may wonder why we do not cool the probes down to the ground state before starting the estimation protocol, so as to work with pure rather than mixed states. This could certainly be done (e.g. by coherent feedback cooling), so long as the corresponding energy cost E_cool is added to the total energetic bookkeeping---just like (6), E_cool would scale linearly in n. Such a cooling stage is anyway not essential, and we will keep it out of the picture in what follows, thus avoiding the need to model it explicitly.
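The linearity of $E_{\mathrm{init}}$ in n can be cross-checked numerically with a quick, self-contained toy (ω = T = 1; the helper `e_init` is ours, not from the paper):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])

def e_init(n, omega=1.0, T=1.0):
    """tr{H (rho3 - rho0)} for the CNOT-Hadamard-CNOT preparation."""
    eps = np.tanh(omega / (2 * T))
    rho0 = reduce(np.kron, [0.5 * (I2 - eps * sz)] * n)
    CNOT = (reduce(np.kron, [P0] + [I2] * (n - 1))
            + reduce(np.kron, [P1] + [sx] * (n - 1)))
    HAD = reduce(np.kron, [(sx + sz) / np.sqrt(2)] + [I2] * (n - 1))
    U = CNOT @ HAD @ CNOT
    rho3 = U @ rho0 @ U.conj().T
    # Global Hamiltonian H = (omega/2) * sum_j sigma_z^(j)
    H = omega / 2 * sum(
        reduce(np.kron, [sz if k == j else I2 for k in range(n)])
        for j in range(n))
    return np.trace(H @ (rho3 - rho0)).real

costs = [e_init(n) for n in (2, 3, 4)]
print(costs)  # positive, and growing linearly with n
```

For these parameters the numbers come out as $n\,\omega\,\epsilon/2$, i.e. strictly linear in n, consistent with the claim above.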

Free evolution
2.2.1. Phenomenological master equation---In order to account for the environmental effects in our probe, we will assume that each atom evolves according to a time-nonlocal master equation [see figure 1(b)] with a phenomenological exponentially-decaying memory kernel [31]. The reason for this choice is that the resulting dissipative dynamics is phase-covariant, as opposed to the one following from a more canonical setting, such as the spin-boson model [35,21]. This will eventually allow us to establish a connection with known results in the literature [20]. Moreover, due to its simplicity, the model considered here can be solved exactly.
At this point, one may still wonder why not choose an arguably more realistic non-covariant noise model derived from first principles, as in reference [21]. It must be noted that---unlike in [21]---we need to know the explicit form of the time-evolved state for arbitrarily large probes. This is a prerequisite for gauging the energy cost of the measurement stage and, eventually, for assessing the asymptotic scaling of the overall estimation efficiency. A noise model lacking the "niceties" of covariant channels would not only compromise our ability to analytically evolve the state of the probe, but would also likely render our proposed measurement scheme sub-optimal. On the plus side, however, covariant dissipation follows quite naturally from generic noise models whenever the ubiquitous rotating-wave approximation is well justified [35,21]. Furthermore, as can be seen by comparing [20] with [34] and our results below, the details of the specific covariant dissipation model do not seem to affect the qualitative asymptotic features of the estimation protocol.

2.2.2. Connection to the damped Jaynes-Cummings model---The seemingly arbitrary choice of memory kernel in equation (7) may be justified by considering the damped Jaynes-Cummings model on resonance; that is, a two-level atom in an empty and leaky cavity. This setup can be effectively described by the Hamiltonian

$H_{\mathrm{tot}} = \frac{\omega}{2}\sigma_z + \sum_\mu \omega_\mu\, a_\mu^\dagger a_\mu + \sum_\mu g_\mu\,(\sigma_+ a_\mu + \sigma_- a_\mu^\dagger),$

where $\sigma_\pm$ are the raising and lowering operators of the atom, $a_\mu$ the cavity modes, and the system-bath coupling constants $g_\mu$ make up the Lorentzian spectral density [35,31]. Assuming weak coupling, the use of a second-order Nakajima-Zwanzig master equation [41,42,35] is justified. This reads

$\frac{\mathrm{d}}{\mathrm{d}t}\tilde{\rho}(t) = -\int_0^t \mathrm{d}s\; \mathrm{tr}_B\,\big[\tilde{H}_I(t), \big[\tilde{H}_I(s),\, \tilde{\rho}(s)\otimes B\big]\big],$

where the interaction-picture Hamiltonian is $\tilde{H}_I(t) = e^{iH_0 t} H_I\, e^{-iH_0 t}$, with $H_0$ the free (atom plus bath) part and $H_I$ the coupling term. The state of the environment and the trace over its degrees of freedom are denoted by B and $\mathrm{tr}_B$, respectively.
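For reference, the excited-state survival probability of the resonant damped Jaynes-Cummings model with a Lorentzian spectral density has the well-known textbook closed form $|G(t)|^2$, with $G(t) = e^{-\lambda t/2}\,[\cosh(dt/2) + (\lambda/d)\sinh(dt/2)]$ and $d = \sqrt{\lambda^2 - 2\gamma_0\lambda}$, where $\gamma_0$ is the Markovian decay rate and $\lambda$ the spectral width (inverse memory time). The hedged sketch below, quoting that standard result rather than the paper's own solution, checks its memoryless limit:

```python
import numpy as np

def survival(t, gamma0, lam):
    """|G(t)|^2 for the resonant damped Jaynes-Cummings model
    (standard textbook exact solution; complex d handles lam < 2*gamma0)."""
    d = np.sqrt(complex(lam**2 - 2 * gamma0 * lam))
    G = np.exp(-lam * t / 2) * (np.cosh(d * t / 2)
                                + (lam / d) * np.sinh(d * t / 2))
    return abs(G) ** 2

# lam >> gamma0: memoryless (Markovian) limit, |G|^2 -> exp(-gamma0 * t)
t = 1.0
markovian = survival(t, gamma0=1.0, lam=100.0)
print(markovian, np.exp(-1.0))   # nearly equal

# lam < 2*gamma0: d becomes imaginary and |G|^2 oscillates (memory effects)
print(survival(0.5, gamma0=5.0, lam=1.0))
```

This makes concrete the interpretation of λ used later in the paper: large λ means short bath memory and (quasi-)Markovian decay.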

Probe readout
Before the probe is interrogated, it will need to undergo a pre-measurement stage, consisting of a sequence of three unitaries: first, each atom is rotated by an angle $\zeta_1$ via $U_{\zeta_1}^{\otimes n}$; then, a CNOT transformation and the generalised Hadamard gate are sequentially applied [see figure 1(c)]. An energy measurement can then be performed on the probe in order to build the frequency estimate. As we shall argue in section 2.4 below, in the limit $R\lambda \ll 1$, the angles $(\zeta_1, \zeta_2)$ may be chosen so that the statistical uncertainty of the resulting estimate is (nearly) minimal.
Let us thus obtain the probabilities associated with an energy measurement on the final state of the probe. The state after $U_{\zeta_1}^{\otimes(n-1)}$ and the CNOT transformation reads $\rho_5$, with $\varphi \equiv \omega t + \zeta_1$; i.e. the action of $U_{\zeta_1}^{\otimes(n-1)}$ amounts to replacing $\varphi \to \varphi + \zeta_1$ in (17). It will be more convenient to cast $\rho_5$ in an alternative form. To that end, note the binary expansion of $\varrho^{\otimes l}$; generalising to an arbitrary power l yields equation (20), where $x_l$ stands for the l-digit binary representation of x and h(x) denotes the number of non-zero digits in $x_l$ (i.e. its Hamming weight). In turn, $\bar{x}_l$ represents the bitwise negation of $x_l$. Care must be taken not to confuse the scalar function h(·) with the single-atom Hamiltonian h, nor the bitwise negation $\bar{x}_l$ with the map $\bar{\varrho} = \sigma_x \varrho\, \sigma_x$. Quantities such as $\Lambda[\bar{\varrho}]^{\otimes l}$, $\Lambda[\varrho]$, and $\Lambda[\bar{\varrho}]$ follow from equation (20) by the corresponding replacements of the bias $\epsilon$ and of the coefficients $\alpha$ and $\beta$, while $\sigma_x^{\otimes l}|x_l\rangle = |\bar{x}_l\rangle$.
Putting together all of the above and dropping the sub-indices l = n−1 in the interest of a lighter notation yields the compact form (22) of $\rho_5$, together with the accompanying definitions of its coefficients. Similarly, one obtains the final state of the protocol, $\rho_6$ [equation (24)]. Therefore, a measurement of $\rho_6$ in the energy basis $\{|0\rangle\otimes|x\rangle,\, |1\rangle\otimes|x\rangle\}$ has the associated probabilities (26), where all eigenvectors with the same number of 1s [i.e. h(x)] on the register yield the same probability. Equation (26) will be used below to obtain a saturable lower bound on the mean squared error of the resulting frequency estimate.

We now look into the energetic cost of the pre-measurement stage, $E_{\mathrm{meas}} = E(\rho_6) - E(\rho_4)$. Let us re-write the system Hamiltonian in the same notation as equations (22) and (24); the sub-indices m then indicate the Hamming weight m = h(x) of the argument x of the corresponding coefficients, i.e. $c_x$ and $f_x$. At our optimal prescription $(\zeta_1, \zeta_2)$, the pre-measurement energetic cost is always positive, $E_{\mathrm{meas}} > 0$.

Note that we are deliberately leaving the projective part of the measurement out of our energetic bookkeeping. In some setups, such as nuclear magnetic resonance, this could be justified, as projective measurements are mimicked by suitable rotations followed by free decay. In other cases it may be necessary to supplement $E_{\mathrm{meas}}$ with a 'projection cost' $E_{\mathrm{proj}}$. Similarly, depending on the specific projection model, the sharp probabilities in equation (26) might need to be modified---a 'measurement apparatus' at some finite temperature would arguably introduce thermally distributed random bit flips during readout, thus making the measurement noisy. Neither the potential extra cost nor the errors in the interrogation would qualitatively affect our results.
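The probabilities (26) enter the estimate through the classical Fisher information, $F = \sum_x (\partial_\omega p_x)^2/p_x$. As a self-contained illustration of that computation (using simple Ramsey-type two-outcome statistics rather than the paper's $p_x$, which we do not reproduce here):

```python
import numpy as np

def fisher(prob_fn, omega, dw=1e-6):
    """Classical Fisher information F = sum_x (dp_x/domega)^2 / p_x,
    with a central finite difference for the derivative."""
    p = prob_fn(omega)
    dp = (prob_fn(omega + dw) - prob_fn(omega - dw)) / (2 * dw)
    mask = p > 1e-12                       # skip zero-probability outcomes
    return float(np.sum(dp[mask] ** 2 / p[mask]))

# Toy two-outcome (Ramsey-like) statistics p = (1 +/- cos(omega*t))/2,
# for which the Fisher information is exactly t^2.
t = 2.0
probs = lambda w: np.array([(1 + np.cos(w * t)) / 2,
                            (1 - np.cos(w * t)) / 2])
F_cl = fisher(probs, omega=0.7)
print(F_cl)   # ≈ t**2 = 4.0
```

The same routine applies verbatim to any outcome distribution, including one grouped by Hamming weight as in (26).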
While very general models of projective measurement schemes, and thermodynamic analyses thereof, may be found in the literature (see e.g. references [43,44,45,46,47,48,49], just to mention some), it is not our intention to make generic statements about the energy efficiency of frequency estimation. Instead, we settle for showing how looking at the energetic aspect of parameter estimation in a specific example can in fact change dramatically the usual notions of metrological optimality.

The error of the estimate is quantified through the classical Fisher information $F_\omega(H)$, built from the outcome probabilities (26) and their ω-derivatives. When evaluating these derivatives, one must bear in mind that $R = \gamma_0/\lambda$ does depend on ω, as $\epsilon = \tanh(\frac{\omega}{2T})$. However, in our model $F_\omega(H)$ may be well approximated by taking R and $\epsilon$ as constants in the limit $R\lambda \ll 1$; this yields the approximate expression (31). For even n, the measurement setting $(\zeta_1, \zeta_2) = (\frac{\pi}{2} - \tilde{\omega}t, \frac{\pi}{2})$ maximises $F_\omega(H)$, while for odd n one needs to choose $(\zeta_1, \zeta_2) = (\frac{\pi}{2} - \tilde{\omega}t, 0)$. Note that $\tilde{\omega}$ should not be thought of as a variable, but as the best available estimate of the atomic frequency at any given stage. As the knowledge about ω is refined, the value of $\tilde{\omega}$ should be updated, and the measurement setting adaptively modified. Although it may seem counter-intuitive, undoing the precession $U_{\tilde{\omega}t}^{\otimes n}$ on all atoms after the free evolution improves the sensitivity to small fluctuations of ω around $\tilde{\omega}$ and thus helps to reduce δω.

The ultimate precision is set by the quantum Fisher information (QFI) $F_\omega$ [51,52]. This can be computed from the state $\rho_4$ right after the free evolution stage or, equivalently, from $\rho_5$, as $F_\omega$ is invariant under unitary transformations. The QFI is given in terms of the spectral decomposition of $\rho_5$ [53],

[Figure 3. (a) Efficiency $\eta_T(t^\star, n) = F_\omega/t^\star$ at the optimal interrogation time $t^\star$ as a function of the probe size n, in the standard frequency-estimation scenario of limited time T. In spite of the fact that the probe is prepared in a mixed GHZ-diagonal state, the efficiency grows super-extensively (see inset), as $\eta_T(t^\star, n) \sim n^{3/2}$, which corresponds to Zeno scaling. (b) Energy efficiency $\eta_E(t^\star, n) = F_\omega/(E_{\mathrm{init}} + E_{\mathrm{meas}})$ at the optimal interrogation time $t^\star$ as a function of the probe size n, for the same parameters as (a). In this case, one roughly has $\eta_E(t^\star, n) \sim n^{-1/3}$; i.e. from an energetic perspective, using large entangled probes yields no metrological advantage. (c) For n = 2, decay of $\eta_E$ at $t^\star$ as λ grows; that is, in our model, longer memory times yield more energy-efficient frequency estimation than purely Markovian dissipation. All parameters are the same as in figure 2.]
where $\nu_x^\pm$ and $|\Xi_x^\pm\rangle$ are the eigenvalues and eigenvectors of $\rho_5$. Specifically, $\nu_x^\pm = \frac{1}{2}(a_x + b_x \pm \Delta_x)$, where $\Delta_x \equiv \sqrt{(a_x - b_x)^2 + 4c_x^2}$. Once again, we place ourselves in the limit of small $R\lambda$ and find that $\langle \Xi_x^\pm|\,\partial_\omega \rho_5\,|\Xi_x^\mp\rangle = 0$, and thus obtain the limiting QFI (34), which exactly coincides with the maximum of equation (31). Therefore, our proposed measurement setting is indeed optimal for $R\lambda \ll 1$. For arbitrary $R\lambda$, however, $F_\omega$ can be significantly larger than its limiting value (34). It may even be impossible to find a pair $(\zeta_1, \zeta_2)$ such that $F_\omega(H) = F_\omega$. Nevertheless, the exact $F_\omega(H)$ always coincides with (34) at $\zeta_1 = \frac{\pi}{2} - \tilde{\omega}t$ and $\zeta_2 \in \{\frac{\pi}{2}, 0\}$, even when this measurement setting is sub-optimal. This point is illustrated in figure 2(a).
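The spectral-decomposition route to the QFI can be sketched numerically. The snippet below uses the standard formula $F_Q = 2\sum_{k,l} |\langle k|\partial_\omega\rho|l\rangle|^2/(\nu_k + \nu_l)$ on a single dephased qubit (a stand-in for illustration, not $\rho_5$ itself), for which the known answer is $(rt)^2$:

```python
import numpy as np

def qfi(rho_fn, omega, dw=1e-6):
    """Quantum Fisher information via the spectral decomposition:
    F_Q = 2 * sum_{k,l} |<k| d_omega rho |l>|^2 / (nu_k + nu_l)."""
    drho = (rho_fn(omega + dw) - rho_fn(omega - dw)) / (2 * dw)
    nu, V = np.linalg.eigh(rho_fn(omega))
    F = 0.0
    for k in range(len(nu)):
        for l in range(len(nu)):
            if nu[k] + nu[l] > 1e-12:      # skip the singular kernel terms
                F += 2 * abs(V[:, k].conj() @ drho @ V[:, l]) ** 2 / (nu[k] + nu[l])
    return F

# Single qubit with Bloch vector of length r precessing in the xy-plane
# under exp(-i*omega*t*sigma_z/2): the known result is F_Q = (r*t)^2.
r, t = 0.6, 2.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def rho(w):
    U = np.diag(np.exp([-1j * w * t / 2, 1j * w * t / 2]))
    return U @ (0.5 * (np.eye(2) + r * sx)) @ U.conj().T

F_q = qfi(rho, omega=0.8)
print(F_q)   # ≈ (r*t)**2 = 1.44
```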

Results and discussion
Recall that, in our scheme, the number of data points M that enters the inequality $\delta\omega \geq 1/\sqrt{M F_\omega(H)}$ is limited by the available energy E as $M = E/(E_{\mathrm{init}} + E_{\mathrm{meas}})$. We can thus define the energy efficiency $\eta_E(t, n) \equiv F_\omega(H)/(E_{\mathrm{init}} + E_{\mathrm{meas}})$. Note that we use $F_\omega(H)$ and $F_\omega$ interchangeably since, for $R\lambda \ll 1$, the QFI becomes saturable with our optimal measurement prescriptions.
We will proceed to maximise $\eta_E(t, n)$ in two steps: first, for given n, we shall find the optimal interrogation time $t^\star$; then, we will look at the scaling of $\eta_E(t^\star, n)$ with the probe size. From equations (6), (28), (29), and (34), $t^\star$ can be found numerically. As shown in figure 2(b), it has a power-law-like dependence on the probe size, $\omega t^\star \propto n^{-c}$ with $c \simeq 1$ (for $R\lambda \ll 1$).

Let us now place ourselves in the standard scenario, in which the total time T is the scarce resource to 'economise' on. As usual, we shall work in the limit $R\lambda \ll 1$ and denote the corresponding optimal sampling time by $t^\star$. In figure 3(a) we illustrate that $\eta_T(t^\star, n)$ can scale super-extensively under our time-inhomogeneous dissipative dynamics---even if we start from (mixed) thermal probes. Specifically, we recover the Zeno scaling $(\delta\omega)^2 \sim 1/n^{3/2}$ [19,18].
What figure 3(a) suggests is that, if a large number N of two-level atoms were available, it would be sensible to batch them together in an entangled GHZ-diagonal state and partition the available running time T into prepare-and-measure segments of length $t^\star$---the larger the probe, the better the resulting estimate.
In contrast, figure 3(b) tells a completely different story: when adopting an entangled GHZ-diagonal preparation, the efficiency $\eta_E(t^\star, n)$ decreases rapidly as the probe is scaled up in size (in this case $\eta_E(t^\star, n) \sim n^{-1/3}$, although the exponent is non-universal). This is so because, while $(E_{\mathrm{init}} + E_{\mathrm{meas}}) \sim n$, the QFI exhibits a slower power-law-like growth. Hence, if there was a cap on the total available energy E, one could produce a more accurate frequency estimate by manipulating the uncorrelated atoms locally rather than attempting to build such an 'expensive' entangled state. Our numerics show that this qualitative behaviour persists even if we move away from the regime $R\lambda \ll 1$ and search for the measurement setting $(\zeta_1, \zeta_2)$ and interrogation time t which jointly maximise $\eta_E(t, n)$.

Another natural question to ask in this setting is whether the environmental memory time plays any role in the energy efficiency of frequency estimation. In figure 3(c) we illustrate how $\eta_E(t^\star, n)$ decays with λ at any given n. Recall from equation (7) that increasing λ corresponds to reducing the bath memory time, thus making the dissipation 'more Markovian'. Our setting thus showcases how memory effects in the dissipative dynamics can improve the performance of a specific parameter-estimation task. Elucidating whether memory effects play an instrumental role in energy-efficient frequency estimation requires a more general analysis that we defer to future work.
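The qualitative dichotomy between the two figures of merit can be reproduced in a deliberately crude toy model (ours, not the paper's master equation): take $F_\omega(t, n) = n^2 t^2 e^{-2n(\gamma t)^2}$, i.e. GHZ-type sensitivity under Zeno-regime dephasing, and a per-round cost linear in n. Maximising F/t reproduces the $n^{3/2}$ Zeno scaling of $\eta_T$, while F/cost stays flat, i.e. scaling up brings no energetic advantage (in the paper's full model $\eta_E$ even decays, roughly as $n^{-1/3}$):

```python
import numpy as np

gamma = 0.1   # dephasing rate (toy value)

def fisher(t, n):
    # Toy QFI: GHZ-type n^2 t^2 sensitivity damped by Zeno-regime
    # dephasing exp(-2 n (gamma t)^2). NOT the paper's exact model.
    return n**2 * t**2 * np.exp(-2 * n * (gamma * t) ** 2)

ts = np.linspace(1e-3, 50, 20000)
ns = np.arange(2, 200)
eta_T = np.array([np.max(fisher(ts, n) / ts) for n in ns])   # time efficiency
eta_E = np.array([np.max(fisher(ts, n)) / n for n in ns])    # energy efficiency (cost ~ n)

# Log-log slopes of the two efficiencies versus n:
slope_T = np.polyfit(np.log(ns), np.log(eta_T), 1)[0]
slope_E = np.polyfit(np.log(ns), np.log(eta_E), 1)[0]
print(slope_T, slope_E)  # ≈ 1.5 (Zeno scaling) and ≈ 0 (no gain)
```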

Conclusions
We have studied the problem of noisy frequency estimation when the total available energy E is limited. In each round of our estimation protocol, an ensemble of n initially thermal two-level atoms is brought into a GHZ-diagonal form by means of a simple sequence of qubit gates. We quantified the energetic cost of the preparation stage, $E_{\mathrm{init}}$, by looking at the ensuing increase in the average energy of the probe.
The system is then allowed to evolve freely under the effect of environmental noise. This is modelled by a phenomenological master equation with built-in memory effects, which gives rise to phase-covariant free dissipative dynamics.
After further qubit operations, an energy measurement is eventually performed on the probe. We showed that, in a suitable range of parameters, these operations can be chosen so as to globally minimise the statistical uncertainty of the final frequency estimate. We also provided the corresponding optimal measurement prescription explicitly. The cost associated with the (pre-)measurement stage, $E_{\mathrm{meas}}$, can also be readily calculated from the change in the average energy of the probe, thus allowing for a comprehensive energetic bookkeeping in each round of the protocol.
We introduced the notion of energy efficiency of the estimation, $\eta_E = F_\omega(H)/(E_{\mathrm{init}} + E_{\mathrm{meas}})$, as a means to assess the overall performance of the estimation protocol when there is a cap on the total energy E. We further found the optimal free-evolution time $t^\star$ maximising $\eta_E(t, n)$, and noticed that preparing larger probes in entangled GHZ-diagonal states is always detrimental to the energy efficiency of frequency estimation.
In the standard scenario, one assumes that the most restrictive constraint is instead the limited running time T of the estimation protocol and resorts to the figure of merit $\eta_T = F_\omega(H)/t$. This grows monotonically with n when optimised over the free evolution time of the probe, thus suggesting that large multipartite entangled probes are, in principle, better. This is so because a figure of merit like $\eta_T$ fails to capture how 'difficult' or 'costly' it may be to prepare those states in practice. Incorporating the energetic dimension into the performance assessment through our $\eta_E$ may be the simplest way to quantitatively account for this 'difficulty'.
It is true that tracking the average energy changes of the probe may be a crude way of capturing the actual limitations in force in real metrological setups. Likewise, in many situations, the total time T might indeed place the most stringent limitation on the achievable precision, thus rendering other considerations irrelevant. Our observation merely highlights the importance of formulating quantifiers of the metrological efficiency that faithfully capture all the relevant constraints in place in each specific scenario.
We also showed that, at any probe size, $\eta_E(t^\star, n)$ decays monotonically with the inverse bath memory time λ, hence suggesting that large bath correlation times might be a resource for energy-efficient frequency estimation. This point certainly deserves a deeper and more general investigation.
Our intended take-home message is that different assessments of resources lead to different notions of optimality. Hence, in order to produce practically useful metrological bounds, the stress should be placed on searching for those figures of merit capable of capturing the most stringent limitations at work in each experimental setup.
To conclude, it is important to remark that we did not optimise our energy efficiency over the initial state of the probe but rather, adopted the GHZ-diagonal preparation as a working assumption. The question of whether or not other forms of multipartite sharing of correlations could give rise to a more energetically favourable scaling remains open and certainly deserves further investigation.