
Valid lower bound for all estimators in quantum parameter estimation


Published 6 September 2016 © 2016 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft
Citation: Jing Liu and Haidong Yuan 2016 New J. Phys. 18 093009. DOI: 10.1088/1367-2630/18/9/093009


Abstract

The widely used quantum Cramér–Rao bound (QCRB) sets a lower bound on the mean square error of unbiased estimators in quantum parameter estimation; in general, however, the QCRB is tight only in the asymptotic limit. With a limited number of measurements, biased estimators can achieve far better performance, which the QCRB cannot calibrate. Here we introduce a lower bound valid for all estimators, biased or unbiased, which can serve as a standard of merit for quantum parameter estimation.


Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

An important task in quantum metrology is to identify the ultimate achievable precision limit and to design schemes that attain it. This turns out to be a hard task, and one often has to resort to various lower bounds to gauge the performance of heuristic approaches, such as the quantum Cramér–Rao bound [1–4], the quantum Ziv–Zakai bound [5], quantum measurement bounds [6] and the Weiss–Weinstein family of error bounds [7]. Among these the quantum Cramér–Rao bound (QCRB) is the most widely used lower bound for unbiased estimators [8–30]. However, with a limited number of measurements many practical estimators are biased. For example, the minimum mean square error (MMSE) estimator, given by the posterior mean $\hat{x}(y)=\int p(x| y){x}{\rm{d}}{x}$ [34], is in general biased in the finite regime. Here x denotes the parameter and y the measurement results; the posterior distribution $p(x| y)$ is obtained from Bayes' rule $p(x| y)=\tfrac{p(y| x)p(x)}{\int p(y| x)p(x){\rm{d}}{x}},$ with $p(x)$ the prior distribution of x and $p(y| x)=\Tr({\rho }_{x}{M}_{y})$ given by Born's rule. The MMSE estimator provides the minimum mean square error

$$\mathrm{MSE}(\hat{x})=\iint p(x)\,p(y|x)\,[\hat{x}(y)-x]^{2}\,{\rm{d}}{x}\,{\rm{d}}{y}.\qquad(1)$$

The performance of this estimator, however, cannot be calibrated by the quantum Cramér–Rao bound in the finite regime, since with a limited number of measurements it is usually biased. The same holds for many other estimators, including the commonly used maximum likelihood estimator [27–30].
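As a concrete illustration of the posterior-mean construction above, the following is a minimal numerical sketch in Python. The binary-outcome likelihood $p(y=1|x)={\sin }^{2}(x/2)$ and the uniform prior on $(0,\pi )$ are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Minimal sketch of the MMSE (posterior-mean) estimator on a grid.
# The likelihood p(y=1|x) = sin^2(x/2) and the uniform prior on (0, pi)
# are illustrative assumptions.
xs = np.linspace(0.0, np.pi, 2001)
dx = xs[1] - xs[0]
prior = np.full_like(xs, 1.0 / np.pi)       # uniform prior p(x)

def likelihood(y):
    p1 = np.sin(xs / 2) ** 2                # assumed model p(y=1|x)
    return p1 if y == 1 else 1.0 - p1

def mmse_estimate(y):
    # Bayes' rule: p(x|y) ∝ p(y|x) p(x); the MMSE estimate is the posterior mean.
    post = likelihood(y) * prior
    post /= post.sum() * dx
    return (post * xs).sum() * dx

# Mean square error averaged over outcomes and the prior:
# MSE = sum_y ∫ p(x) p(y|x) (xhat(y) - x)^2 dx
mse = sum((prior * likelihood(y) * (mmse_estimate(y) - xs) ** 2).sum() * dx
          for y in (0, 1))
print(mse)
```

Since the measurement is informative, the resulting MSE is strictly below the prior variance ${\pi }^{2}/12$.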

In this paper we derive an optimal biased bound (OBB) which sets a valid lower bound for all estimators in quantum parameter estimation, either biased or unbiased. This bound holds for an arbitrary number of measurements and can thus be used to gauge the performance of any estimator. The difference between this bound and the quantum Cramér–Rao bound also indicates when the latter can be safely used, i.e., it provides a way to estimate the number of measurements needed to enter the asymptotic regime in which the quantum Cramér–Rao bound applies. The classical optimal biased bound has been used in classical signal processing [35, 36].

2. Main result

Based on different assumptions there exist different ways of deriving lower bounds; for example, several Bayesian quantum Cramér–Rao bounds, based on a quantum version of the Van Trees inequality, have been obtained [31–33]. These bounds require the differentiability of the prior distribution at the boundary of its support, and thus may not apply, for example, to a uniform prior distribution. The optimal biased bound does not require this differentiability and can therefore be applied more broadly. For completeness, we first follow the treatment of Helstrom [1] to derive a lower bound for estimators with a fixed bias, from which we then obtain a valid lower bound for all estimators by optimizing over the bias.

We consider the general case of estimating a function $f(x)$ of the parameter of interest x with a given prior distribution. To make any estimate, one first performs measurements on the state ${\rho }_{x}$, described in general by a positive operator-valued measure (POVM) $\{{{\rm{\Pi }}}_{y}\}$. The measurements have probabilistic outcomes y with probability $p(y| x)=\Tr ({{\rm{\Pi }}}_{y}{\rho }_{x})$. An estimator $\hat{f}(y)$, based on the measurement results y, has mean $E(\hat{f}(y)| x)=\int \hat{f}(y)\Tr ({\rho }_{x}{{\rm{\Pi }}}_{y}){\rm{d}}{y}=f(x)+b(x),$ where $b(x)$ is the bias of the estimation. This equation can be written in another form

$$\int \hat{f}(y)\,\Tr ({\rho }_{x}{{\rm{\Pi }}}_{y})\,{\rm{d}}{y}=E(x),\qquad(2)$$

where we use $E(x)$ as a shorthand for $E(\hat{f}(y)| x)$, which equals $f(x)+b(x)$ and depends only on x. Assuming the prior distribution is $p(x)$, the mean square error then takes the form

$$\mathrm{MSE}(\hat{f})=\int p(x)\left[\delta {\hat{f}}^{2}+{b}^{2}(x)\right]{\rm{d}}{x},\qquad(3)$$

where $\delta {\hat{f}}^{2}=\int {(\hat{f}(y)-E(x))}^{2}\Tr ({\rho }_{x}{{\rm{\Pi }}}_{y}){\rm{d}}{y}$ is the variance of $\hat{f}(y)$.

Differentiating equation (2) with respect to x and using the fact that $\int E^{\prime} (x)\Tr ({\rho }_{x}{{\rm{\Pi }}}_{y}){\rm{d}}{y}=E^{\prime} (x)$, with $E^{\prime} (x):= \partial E/\partial x$, we get

$$\int [\hat{f}(y)-E(x)]\,\Tr ({\partial }_{x}{\rho }_{x}\,{{\rm{\Pi }}}_{y})\,{\rm{d}}{y}=E^{\prime} (x).\qquad(4)$$

Now multiply both sides of equation (4) by $p(x)$ and substitute into it the equation

$${\partial }_{x}{\rho }_{x}=\frac{1}{2}\left(L{\rho }_{x}+{\rho }_{x}L\right),\qquad(5)$$

here L is the symmetric logarithmic derivative of ${\rho }_{x}$, defined as the solution of equation (5). We then obtain

$$p(x)\,\mathrm{Re}\int [\hat{f}(y)-E(x)]\,\Tr ({\rho }_{x}L{{\rm{\Pi }}}_{y})\,{\rm{d}}{y}=p(x)E^{\prime} (x),\qquad(6)$$

where $\mathrm{Re}(\cdot )$ denotes the real part. Multiplying both sides by a real function $z(x)$ and integrating with respect to x gives

$$\mathrm{Re}\int {\rm{d}}{x}\,p(x)z(x)\int [\hat{f}(y)-E(x)]\,\Tr ({\rho }_{x}L{{\rm{\Pi }}}_{y})\,{\rm{d}}{y}=\int p(x)z(x)E^{\prime} (x)\,{\rm{d}}{x}.\qquad(7)$$

Now we denote $A=\sqrt{p(x)}(\hat{f}(y)-E(x))\sqrt{{\rho }_{x}{{\rm{\Pi }}}_{y}}$ and $B=\sqrt{p(x)}z(x)\sqrt{{\rho }_{x}}L\sqrt{{{\rm{\Pi }}}_{y}}$; then the left-hand side of the above equation can be rewritten as $\mathrm{Re}\int {\rm{d}}{x}\int \Tr ({A}^{\dagger }B){\rm{d}}{y}$. Therefore, equation (7) takes the form

$$\mathrm{Re}\int {\rm{d}}{x}\int \Tr ({A}^{\dagger }B)\,{\rm{d}}{y}=\int p(x)z(x)E^{\prime} (x)\,{\rm{d}}{x}.\qquad(8)$$

Using the Schwarz inequality we have

$${\left[\mathrm{Re}\int {\rm{d}}{x}\int \Tr ({A}^{\dagger }B)\,{\rm{d}}{y}\right]}^{2}\ \le\ \int {\rm{d}}{x}\int \Tr ({A}^{\dagger }A)\,{\rm{d}}{y}\,\int {\rm{d}}{x}\int \Tr ({B}^{\dagger }B)\,{\rm{d}}{y},$$

where for the two factors on the right-hand side we use the facts that

$$\int {\rm{d}}{x}\int \Tr ({A}^{\dagger }A)\,{\rm{d}}{y}=\int p(x)\,\delta {\hat{f}}^{2}\,{\rm{d}}{x}\qquad(9)$$

and

$$\int {\rm{d}}{x}\int \Tr ({B}^{\dagger }B)\,{\rm{d}}{y}=\int p(x){z}^{2}(x)J({\rho }_{x})\,{\rm{d}}{x}.\qquad(10)$$

Here $J({\rho }_{x})=\Tr ({\rho }_{x}{L}^{2})$ is the quantum Fisher information [1, 2]. From the above equations we obtain

$$\int p(x)\,\delta {\hat{f}}^{2}\,{\rm{d}}{x}\ \ge\ \frac{{\left[\int p(x)z(x)E^{\prime} (x)\,{\rm{d}}{x}\right]}^{2}}{\int p(x){z}^{2}(x)J({\rho }_{x})\,{\rm{d}}{x}},\qquad(11)$$

which is valid for any $z(x)$ satisfying $\int p(x){z}^{2}(x)J({\rho }_{x})\,{\rm{d}}{x}\gt 0$. Assuming $J({\rho }_{x})$ is strictly positive, i.e., $J({\rho }_{x})\gt 0$, and letting $z(x)=E^{\prime} (x)/J({\rho }_{x})$, we obtain

$$\int p(x)\,\delta {\hat{f}}^{2}\,{\rm{d}}{x}\ \ge\ \int p(x)\frac{{[E^{\prime} (x)]}^{2}}{J({\rho }_{x})}\,{\rm{d}}{x}.\qquad(12)$$

From equation (3) we then get the lower bound for the mean square error

$$\mathrm{MSE}(\hat{f})\ \ge\ \int p(x)\left\{\frac{{[f^{\prime} (x)+b^{\prime} (x)]}^{2}}{J({\rho }_{x})}+{b}^{2}(x)\right\}{\rm{d}}{x}.\qquad(13)$$

When $b(x)=0$, i.e., for unbiased estimators, the bound reduces to a Bayesian Cramér–Rao bound [31] (another Bayesian QCRB, using the left logarithmic derivative, is given in [32]). Furthermore, if $f(x)=x$, the bound reduces to the familiar Cramér–Rao form [3]. If we consider $f(x)=x$ with a uniform prior distribution, the above bound can be regarded as the quantum version of the biased Cramér–Rao bound [1]. The bound in equation (13) vividly displays the tradeoff between the variance and the bias of the estimate: at one extreme, letting $b(x)=0$, unbiased estimates minimize the term ${b}^{2}(x)$ while the first term is fixed; at the other extreme, letting $b(x)=-f(x)$ minimizes the first term but fixes the bias at ${b}^{2}(x)={f}^{2}(x)$. The actual minimum of this bound lies somewhere between these two extremes, which provides a lower bound for all estimators.

To obtain a valid lower bound for all estimators, we use the variational principle to find the optimal $b(x)$ that minimizes the bound in equation (13), following the treatment in [36]. Suppose the support of the prior distribution $p(x)$ is $({a}_{1},{a}_{2})$, i.e., $p(x)=0$ for any x outside $({a}_{1},{a}_{2})$. Denote $G(b,x)=p(x)\{{[f^{\prime} (x)+b^{\prime} (x)]}^{2}/J({\rho }_{x})+{b}^{2}(x)\}$; by the calculus of variations, the optimal $b(x)$ that minimizes ${\int }_{{a}_{1}}^{{a}_{2}}G(b,x){\rm{d}}{x}$ must satisfy the Euler–Lagrange equation

$$\frac{\partial G}{\partial b}-\frac{{\rm{d}}}{{\rm{d}}{x}}\frac{\partial G}{\partial b^{\prime} }=0,\qquad(14)$$

with the Neumann boundary condition ${\left.\tfrac{\partial G}{\partial b^{\prime} }\right|}_{x={a}_{1}}={\left.\tfrac{\partial G}{\partial b^{\prime} }\right|}_{x={a}_{2}}=0$. Substituting the expression of $G(b,x)$ into the equation, one can obtain

$$2p(x)b(x)-\frac{{\rm{d}}}{{\rm{d}}{x}}\left[\frac{2p(x)[f^{\prime} (x)+b^{\prime} (x)]}{J({\rho }_{x})}\right]=0,\qquad(15)$$

which gives the following differential equation for the optimal $b(x)$

$$p(x)b(x)=\frac{{\rm{d}}}{{\rm{d}}{x}}\left[\frac{p(x)}{J({\rho }_{x})}\right][f^{\prime} (x)+b^{\prime} (x)]+\frac{p(x)}{J({\rho }_{x})}[f^{\prime\prime} (x)+b^{\prime\prime} (x)],\qquad(16)$$

which can be reorganized and written compactly as

$$b(x)=\frac{1}{p(x)}\frac{{\rm{d}}}{{\rm{d}}{x}}\left[\frac{p(x)[f^{\prime} (x)+b^{\prime} (x)]}{J({\rho }_{x})}\right],\qquad(17)$$

with boundary conditions $b^{\prime} ({a}_{1})=-f^{\prime} ({a}_{1})$ and $b^{\prime} ({a}_{2})=-f^{\prime} ({a}_{2})$. Note that the obtained $b(x)$ may not correspond to the actual bias of any estimator; it is merely a tool for deriving the lower bound [35]. The optimal bias $b(x)$ can be obtained by solving this equation, either numerically or analytically. Substituting it back into equation (13) then yields a valid lower bound for all estimators.

If the prior distribution $p(x)$ is uniform and the quantum Fisher information $J({\rho }_{x})$ is independent of x, the equation simplifies to

$$b^{\prime\prime} (x)-{Jb}(x)=-f^{\prime\prime} (x),\qquad(18)$$

which can be solved analytically. For example, consider a uniform prior distribution on $(0,a)$, where we wish to estimate the parameter itself, i.e., $f(x)=x$. In this case the optimal bias has the analytical solution

$$b(x)=-\frac{\sinh \left[\sqrt{J}\left(x-\tfrac{a}{2}\right)\right]}{\sqrt{J}\cosh \left(\tfrac{\sqrt{J}a}{2}\right)}.\qquad(19)$$

Substituting this back into the right-hand side of inequality (13), we obtain a valid lower bound for all estimators

$$\mathrm{MSE}(\hat{x})\ \ge\ \frac{1}{J}\left[1-\frac{\tanh (\sqrt{J}a/2)}{\sqrt{J}a/2}\right].\qquad(20)$$

Compared with the quantum Cramér–Rao bound, this bound contains an extra negative term and is therefore always lower.
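For a quick numerical comparison, the sketch below evaluates the uniform-prior bound against the quantum Cramér–Rao bound $1/J$. The closed form used in the code, $(1/J)[1-\tanh (\sqrt{J}a/2)/(\sqrt{J}a/2)]$, is our reading of the uniform-prior solution, and the values of $J$ and $a$ are arbitrary illustrative choices.

```python
import math

def qcrb(J):
    # Quantum Cramér–Rao bound for quantum Fisher information J
    return 1.0 / J

def obb_uniform(J, a):
    # Sketch of the optimal biased bound for a uniform prior on (0, a) and
    # x-independent quantum Fisher information J, assuming the closed form
    # (1/J) * [1 - tanh(sqrt(J)*a/2) / (sqrt(J)*a/2)]
    s = math.sqrt(J) * a / 2.0
    return (1.0 - math.tanh(s) / s) / J

J, a = 100.0, math.pi / 10   # illustrative values
print(obb_uniform(J, a), qcrb(J))
```

The OBB is always below the QCRB and approaches it as $J{a}^{2}$ grows, reflecting the transition into the asymptotic regime.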

3. Examples

In this section we give four examples of the valid lower bound. In the first three, the quantum Fisher information (QFI) is independent of the parameter under estimation; taking the prior distribution as uniform, the bound on the MSE can then be obtained directly from equation (20). In some cases, however, the QFI depends on the estimated parameter; the fourth example is such a case, and there the optimal bias has to be obtained from equation (17).

Example 1. As the first example, we consider $N$ spins in the NOON state, $(| 00\cdots 0\rangle +| 11\cdots 1\rangle )/\sqrt{2},$ which evolves under the dynamics $U(x)={({{\rm{e}}}^{-{\rm{i}}{\sigma }_{3}{xt}/2})}^{\otimes N}$ (the same unitary ${{\rm{e}}}^{-{\rm{i}}{\sigma }_{3}{xt}/2}$ acts on each of the $N$ spins), with ${\sigma }_{1}=| 0\rangle \langle 1| +| 1\rangle \langle 0| $, ${\sigma }_{2}=-{\rm{i}}| 0\rangle \langle 1| +{\rm{i}}| 1\rangle \langle 0| $ and ${\sigma }_{3}=| 0\rangle \langle 0| -| 1\rangle \langle 1| $ the Pauli matrices. After $t$ units of time it evolves to

$$| {\psi }_{x}\rangle =\frac{1}{\sqrt{2}}\left(| 00\cdots 0\rangle +{{\rm{e}}}^{{\rm{i}}{Nxt}}| 11\cdots 1\rangle \right),\qquad(21)$$

up to a global phase. We take the evolution time as the unit, i.e., $t=1$. This NOON state has quantum Fisher information $J={N}^{2}$ [14]. For $n$ repeated measurements, the quantum Fisher information is ${{nN}}^{2}$. If the prior distribution $p(x)$ is uniform on $(0,a)$, then from equation (20) we have

$$\mathrm{MSE}(\hat{x})\ \ge\ \frac{1}{{{nN}}^{2}}\left[1-\frac{\tanh (\sqrt{n}{Na}/2)}{\sqrt{n}{Na}/2}\right].\qquad(22)$$

We compare these bounds with an actual estimation procedure using the MMSE estimator. Consider measurements in the basis $| {\psi }_{0}\rangle =(| 00\cdots 0\rangle +| 11\cdots 1\rangle )/\sqrt{2}$ and $| {\psi }_{1}\rangle =(| 00\cdots 0\rangle -| 11\cdots 1\rangle )/\sqrt{2},$ which yield outcomes 0 and 1 with probabilities ${p}_{0}=| \langle {\psi }_{0}| {\psi }_{x}\rangle {| }^{2}={\cos }^{2}({Nx}/2)$ and ${p}_{1}=1-{p}_{0}={\sin }^{2}({Nx}/2)$. Assuming the measurement is repeated n times, the probability of obtaining k outcomes equal to 1 is

$$p(k| x)=\left(\genfrac{}{}{0em}{}{n}{k}\right){\sin }^{2k}({Nx}/2)\,{\cos }^{2(n-k)}({Nx}/2),\qquad(23)$$

where $\left(\genfrac{}{}{0em}{}{n}{k}\right)$ is the binomial coefficient. From this we can obtain the MMSE estimator as explained in the introduction.

To compare the QCRB and the OBB with the mean square error of this procedure, we plot the three quantities as functions of the number of measurements n in figure 1. The solid red line, dashed blue line and black dots represent the mean square error of the MMSE estimator, the QCRB and the OBB, respectively. While the QCRB fails to calibrate the performance of the MMSE estimator, the optimal biased bound provides a valid lower bound, and from the closeness between the MMSE estimator and the optimal biased bound one can infer that in this case the MMSE estimator is almost optimal. The bias of the MMSE estimator is plotted in figure 2. When n is small, the MMSE estimator is indeed biased, which is why the QCRB fails to calibrate its performance; as n grows, the estimator becomes nearly unbiased, indicating a transition into the asymptotic regime where the QCRB starts to be valid.


Figure 1. Mean square error for the minimum mean square error estimator (MMSE, solid red line, equation (1)), optimal biased bound (OBB, black dots, equation (22)) and quantum Cramér–Rao bound (QCRB, dashed blue line) with different number of repeated measurements n. Here we consider a NOON state of N = 10 particles. The prior distribution $p(x)$ is taken as the uniform distribution on $(0,\pi /10)$.


Figure 2. Bias of the posterior mean in the minimum mean square error estimator for different numbers of measurements. n = 1: dotted green line; n = 2: dashed black line; n = 3: dash-dotted red line; n = 15: solid blue line; n = 20: yellow triangles. Here we consider a NOON state of N = 10 particles. The prior distribution $p(x)$ is taken as the uniform distribution on $(0,\pi /10)$.


Example 2. We consider a qubit undergoing evolution under dephasing noise. The master equation for the density matrix ρ of the qubit is

$${\partial }_{t}\rho =-\frac{{\rm{i}}x}{2}[{\sigma }_{3},\rho ]+\frac{\gamma }{2}({\sigma }_{3}\rho {\sigma }_{3}-\rho ),\qquad(24)$$

where γ is the decay rate and $x$ is the parameter under estimation. Taking the initial state as $| {\psi }_{0}\rangle =(| 0\rangle +| 1\rangle )/\sqrt{2}$, after time $t$, which we normalize to 1, the evolved state reads

$${\rho }_{x}=\frac{1}{2}\left(\begin{array}{cc}1 & \eta \,{{\rm{e}}}^{-{\rm{i}}x}\\ \eta \,{{\rm{e}}}^{{\rm{i}}x} & 1\end{array}\right),\qquad(25)$$

where $\eta =\exp (-\gamma )$. The quantum Fisher information in this case is given by $J={\eta }^{2}$. The quantum Cramér–Rao bound for $n$ repeated measurements then gives

$$\mathrm{MSE}(\hat{x})\ \ge\ \frac{1}{n{\eta }^{2}}.\qquad(26)$$

For the optimal biased bound we again take the prior distribution $p(x)$ as uniform, this time on $(0,\pi )$. From equation (20), the optimal biased bound is

$$\mathrm{MSE}(\hat{x})\ \ge\ \frac{1}{n{\eta }^{2}}\left[1-\frac{\tanh (\sqrt{n}\eta \pi /2)}{\sqrt{n}\eta \pi /2}\right].\qquad(27)$$

We also use this bound to gauge the performance of a measurement scheme, which measures in the basis of $| {\psi }_{0}\rangle =(| 0\rangle +| 1\rangle )/\sqrt{2}$ and $| {\psi }_{1}\rangle =(| 0\rangle -| 1\rangle )/\sqrt{2}$. The distributions of the measurement results are given by

$$p(0| x)=\langle {\psi }_{0}| {\rho }_{x}| {\psi }_{0}\rangle =\frac{1+\eta \cos x}{2},\qquad(28)$$

$$p(1| x)=\langle {\psi }_{1}| {\rho }_{x}| {\psi }_{1}\rangle =\frac{1-\eta \cos x}{2}.\qquad(29)$$

The probability of obtaining k outcomes equal to 1 among n repeated measurements is $p(k| x)=\left(\genfrac{}{}{0em}{}{n}{k}\right){p}^{k}(1| x){p}^{n-k}(0| x).$ Again using the minimum mean square error estimator, given by the posterior mean $\hat{x}(k)=\int p(x| k){x}{\rm{d}}{x}$, we can compute the mean square error via equation (1). In figure 3 we plot the mean square error of the MMSE estimator, the optimal biased bound and the quantum Cramér–Rao bound at different strengths of dephasing noise. While the quantum Cramér–Rao bound fails to provide a valid lower bound, the optimal biased bound is fairly tight over the whole range of dephasing noise, indicating that the MMSE estimator is close to optimal even in the presence of dephasing.
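As a consistency check on this measurement scheme, one can compare the classical Fisher information of the ± basis measurement with the quantum Fisher information $J={\eta }^{2}$. The sketch below assumes the outcome probabilities $p(0| x)=(1+\eta \cos x)/2$ used above and an illustrative value of η.

```python
import numpy as np

# Classical Fisher information of the ± basis measurement for the dephased
# qubit, assuming p(0|x) = (1 + eta*cos x)/2, compared with the quantum
# Fisher information J = eta^2.
def classical_fisher(eta, x):
    p0 = (1.0 + eta * np.cos(x)) / 2.0
    dp0 = -eta * np.sin(x) / 2.0        # d p(0|x) / dx
    return dp0**2 / p0 + dp0**2 / (1.0 - p0)

eta = 0.8                               # illustrative dephasing parameter
xs = np.linspace(0.01, np.pi - 0.01, 1000)
fi = classical_fisher(eta, xs)
print(fi.max(), eta**2)                 # the measurement attains J near x = pi/2
```

Away from $x=\pi /2$ the measurement is suboptimal, which is one reason a Bayesian analysis over the whole prior interval, rather than a pointwise Fisher-information argument, is needed in the finite regime.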


Figure 3. Mean square error for the minimum mean square error estimator (MMSE, solid red line, equation (1)), optimal biased bound (OBB, dash-dotted black line, equation (27)) and quantum Cramér–Rao bound (QCRB, dashed blue line) for a qubit at different values of the dephasing parameter η, with the number of measurements n = 5. The prior distribution is taken as the uniform distribution on $(0,\pi )$.


Example 3. In this example we consider an SU(2) interferometer described by the unitary transformation $\exp (-{{\rm{i}}{x}{S}}_{2})$. Here ${S}_{2}$ is the Schwinger operator ${S}_{2}=\tfrac{1}{2{\rm{i}}}({a}^{\dagger }b-{b}^{\dagger }a)$, with $a({a}^{\dagger })$, $b({b}^{\dagger })$ the annihilation (creation) operators of ports A and B, and $x$ is the parameter under estimation. We take the input state as a coherent state $| \beta \rangle $ for port A and a cat state ${{ \mathcal N }}_{\alpha }(| \alpha \rangle +| -\alpha \rangle )$ for port B, where ${{ \mathcal N }}_{\alpha }^{2}=1/(2+2{{\rm{e}}}^{-2| \alpha {| }^{2}})$ is the normalization constant. Taking into account the phase-matching condition, the quantum Fisher information for $x$ in this case takes the form [37]

Equation (30)

where ${n}_{{\rm{A}}}=| \beta {| }^{2}$ and ${n}_{{\rm{B}}}=| \alpha {| }^{2}\tanh | \alpha {| }^{2}$ are the photon numbers in ports A and B. From the above expression, the quantum Fisher information $J$ is independent of $x$, so for the optimal biased estimation the mean square error $\mathrm{MSE}(\hat{x})$ satisfies equation (20). For a fixed but large total photon number, the maximum Fisher information with respect to ${n}_{{\rm{A}}}$ and ${n}_{{\rm{B}}}$ is achieved when the photon numbers of the two ports are equal, giving ${J}_{{\rm{m}}}={N}^{2}+N$ [37], with $N$ the total photon number in the interferometer. Using the optimal biased bound and taking the prior distribution as uniform on $(0,a)$, for $n$ repeated measurements $\mathrm{MSE}(\hat{x})$ satisfies

$$\mathrm{MSE}(\hat{x})\ \ge\ \frac{1}{{nJ}}\left[1-\frac{\tanh (\sqrt{{nJ}}\,a/2)}{\sqrt{{nJ}}\,a/2}\right].\qquad(31)$$

Figure 4 shows the quantum Cramér–Rao bound (dashed blue line), the optimal biased bound (dash-dotted black line) and the mean square error of the MMSE estimator (solid red line). The prior distribution is taken as uniform on $(0,\pi /5)$, and ${n}_{{\rm{A}}}={n}_{{\rm{B}}}=1$. For the MMSE estimator, we measure along the state $| 11\rangle $. The optimal biased bound provides a valid lower bound over the whole range of n; however, the gap between the mean square error of the MMSE estimator and the bound indicates that the measurement along $| 11\rangle $ may not be optimal.


Figure 4. Optimal biased bound (OBB, dash-dotted black line, equation (31)), quantum Cramér–Rao bound (QCRB, dashed blue line) and the mean square error of the MMSE estimator (MMSE, solid red line) for phase estimation in the interferometer. Here we consider an SU(2) interferometer with ${n}_{{\rm{A}}}={n}_{{\rm{B}}}=1$. The prior distribution is uniform on $(0,\pi /5)$.


Example 4. The quantum Fisher information in the above examples is independent of the estimated parameter $x$. We now give an example in which it depends on $x$.

Consider a qubit system with the Hamiltonian

Equation (32)

which describes the dynamics of a qubit in a magnetic field in the XZ plane; the parameter of interest x denotes the direction of the field. The quantum Fisher information of this system has recently been studied with various methods [38–40]. For the pure initial state $(| 0\rangle +| 1\rangle )/\sqrt{2}$, the quantum Fisher information is given by (with the evolution time normalized to t = 1)

Equation (33)

which depends on x. In this case we have to solve equation (17). As in the previous examples, we take the prior distribution $p(x)$ as uniform, here on $(0,\pi /2)$. Taking $B=\pi /2$, with n repeated measurements $J=n(2-{\sin }^{2}x)$, equation (17) reduces to

$$b(x)=\frac{{\rm{d}}}{{\rm{d}}{x}}\left[\frac{1+b^{\prime} (x)}{n(2-{\sin }^{2}x)}\right].\qquad(34)$$

This equation can be solved numerically, and by substituting the obtained $b(x)$ into equation (13) the optimal biased bound is obtained; it is plotted in figure 5.
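For readers who want to reproduce this step, the boundary-value problem can be solved with scipy's `solve_bvp`. The sketch below assumes the reduced equation $b={\rm{d}}/{\rm{d}}{x}\,[(1+b^{\prime})/J(x)]$ with $J(x)=n(2-{\sin }^{2}x)$, a uniform prior on $(0,\pi /2)$, Neumann conditions $b^{\prime}(0)=b^{\prime}(\pi /2)=-1$, and the illustrative value n = 5.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Solve the optimal-bias equation b = d/dx[(1 + b') / J(x)],
# J(x) = n(2 - sin^2 x), with Neumann conditions b'(0) = b'(pi/2) = -1.
n = 5                                    # illustrative number of measurements

def J(x):
    return n * (2.0 - np.sin(x) ** 2)

def dJ(x):
    return -n * np.sin(2.0 * x)          # J'(x)

def rhs(x, y):
    # y[0] = b, y[1] = b'; expanding the compact equation gives
    # b'' = J b + (J'/J)(1 + b')
    return np.vstack([y[1], J(x) * y[0] + dJ(x) / J(x) * (1.0 + y[1])])

def bc(ya, yb):
    return np.array([ya[1] + 1.0, yb[1] + 1.0])   # Neumann conditions b' = -1

x = np.linspace(0.0, np.pi / 2, 201)
sol = solve_bvp(rhs, bc, x, np.zeros((2, x.size)))
b, bp = sol.sol(x)

# Lower bound of equation (13) with f(x) = x and uniform prior p = 2/pi
dx = x[1] - x[0]
bound = ((2.0 / np.pi) * ((1.0 + bp) ** 2 / J(x) + b**2)).sum() * dx
print(sol.status, bound)
```

Since the functional being minimized is convex in $(b,b^{\prime})$, the stationary solution of the Euler–Lagrange equation found here is indeed the minimizer.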


Figure 5. Mean square error for minimum mean square error estimator (MMSE, solid red line, equation (1)), the optimal biased bound (OBB, dash-dotted black line) and quantum Cramér–Rao bound (QCRB, dashed blue line) as a function of measurement number n. Here we consider a qubit under a magnetic field in the XZ plane. The prior distribution is taken as uniform in $(0,\pi /2)$.


Again we use this bound to gauge the performance of a measurement scheme that measures along $| {\psi }_{0}\rangle =(| 0\rangle +| 1\rangle )/\sqrt{2}$ and $| {\psi }_{1}\rangle =(| 0\rangle -| 1\rangle )/\sqrt{2}$. The probability distribution of the measurement results is given by

Equation (35)

and $p(0| x)=1-p(1| x)$. When B equals $\pi /2$, the above probability reduces to $p(1| x)=({\sin }^{2}x)/2$. The probability of obtaining k outcomes equal to 1 among n repeated measurements is $p(k| x)=\left(\genfrac{}{}{0em}{}{n}{k}\right){p}^{k}(1| x){p}^{n-k}(0| x).$ Using the posterior mean as the estimator, we obtain the mean square error of the MMSE estimator, which is also plotted in figure 5. From this figure one can again see that while the quantum Cramér–Rao bound (dashed blue line) fails to gauge the performance of the MMSE estimator (solid red line), the optimal biased bound (dash-dotted black line) provides a valid lower bound, and from the closeness between the mean square error of the MMSE estimator and the optimal biased bound one can tell that the MMSE estimator performs well here.

4. Summary

The optimal biased bound provides a valid lower bound for all estimators, either biased or unbiased, and can thus be used to calibrate the performance of any estimator in quantum parameter estimation. Asymptotically the widely used quantum Cramér–Rao bound provides a lower bound for quantum parameter estimation, but in practice the number of measurements is often constrained by resources, and it is hard to tell when the quantum Cramér–Rao bound applies. The difference between the optimal biased bound and the quantum Cramér–Rao bound also provides a way to estimate the number of measurements needed to enter the asymptotic regime.

Acknowledgements

The work was supported by CUHK Direct Grant 4055042.
