Statistically efficient tomography of low rank states with incomplete measurements

The construction of physically relevant low dimensional state models, and the design of appropriate measurements, are key issues in tackling quantum state tomography for large dimensional systems. We consider the statistical problem of estimating low rank states in the set-up of multiple ions tomography, and investigate how the estimation error behaves under a reduction in the number of measurement settings, compared with the standard ion tomography setup. We present extensive simulation results showing that the error is robust with respect to the choice of states of a given rank and the random selection of settings, and that the number of settings can be significantly reduced with only a negligible increase in error. We present an argument to explain these findings based on a concentration inequality for the Fisher information matrix. In the more general setup of random basis measurements we use this argument to show that for certain rank r states it suffices to measure in O(r log d) bases to achieve the average Fisher information over all bases. We present numerical evidence for states of up to 8 atoms, supporting a conjecture on a lower bound for the Fisher information which, if true, would imply a similar behaviour in the case of Pauli bases. The relation to similar problems in compressed sensing is also discussed.


Introduction
Recent years have witnessed great experimental progress in the study and control of individual quantum systems [1,2]. A common feature of many experiments is the use of quantum state tomography (QST) methods as a key tool for validating the results [3,4]. The aim of QST is to statistically reconstruct an unknown state from the outcomes of repeated measurements performed on identical copies of the state. Among the proposed estimation methods we mention, e.g., variations of maximum likelihood [5,6,7,8,9], linear inversion [10], Bayesian inference [11,12], estimation with incomplete measurements [13,14,15], and continuous variables tomography [16]. However, for composite systems such as trapped ions, full state tomography becomes challenging due to the exponential increase in dimension [17]. Therefore, there has been significant interest in developing tomography methods that are efficient for certain lower dimensional families of physically relevant states. For instance, the estimation of low rank states has been considered in the context of compressed sensing (CS) [18,19,20,21,22], model selection [23], and spectral thresholding [24,25]. The estimation of the permutationally invariant part of the density matrix as an approximation to the true state is also relevant in certain physical models [26,27,28]. Similarly, the estimation of matrix product states [29] is particularly relevant for many-body systems, but also for estimating dynamical parameters of open systems [30,31]. In this paper we build on the fruitful CS idea that the sparsity of low rank states can be exploited in order to identify and estimate the state with a reduced number of 'measurements', in contrast to standard, informationally complete QST. Recall that a rank-r joint state of n qubits can be characterised by O(rd) parameters, where d = 2^n is the dimension of the associated Hilbert space.
In the original CS proposal it is shown that such a state can be recovered from the expectation values of O(rd log d) randomly chosen Pauli observables. More recent work concentrates on error bounds [20,18] and confidence intervals [21] of CS estimators. However, from a statistical and experimental viewpoint the estimation based on Pauli expectations does not make the most efficient use of the measurement data available in ion trap experiments. Indeed, the Pauli expectations can be seen as 'coarse grained' statistics of the 'raw data', which consists of counts for individual outcomes of a measurement in an orthonormal basis. This coarse graining leads to loss of information and a significant increase in estimation error, as shown in section 5. In contrast, here we consider the statistical problem of estimating low rank states in the set-up of multiple ions tomography (MIT), where the input is the counts dataset provided by the experiment. The goal is to investigate the possibility of using a reduced number of measurement settings (Pauli bases), without a significant loss of statistical accuracy, in comparison to standard, full settings MIT. For this, we consider the behaviour of the mean square error (MSE) with respect to the Frobenius distance between the true state and the estimator, E‖ρ̂ − ρ‖₂², in the limit of a large number of measurement samples. According to asymptotic theory [32], in this regime the MSE of an efficient estimator ρ̂ (e.g. maximum likelihood) takes the following expression

E‖ρ̂ − ρ‖₂² ≈ (1/N) Tr(I(ρ|S)^{-1} G).    (1)

Above, I(ρ|S) is the classical Fisher information associated with the chosen measurement design S and a local parametrisation of rank-r states, and G is the positive weight matrix associated with the quadratic approximation of the Frobenius distance in the local parameters.
In the following section we review the MIT set-up, and formulate the 'reduced settings hypothesis' in statistical terms. After this, we present the results of extensive numerical simulations testing this hypothesis, which are summarised in Figure 1. We find that the asymptotic MSE given by (1) is very robust with respect to a reduction in the number of settings, with a choice of random settings making up the measurement design S. For instance, 4-ion states of rank 3 can be estimated by using 20 settings (out of a total of 81) with a negligible increase in estimation error. Also, to test the validity of the asymptotic theory for low rank states, we compared the theoretical prediction (1) with the actual MSE of the maximum likelihood estimator and found a very good agreement for m = 100 repetitions per setting, a typical value used in experiments [3]. To explain the observed robustness, we outline an argument based on a concentration inequality [33] for the Fisher information matrix of an experiment with randomly chosen Pauli settings. Transforming the argument into a mathematical proof requires control over certain spectral properties of the Fisher information matrix, and remains an open problem. However, by relaxing the Pauli measurement setup and allowing for measurements with respect to random bases, we can prove that states of rank r can be estimated by using O(r log d) settings, with only a small increase in the MSE relative to the setup in which a large number of settings is probed, cf. Theorem 4.1. For Pauli measurements we present numerical evidence on the lowest eigenvalue of the Fisher information matrix, supporting the conjecture that the MSE of low rank states concentrates for a small number of measurement settings.
From a CS viewpoint, our question is closely related to the work [34,35] inspired by the PhaseLift problem [36,37], which considers the case where the incomplete 'measurements' are expectations of rank-one projections sampled randomly from a Gaussian distribution, or from a projective t-design. In [38] the analysis is extended to the physically relevant case of random orthonormal basis measurements, and it is shown that rank-r states become identifiable with a large probability for only O(r log³ d) 'sufficiently random' measurements. These results are in broad agreement with our findings, calling for a better understanding of the connections between the CS estimators and the statistical approaches considered here.

Multiple Ions Tomography with Incomplete Measurements
In this paper we consider the multiple ions tomography (MIT) setup as in the ion-trap experiments [3]. In MIT the goal is to statistically reconstruct the joint state of n ions (modelled as two-level systems) from counts data generated by performing a large number of measurements on identically prepared systems. The unknown state ρ is a d × d density matrix (complex, positive, trace-one matrix), where d = 2^n is the dimension of the Hilbert space of n ions. The experimenter can measure an arbitrary Pauli observable σ_x, σ_y or σ_z of each ion, simultaneously on all n ions. Thus, each measurement setting is labelled by a sequence s = (s₁, …, sₙ) ∈ {x, y, z}^n out of 3^n possible choices. The measurement produces an outcome o = (o₁, …, oₙ) ∈ Oⁿ := {+1, −1}ⁿ, whose probability is

p(o|s) = Tr(ρ P^s_o),    (2)

where P^s_o is the one dimensional projection

P^s_o = |λ^{s₁}_{o₁}⟩⟨λ^{s₁}_{o₁}| ⊗ … ⊗ |λ^{sₙ}_{oₙ}⟩⟨λ^{sₙ}_{oₙ}|,

and |λ^s_±⟩ is the eigenvector of the Pauli matrix σ_s with corresponding eigenvalue ±1. The measurement procedure and statistical model can be summarised as follows. For each setting s the experimenter performs m repeated measurements and collects the counts of the different outcomes N(o|s), so that the total number of quantum samples used is N := m × 3^n. The resulting dataset is a 2^n × 3^n table whose columns are independent and contain all the counts in a given setting. The overall measurement is informationally complete, and the state can be estimated by using a number of methods proposed in the literature [8,25,24]. Now, there are several reasons to consider a set-up in which a smaller number of measurement settings is used for estimating the state: switching measurement settings may be more costly than repeating a measurement in the same setting, and smaller datasets may be easier to handle computationally. However, by removing even a single setting, the state becomes unidentifiable.
This is because the corresponding tensor of Pauli operators is linearly independent from all the one dimensional projections of the remaining settings, and therefore its expectation value cannot be estimated. This can be remedied if some prior information about the state is available. The relevant example here is from compressed sensing [18,19,20,21,22], which shows that low rank states are uniquely determined by the Pauli expectations associated with a reduced number of settings. However, the existing compressed sensing literature does not address the statistical problem of estimating the state directly from the raw measurement data (i.e. the counts N(o|s)), as it typically employs coarse grained statistics such as Pauli expectations. Our goal is to investigate the statistical efficiency of estimating low rank states with reduced measurement settings. We will consider an asymptotic scenario in which the number m of measurement repetitions per setting is large and the mean square error can be characterised in terms of the classical Fisher information, as discussed above. As we show below, this regime is already attained for m = 100, which is of the order of the repetition cycles used in experiments [3].
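As a concrete illustration of the measurement model, the outcome probabilities p(o|s) = Tr(ρ P^s_o) of equation (2) can be evaluated numerically by rotating ρ into the product eigenbasis of the chosen setting. The following sketch uses NumPy; the function name and data layout are our own illustrative choices, not code from the experiments:

```python
import numpy as np

# Eigenvectors of the Pauli matrices, as columns ordered (+1, -1).
EIGVECS = {
    "x": np.array([[1, 1], [1, -1]]) / np.sqrt(2),
    "y": np.array([[1, 1], [1j, -1j]]) / np.sqrt(2),
    "z": np.eye(2, dtype=complex),
}

def outcome_probabilities(rho, setting):
    """Return p(o|s) = Tr(rho P^s_o) for all 2^n outcomes o of setting s,
    ordered row-major with o = (+1, ..., +1) first."""
    U = np.array([[1.0 + 0j]])
    for s in setting:            # U = U_{s_1} x ... x U_{s_n}
        U = np.kron(U, EIGVECS[s])
    # Measuring rho in the rotated product basis: the probabilities are
    # the diagonal of U^dagger rho U.
    return np.real(np.diag(U.conj().T @ rho @ U))
```

For the maximally mixed state every outcome of every setting has probability 2^{-n}, which gives a quick sanity check.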
As stated above, we assume that the prepared state ρ belongs to the space S_r ⊂ M(C^d) of rank-r states, for a fixed rank r < d. Since the asymptotic mean square error depends only on the local properties of the statistical model, it suffices to consider a parametrisation θ → ρ_θ of a neighbourhood of ρ in S_r, which can be chosen as follows. In its own eigenbasis, ρ is the diagonal matrix of eigenvalues Diag(λ₁, …, λ_r, 0, …, 0), and any sufficiently close state is uniquely determined by its matrix elements in the first r rows (or columns). Intuitively this can be understood by noting that any rank-r state in the neighbourhood of ρ can be obtained by perturbing the eigenvalues and performing a small rotation of the eigenbasis; to first order of approximation, these transformations leave the lower-right (d − r) × (d − r) corner of the matrix equal to zero. We therefore choose the (local) parametrisation ρ = ρ_θ whose parameters are the matrix elements of the first r rows,

(ρ_θ)_{i,i} = θ_i^{(d)} for 2 ≤ i ≤ r,    (ρ_θ)_{i,j} = θ_{i,j}^{(re)} + iθ_{i,j}^{(im)} for i < j, i ≤ r,    (3)

where, in order to enforce the trace-one normalisation, we constrain the first diagonal matrix element to be (ρ_θ)_{1,1} = 1 − Σ_{i=2}^r θ_i^{(d)}. In these parameters the quadratic approximation of the Frobenius distance is governed by the weight matrix G_{a,b} = Tr(∂ρ_θ/∂θ_a · ∂ρ_θ/∂θ_b), a constant matrix whose expression can be found in the appendix, below equation (A.2). After fixing the parametrisation, we now define the statistical model of multiple ions tomography with incomplete settings. Let S ⊂ {x, y, z}^n be a set of k randomly chosen settings, and consider the modified scenario in which ions prepared in the unknown state ρ are repeatedly measured m = N/k times for each setting in S, so that the overall number of quantum samples is always N. The classical Fisher information associated with a single chosen setting s is defined as

I(ρ|s)_{a,b} = Σ_{o∈Oⁿ} (1/p(o|s)) (∂p(o|s)/∂θ_a)(∂p(o|s)/∂θ_b).    (4)

For a set S of k settings, the Fisher information matrix associated with a single measurement sample from each setting s ∈ S is given by the sum of the individual Fisher matrices I(ρ|s), and for later purposes we will denote the average by I(ρ|S) = (1/k) Σ_{s∈S} I(ρ|s). The individual matrices can be computed by using definition (4) together with equation (2) and the parametrisation (3).
Since the outcomes from m repeated measurements in a setting s are i.i.d., when the number of repetitions m is sufficiently large, efficient estimators θ̂ of θ (and hence of ρ) built from these outcomes have an asymptotically Gaussian distribution [32],

√N (θ̂ − θ) → N(0, I(ρ|S)^{-1}),

where the covariance matrix I(ρ|S)^{-1} is the inverse Fisher information associated with a single measurement sample of the set S. From this behaviour and the local expansion of the Frobenius distance, we see that for (reasonably) large m, the mean square error of an efficient estimator (e.g. maximum likelihood) scales as

E‖ρ̂ − ρ‖₂² ≈ (1/N) Tr(I(ρ|S)^{-1} G).    (5)

The trace expression is a measure of the sensitivity of the chosen set of settings S at ρ. Since the settings are chosen randomly, we need to study the fluctuations of Tr(I(ρ|S)^{-1} G). In the next section we present extensive simulation results which essentially show that one can significantly reduce the number of settings without affecting the MSE.
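For small numbers of ions, the figure of merit Tr(I(ρ|S)^{-1} G)/N of equation (5) can be computed directly from definitions (2)-(4). The sketch below (our own helper names, and a straightforward finite-dimensional implementation rather than the code used for the paper's figures) builds the tangent directions B_a = ∂ρ_θ/∂θ_a of the parametrisation (3), obtains the derivatives ∂p(o|s)/∂θ_a = Tr(B_a P^s_o) by linearity of the Born rule, and assembles I(ρ|S) and G:

```python
import numpy as np

EIGVECS = {"x": np.array([[1, 1], [1, -1]]) / np.sqrt(2),
           "y": np.array([[1, 1], [1j, -1j]]) / np.sqrt(2),
           "z": np.eye(2, dtype=complex)}

def born_diagonal(A, setting):
    """Tr(A P^s_o) for every outcome o; linear in A, so it also yields the
    derivatives dp(o|s)/dtheta_a when A is a tangent direction B_a."""
    U = np.array([[1.0 + 0j]])
    for s in setting:
        U = np.kron(U, EIGVECS[s])
    return np.real(np.diag(U.conj().T @ A @ U))

def tangent_directions(d, r):
    """Hermitian matrices B_a = d rho_theta / d theta_a for the local
    parametrisation of rank-r states by their first r rows; the (1,1)
    entry absorbs the trace-one constraint.  Yields D = 2rd - r^2 - 1."""
    Bs = []
    for i in range(1, r):                        # diagonal parameters
        B = np.zeros((d, d), complex)
        B[i, i], B[0, 0] = 1.0, -1.0
        Bs.append(B)
    for i in range(r):                           # off-diagonal parameters
        for j in range(i + 1, d):
            Br = np.zeros((d, d), complex); Br[i, j] = Br[j, i] = 1.0
            Bi = np.zeros((d, d), complex); Bi[i, j], Bi[j, i] = 1j, -1j
            Bs += [Br, Bi]
    return Bs

def asymptotic_mse(rho, settings, r, N):
    """Tr(I(rho|S)^{-1} G) / N, with I(rho|S) the averaged Fisher matrix."""
    d = rho.shape[0]
    Bs = tangent_directions(d, r)
    G = np.array([[np.trace(A @ B).real for B in Bs] for A in Bs])
    I = np.zeros((len(Bs), len(Bs)))
    for s in settings:
        p = born_diagonal(rho, s)
        dp = np.array([born_diagonal(B, s) for B in Bs])
        I += (dp / np.clip(p, 1e-12, None)) @ dp.T
    I /= len(settings)
    return np.trace(np.linalg.solve(I, G)) / N
```

For a single qubit in a pure state and the full design S = {x, y, z}, this evaluates to Tr(I(ρ|S)^{-1} G)/N = 3/N.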

Numerical Simulations
In Figure 1 we plot the values of the asymptotic MSE Tr(I(ρ|S)^{-1} G)/N for various ranks, choices of 4-ion states, and choices of measurement designs S (sets of settings). For each rank r = 1, …, 5 we generated 10 states by using the Cholesky decomposition ρ = T*T, cf. [39]. For each state, the MSE values were calculated over a range of measurements with reduced settings, starting from the 'full' measurement with 3^n settings, as follows. For a given number k of reduced settings, we generated 10 independent sets S of randomly chosen settings. For each pair (ρ, S) we evaluated the Fisher information I(ρ|S) and the weight matrix G in the parametrisation described above. In these simulations, the total number N of copies of the state is kept constant as a resource; therefore, a smaller number of measurement settings k leads to a larger number of repetitions m = N/k per setting. The simulations show that the asymptotic risk for low rank states increases only gradually, even under a significant reduction in the number of measured settings. For example, for states of rank 3, one can reduce the number of settings from 81 to 20 with a negligible increase in the MSE. Moreover, for a given k, the fluctuations of the MSE over choices of states and settings are rather small, showing the robustness of the procedure.
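The random rank-r states used in such simulations can be generated along the lines of the Cholesky-type factorisation ρ = T*T of [39]; a minimal sketch (here with a rectangular Gaussian factor T rather than a triangular Cholesky factor, which is a simplification on our part):

```python
import numpy as np

def random_rank_r_state(d, r, rng=None):
    """Random rank-r density matrix rho = T* T / Tr(T* T), with T a complex
    Gaussian r x d matrix; rho is positive, trace one and rank r (a.s.)."""
    rng = rng or np.random.default_rng()
    T = rng.standard_normal((r, d)) + 1j * rng.standard_normal((r, d))
    rho = T.conj().T @ T
    return rho / np.trace(rho).real
```

Because T has r rows, T*T has rank at most r, and with probability one exactly r; normalising by the trace then gives a valid density matrix.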
In the previous section we argued that the theoretical value (5) is close to the actual error of an efficient estimator when the number of samples is reasonably large. To verify this we have computed the maximum likelihood (ML) estimator and studied its performance in this reduced settings MIT setup. The ML estimator implemented is a modified form of the iterative RρR method in [8]: for the estimates generated at each iteration of the algorithm, only the r largest eigenvalues are retained. This modification ensures that the ML estimator has knowledge of the rank of the density matrix. The results of the comparison with the Fisher prediction (5) are shown in Figure 2, and show a very good agreement between the two. For a given random set S of k settings, the MSE of the MLE, E‖ρ̂_ML − ρ‖₂², is estimated by averaging the square error over 30 ML estimates. The relative error is then plotted for each choice of S as a single circle. On average, the relative error is of the order of 5%. In conclusion, the simulations indicate that low rank states can be estimated with a significantly smaller number of measurement settings than the total of 3^n currently used in experiments, with a negligible loss of statistical accuracy.
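A single iteration of the rank-truncated scheme described above might look as follows. This is a sketch of the modification, not the implementation used for Figure 2: the operator R = Σ_{s,o} [N(o|s)/(m p(o|s))] P^s_o follows the RρR idea of [8], and the hard truncation to the r largest eigenvalues is the alteration described in the text. Variable names and the data layout are our own:

```python
import numpy as np

def rank_truncated_RrhoR_step(rho, projectors, freqs, r):
    """One R*rho*R iteration with hard rank-r truncation.
    projectors: list of rank-one projectors P^s_o over all measured settings;
    freqs: matching empirical frequencies N(o|s)/m."""
    d = rho.shape[0]
    R = np.zeros((d, d), complex)
    for P, f in zip(projectors, freqs):
        p = max(np.trace(P @ rho).real, 1e-12)    # model probability
        R += (f / p) * P
    out = R @ rho @ R
    # Keep only the r largest eigenvalues, then renormalise the trace.
    w, V = np.linalg.eigh(out)                    # ascending eigenvalues
    w[:-r] = 0.0
    out = (V * w) @ V.conj().T
    return out / np.trace(out).real
```

When the empirical frequencies coincide with the probabilities generated by the current iterate, the step leaves the state invariant, which is the usual fixed-point property of the RρR iteration.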

A Concentration Bound for the MSE
Why is the MSE robust with respect to the reduction of the number of measured settings? In this section we provide an intuitive explanation based on a concentration bound for the asymptotic MSE, i.e. the random function S → Tr(I(ρ|S)^{-1} G). Analysing the observed MSE concentration for MIT with Pauli measurements is difficult due to the special, discrete set of bases which contribute to the average. Much like the problem of proving the RIP property in compressed sensing [40,18], it is easier to analyse a more random set-up, namely one where the measurement bases making up the design S are drawn randomly from the uniform measure over orthonormal bases (ONB). We therefore begin by considering this general setup of random measurements and return to the Pauli measurements later in the section. Physically, this random setup could be implemented by first rotating the state ρ by a random unitary U, after which each atom is measured in the σ_z eigenbasis. We therefore let S = {s₁, …, s_k} be the altered design with randomly, uniformly distributed measurement bases. Since the settings in S are independent, the Fisher information matrices I(ρ|s) are independent, and for k large enough the average information per setting approaches the mean information over all random settings,

Ī := E[I(ρ|s)],    (6)

where the expectation is taken over the uniform measure on bases. Since we are interested in the behaviour of the MSE for the randomly chosen designs S, we look at the relative error

RE(ρ|S) := Tr(I(ρ|S)^{-1} G) / Tr(Ī^{-1} G),    (7)

and would like to determine the number of settings k required for the MSE to be concentrated close to the optimal value of Tr(Ī^{-1} G).
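The uniformly distributed bases used in this section can be sampled via the standard construction of Haar random unitaries, whose columns form a uniformly random ONB; a short sketch:

```python
import numpy as np

def haar_random_basis(d, rng=None):
    """One uniformly (Haar) distributed orthonormal basis of C^d, returned
    as the columns of a unitary U.  Standard recipe: QR-decompose a complex
    Ginibre matrix and fix the phases of the diagonal of R."""
    rng = rng or np.random.default_rng()
    Z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    phases = np.diagonal(R) / np.abs(np.diagonal(R))
    return Q * phases            # multiplies column j by phases[j]
```

The phase correction after the QR step is needed because the raw QR output is not Haar distributed; with it, the columns of the returned matrix are a uniformly random ONB.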
To investigate this MSE concentration for states of rank r in this setup, we focus our attention on states with equal eigenvalues, i.e.

ρ₀ := Diag(1/r, …, 1/r, 0, …, 0)

with respect to its eigenbasis; due to the unitary symmetry of the random settings design, the eigenbasis can be chosen to be the standard basis. The reason for choosing this particular spectrum is that such states represent the 'least sparse' states of rank r. Indeed, rank-r states which have some eigenvalues close to zero can be approximated by states of lower rank, and we expect that they require an even smaller number of measurement settings. The following theorem shows that in order to keep the relative error (7) close to 1 it suffices to take a number of random settings k which scales as O(r log(2rd)) with respect to rank and Hilbert space dimension. Taking into account that one setting provides d probabilities, the total number of expectations is of the order O(rd log(2rd)), which roughly agrees with the number of Pauli expectations required in compressed sensing. We will come back to this comparison in the following section.

Theorem 4.1. Let S = {s₁, …, s_k} be a design of independent, uniformly distributed random bases, and let ε, δ ∈ (0, 1). Then

1 − ε ≤ RE(ρ₀|S) ≤ 1 + ε

holds with probability 1 − δ, provided that the number of measurements performed is k = C(r + 1) log(2D/δ), with C = C(ε) a constant depending only on ε, and D = 2rd − r² − 1 the dimension of the space of rank-r states.
The proof of this theorem is detailed in the appendix, and uses a matrix Chernoff bound [33] to bound the deviation of G^{-1/2} I_S G^{-1/2} from the mean G^{-1/2} Ī G^{-1/2}. This is then recast in terms of a bound on the MSE as in the theorem above. The two bounds show that with probability 1 − δ, the relative error RE(ρ₀|S) is in the interval [1 − ε, 1 + ε], so using the design S induces at most an ε relative increase of the MSE. Similar results can be derived along the lines of the proof for states with arbitrary spectrum. Figure 3 illustrates this concentration in two ways: by plotting the relative error RE(ρ₀|S), and by plotting the eigenvalues of G^{-1/2} I_S G^{-1/2}, for various values of k. The concentration of the spectrum demonstrates the rate at which I_S approximates the mean Fisher information Ī. We see that for pure states all eigenvalues concentrate around 1; this is because G^{-1/2} Ī G^{-1/2} is the identity matrix for pure states. For ranks 2 and 3 this matrix is no longer the identity and has eigenvalues that are either 1 or r/(r + 1). We see in the plots for these ranks that the lower band in the eigenvalue spectrum approaches the minimum eigenvalue r/(r + 1), while the remaining eigenvalues concentrate around 1. The explicit form of the matrix G^{-1/2} Ī G^{-1/2} is detailed in the appendix. The above theorem guarantees that for a 4-ion pure state, the MSE is within 5% of the optimal, with a probability of failure δ = 0.1, provided that we measure k ≈ 7100 settings. However, the bottom-right plot in Figure 3 shows that the MSE concentrates much earlier, well within k = 100 settings. This indicates that studying the concentration of G^{-1/2} I_S G^{-1/2} to bound the MSE provides a highly pessimistic estimate for k.
Note, however, that although the value k ≈ 7100 is much larger than the full set of measurements for a 4-ion state in the MIT setup, the theorem demonstrates a significant reduction in the number of settings needed when we consider larger states of n ≥ 9 ions.

Pauli settings
We now return to the more physical set-up in which the settings are chosen from the set {x, y, z}^n of Pauli measurements. Figure 4 plots the error RE(ρ|S) of the MSEs for the reduced settings, relative to the MSE of the average information over the full 3^n settings, Ī = 3^{-n} Σ_{s∈{x,y,z}^n} I(ρ|s). The numerical simulations show that even for k = 20 settings, the average MSE is only 5% higher than the MSE of the full settings experiment, while when the variance is taken into account, most MSEs are less than 10% higher. We note that in the simulations the different settings making up the measurement design S are chosen without replacement, while an application of the concentration bound in the theorem would use a slightly altered setup in which the different settings are chosen independently and with equal probabilities (drawing with replacement). For a discussion on the relation between the two set-ups we refer to [41]. The key step in establishing a concentration bound as in Theorem 4.1 is to control the ratio between the largest eigenvalue of G^{-1/2} I(ρ₀|s) G^{-1/2} over all measurements and the minimum eigenvalue of G^{-1/2} Ī G^{-1/2}. In the case of the uniformly distributed settings, Ī can be computed explicitly by using analytic expressions for moments of random unitaries [42], which gives λ_min = r/(r + 1) for r > 1 and λ_min = 1 for pure states, while λ_max can be upper bounded by using the inequality between the quantum and classical Fisher informations [43], as λ_max ≤ 2r for r > 1 and λ_max ≤ 4 for r = 1. Together these give a ratio λ_max/λ_min = 2(r + 1), which determines the number of measurement settings k in Theorem 4.1. For the Pauli measurements set-up, the same upper bound holds for the maximum eigenvalue, but at the moment we do not have a similar lower bound for λ_min(G^{-1/2} Ī G^{-1/2}), where Ī is the average Fisher information over Pauli settings.
However, there is strong numerical evidence that the smallest eigenvalue λ_min(G^{-1/2} Ī G^{-1/2}) remains well bounded away from zero. Figure 5 plots the minimum eigenvalues for 100 states of 4 to 8 ions, over three different ranks. We notice that the minimum eigenvalue for each rank is well concentrated away from zero, and for ranks r > 1 it clearly increases with the dimension of the space.

[Figure 5. Boxplots of the minimum eigenvalues of G^{-1/2} Ī G^{-1/2} for the Pauli settings. For a given rank and ion number, we chose randomly 100 different states ρ₀ with r equal eigenvalues.]

While the full dependence of λ_min on r and d is unclear, we conjecture that for any fixed rank r, λ_min is larger than a fixed constant for all states of rank r, of arbitrarily many ions. If this were true, it would imply that a state of fixed rank r can be estimated efficiently with O(log d) settings. For now, as a step towards proving the conjectured concentration as in Theorem 4.1 for reduced settings, we prove a weaker result based on a rough lower bound for λ_min. From Theorem 2 in [25] we have that for the full 3^n settings, the MSE of an optimal estimator ρ̂ is upper bounded as

E‖ρ̂ − ρ‖₂² ≤ C r d log(2d) / N,

with C > 0 an absolute constant. Asymptotically, the MSE is lower bounded by (1/N) Tr(Ī^{-1} G), which implies 1/λ_min ≤ C r d log(2d). This gives us a rough lower bound on the minimum eigenvalue. Plugging this value into the concentration bound of Theorem 4.1, we find that the minimum number of settings k scales as O(r² d log²(2D)), which, despite being far from optimal, is a better scaling than the 3^n of the 'full settings' setup.

Coarse vs Fine Grained Models
As mentioned in the introduction, a similar reduction in the number of 'measurements' has been found in compressed sensing (CS) estimators [18,19,20,21,22], which use O(rd log d) expectations of Pauli operators to recover the unknown state. CS techniques provide computationally efficient estimators whose estimation errors scale optimally with the number of parameters and with the errors in the estimation of the Pauli expectations. However, from the statistical viewpoint the Pauli expectations are not the most efficient starting point for estimation, as they are 'coarse grained' statistics of the 'raw', or 'fine grained', measurement data given by the counts N(o|s). A single measurement in the 'coarse grained' model is defined by a Pauli observable

σ_b = σ_{b₁} ⊗ … ⊗ σ_{bₙ},  b = (b₁, …, bₙ) ∈ {0, x, y, z}ⁿ,

where σ₀ is the identity matrix. To compute its expectation one needs to measure σ_b to obtain a binary outcome in {±1} and average over the results. The outcome probabilities are p(±1|b) = Tr(ρ P^b_±), where P^b_± are the two spectral projections of σ_b, and the Fisher information of this model can be computed in much the same way as that of the Pauli bases measurements. In Figure 6 the asymptotic MSE Tr(I(ρ|B)^{-1} G)/N is plotted for different sets of randomly chosen Pauli observables B := {b₁, …, b_k}. On comparison with Figure 1 we see that the minimum number of measurements that need to be measured in the Pauli bases model is much smaller. Additionally, the risk for a full set of measurements is an order of magnitude larger in the 'coarse grained' model. This increase in the asymptotic risk has also been pointed out in [23]. The discrepancy can be explained by noting that the measurement of σ_b is a coarse graining of a finer ONB measurement of a setting s such that s_i = b_i whenever b_i ≠ 0.
Indeed, using the spectral decomposition of σ_b, we can compute its expectation as

Tr(ρ σ_b) = Σ_{o∈Oⁿ} ( Π_{i : b_i ≠ 0} o_i ) p(o|s).

By replacing the probabilities p(o|s) in the above formula by the empirical frequencies N(o|s)/m we obtain the estimate of the Pauli expectations. However, by constructing this statistic we lose a large amount of the information contained in the frequencies, which explains the increase in the MSE.
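The coarse graining itself is a simple operation on the counts: given the table N(o|s) for a setting s compatible with b, the estimate of ⟨σ_b⟩ is the empirical average of the signs Π_{i: b_i ≠ 0} o_i. A sketch (the dictionary layout of the counts is our own choice):

```python
def pauli_expectation_from_counts(counts, b):
    """Estimate <sigma_b> by coarse graining fine-grained counts.
    counts: dict mapping outcome tuples o (entries +1/-1) to N(o|s);
    b: tuple over {'0','x','y','z'}, '0' marking tensor factors of identity."""
    m = sum(counts.values())
    total = 0
    for o, n in counts.items():
        sign = 1
        for oi, bi in zip(o, b):
            if bi != "0":          # identity factors contribute no sign
                sign *= oi
        total += sign * n
    return total / m
```

Replacing the full table of frequencies by this single number is exactly the loss of information described above.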

Conclusions
In this paper we investigated the statistical performance of reduced settings measurements in ion tomography. We did not focus on a particular estimation method but rather on how the accuracy of efficient estimators (which achieve the asymptotic scaling (1) of the MSE) depends on the state and the measurement design. We found that for low rank states, the experimenter can measure a small proportion of randomly selected settings without a significant increase in the MSE. Furthermore, we presented a possible line of argument for a mathematical proof, based on a concentration inequality for the Fisher information. In the case of measurements with respect to random bases we showed that certain states of rank r can be estimated with O(r log d) settings, with only a small increase in MSE compared with designs with a large number of settings. It remains an open question whether the same scaling of the size of the measurement design holds for the Pauli measurements, but we presented strong numerical evidence that the Fisher information may satisfy the required spectral properties. In future work we plan to apply these ideas and construct estimators in a more realistic setup where the rank is not a priori known.
with the other blocks being zero. We note that both the Fisher information matrix and the weight matrix are of dimension D := 2rd − r² − 1.
Bound on the largest eigenvalue. We use the inequality I(ρ₀|s) ≤ F(ρ₀) between the classical and quantum Fisher informations to bound the largest eigenvalue of G^{-1/2} I(ρ₀|s) G^{-1/2} over all measurements by the largest eigenvalue of G^{-1/2} F(ρ₀) G^{-1/2}. The derivation of the quantum Fisher matrix presented here follows [44]. We calculate the quantum Fisher information in the local parametrisation described above, and evaluate it at the diagonal state ρ₀.
We begin by considering a state ρ_θ locally around some arbitrary rank-r state ρ, and write its spectral decomposition as

ρ_θ = Σ_{i=1}^d p_i |ψ_i⟩⟨ψ_i|,  with p_i = 0 for i > r.    (A.3)

The quantum Fisher information matrix is defined as

F(θ)_{a,b} = Tr( ρ_θ (L^a_θ L^b_θ + L^b_θ L^a_θ)/2 ),    (A.4)

where the symmetric logarithmic derivatives L^a_θ are defined through the equation

∂ρ_θ/∂θ_a = (L^a_θ ρ_θ + ρ_θ L^a_θ)/2.    (A.5)

We determine the elements of this matrix in the ONB formed by the eigenbasis set {|ψ_i⟩}:

(∂ρ_θ/∂θ_a)_{i,j} = [(p_i + p_j)/2] (L^a_θ)_{i,j}.    (A.6)

As pointed out in [44], L^a_θ (and L^b_θ) is in principle supported on the full space, but its entries for i, j > r are arbitrary. However, the Fisher information does not use values for which i, j > r. This can be seen by expanding (A.4) in the following way:

F(θ)_{a,b} = Σ_{i,j} p_i Re[ (L^a_θ)_{i,j} (L^b_θ)_{j,i} ],

in which the terms with i > r vanish since p_i = 0. Since the surviving index satisfies i ≤ r, so that p_i + p_j > 0, (A.6) can be inverted inside the expansion of the Fisher information as

(L^a_θ)_{i,j} = 2 (∂ρ_θ/∂θ_a)_{i,j} / (p_i + p_j).    (A.7)

The quantum Fisher matrix therefore becomes

F(θ)_{a,b} = Σ_{i,j : p_i+p_j>0} [ 4p_i / (p_i + p_j)² ] Re[ (∂ρ_θ/∂θ_a)_{i,j} (∂ρ_θ/∂θ_b)_{j,i} ],    (A.8)

where we used the fact that ∂ρ_θ/∂θ_a is self-adjoint. Since ρ_θ is parameterised by its matrix elements in the eigenbasis {|λ_i⟩} of the state ρ, we can use this to write the partial derivatives out explicitly. Using the notation r_a, c_a for the row and column indices of the parameter θ_a, the quantum Fisher matrix becomes

F^{dd}(θ)_{a,b} = Σ_{i,j} [ 4p_i / (p_i + p_j)² ] Re[ ⟨ψ_i| ( |λ_{r_a}⟩⟨λ_{r_a}| − |λ₁⟩⟨λ₁| ) |ψ_j⟩ ⟨ψ_j| ( |λ_{r_b}⟩⟨λ_{r_b}| − |λ₁⟩⟨λ₁| ) |ψ_i⟩ ]

for the diagonal-diagonal block, with analogous expressions for the remaining blocks, obtained by substituting the corresponding partial derivatives ∂ρ_θ/∂θ_a. We now evaluate these expressions at θ = θ₀, for our special state ρ₀ which is diagonal with nonzero entries equal to 1/r; at this state |ψ_i⟩ = |λ_i⟩. The diagonal-diagonal block of the Fisher matrix has elements

F^{dd}(θ)_{a,b} |_{θ=θ₀} = r (1 + δ_{r_a, r_b}),

while the real-real and imaginary-imaginary blocks are diagonal with elements

F(θ)_{a,b} |_{θ=θ₀} = [ 4 / (p_{r_a} + p_{c_a}) ] δ_{r_a, r_b} δ_{c_a, c_b}.    (A.9)

It is easy to see that the real-diagonal and imaginary-diagonal blocks are all zero. The real-imaginary blocks are zero since we consider only Re[ (∂ρ_θ/∂θ_a)_{i,m} (∂ρ_θ/∂θ_b)_{m,i} ].
Therefore, the elements of the quantum Fisher matrix are:
(i) for the diagonal-diagonal block with r > 1:
(a) F^{dd}_{a,a}|_{θ=θ₀} = 2r when r_a ≤ r,
(b) F^{dd}_{a,b}|_{θ=θ₀} = r when r_a, r_b ≤ r and a ≠ b;
(ii) for the real-real and imaginary-imaginary blocks:
(a) F^{rr/ii}_{a,a}|_{θ=θ₀} = 2r when r_a < c_a ≤ r,
(b) F^{rr/ii}_{a,a}|_{θ=θ₀} = 4r when r_a ≤ r, c_a > r.
On comparing this with the weight matrix G, we notice that both G and F have the same block diagonal structure, with the off-diagonal blocks being zero, so we can write F = F^{dd} ⊕ F^{rr} ⊕ F^{ii} and G = G^{dd} ⊕ G^{rr} ⊕ G^{ii}. We notice that F^{dd} = r · G^{dd}, which, combined with the explicit expression for the mean information Ī, gives the spectrum of G^{-1/2} Ī G^{-1/2}. The minimum eigenvalue of this matrix is r/(r + 1) for r > 1 and 1 for pure states.
Putting it all together. We can now substitute these values into the matrix Chernoff bound. While the value of the minimum eigenvalue differs between r > 1 and r = 1, the final bound remains the same because the upper bounds on the largest eigenvalue also differ between these cases; here we calculate the bound for the case r > 1. Writing P_S = G^{-1/2} I_S G^{-1/2} and P̄ = G^{-1/2} Ī G^{-1/2} for notational simplicity, we have for r > 1

P[ P_S ∉ [ (1 − ε)P̄, (1 + ε)P̄ ] ] ≤ 2D · exp( −(kε²)/(4(r + 1)) · log 2 ) =: δ.

Therefore, with probability 1 − δ we have that (1 − ε)P̄ ≤ P_S ≤ (1 + ε)P̄. For fixed values of ε and δ, we see that the minimum number of settings k required for the above bound to hold with probability greater than 1 − δ is

k = C(r + 1) log(2D/δ),    (A.11)

where C := 4/(ε² log 2) and D := 2rd − r² − 1.