Quantum phase estimation of multiple eigenvalues for small-scale (noisy) experiments

Quantum phase estimation is the workhorse behind any quantum algorithm and a promising method for determining ground state energies of strongly correlated quantum systems. Low-cost quantum phase estimation techniques make use of circuits which only use a single ancilla qubit, requiring classical post-processing to extract eigenvalue details of the system. We investigate choices for phase estimation for a unitary matrix with low-depth noise-free or noisy circuits, varying both the phase estimation circuits themselves as well as the classical post-processing to determine the eigenvalue phases. We work in the scenario when the input state is not an eigenstate of the unitary matrix. We develop a new post-processing technique to extract eigenvalues from phase estimation data based on a classical time-series (or frequency) analysis and contrast this to an analysis via Bayesian methods. We calculate the variance in estimating single eigenvalues via the time-series analysis analytically, finding that it scales to first order in the number of experiments performed, and to first or second order (depending on the experiment design) in the circuit depth. Numerical simulations confirm this scaling for both estimators. We observe numerically that these post-processing techniques lead to a variance in estimating the lowest eigenvalue phase which scales to first order in the overlap between the starting and ground states, to second order in the gap between the ground and excited states, and to second order in the decoherence time of the system. We attempt to compensate for the noise with both classical post-processing techniques, finding good results in the presence of depolarizing noise, but smaller improvements in $9$-qubit circuit-level simulations of superconducting qubits aimed at resolving the electronic ground state of an $H_4$ molecule.

It is known that any problem efficiently solvable on a quantum computer can be formulated as eigenvalue sampling of a Hamiltonian or eigenvalue sampling of a sparse unitary matrix [1].
In this sense the algorithm of quantum phase estimation is the only quantum algorithm which can give rise to solving problems with an exponential quantum speed-up. Despite it being such a central component of many quantum algorithms, very little work has been done so far to understand what quantum phase estimation offers in the current NISQ era of quantum computing [2], where quantum devices are strongly coherence-limited. Quantum phase estimation comes in many variants, but a large subclass of these algorithms (e.g. the semi-classical version of textbook phase estimation [3,4], Kitaev's phase estimation [5], and Heisenberg-optimized versions [6]) are executed in an iterative sequential form using controlled-$U^k$ gates with a single ancilla qubit [7,8] (see Fig. 1), or by direct measurement of the system register itself [6]. Such circuits are of practical interest in the near term, when every additional qubit requires a larger chip and brings in additional experimental complexity and decoherence.

Some of the current literature on quantum phase estimation works under limiting assumptions. The first is that one starts in an eigenstate of the Hamiltonian [9,10]. A second limitation is that one does not take into account the (high) temporal cost of running $U^k$ [8] for large $k$ when optimizing phase estimation. The size and shallowness of the quantum phase estimation circuit is important since, in the absence of error correction or error mitigation, one expects entropy build-up during the computation. This means that circuits with large $k$ may not be of any practical interest.
The scenario where the input state is not an eigenstate of the unitary matrix used in phase estimation is the most interesting one from the perspective of applications, and we will consider it in this work. Such an input state can be gradually projected onto an eigenstate by the phase estimation algorithm, and the corresponding eigenvalue can be inferred. However, for coherence-limited low-depth circuits one may not be able to evolve for long enough to project well onto one of the eigenstates. This poses the question of what one can still learn about eigenvalues using low-depth circuits. An important point is that it is experimentally feasible to repeat many relatively shallow experiments (or perform them in parallel on different machines). Hence we ask what the spectral-resolving power of such phase estimation circuits is, both in terms of the number of applications of the controlled-$U$ circuit in a single experiment, and the number of times the experiment is repeated. Such repeated phase estimation experiments require classical post-processing of the measurement outcomes, and we study two algorithms for doing this. One is our adaptation of the Bayesian estimator of [10] to the multiple-eigenvalue scenario. The second is a new estimator based on treating the observed measurements as a time series and constructing the resultant time-shift operator. This latter method is very natural for phase estimation, as one interprets the goal of phase estimation as the reconstruction of the frequencies present in a temporal (sound) signal.
The spectral-resolution power of QPE can be defined by its scaling with parameters of the experiment and the studied system. We are able to derive analytic scaling laws for the problem of estimating single eigenvalues with the time-series estimator. We find these agree with the numerically-observed scaling of both studied estimators. For the more general situation, with multiple eigenvalues and experimental error, we study the error in estimating the lowest eigenvalue numerically. This is assisted by the low classical computation cost of both estimators. We observe scaling laws for this error in terms of the overlap between the ground and starting state (i.e. the input state of the circuit), the gap between the ground and excited states, and the coherence length of the system. In the presence of experimental noise we attempt to adjust our estimators to mitigate the induced estimation error. For depolarizing-type noise we find such compensation easy to come by, whilst for a realistic circuit-level simulation we find smaller improvements using similar techniques.

I. QUANTUM PHASE ESTIMATION
Quantum phase estimation (QPE) covers a family of quantum algorithms which measure a system register of $N_{\rm sys}$ qubits in the eigenbasis of a unitary operator $U$ [5,11],
$$U|\phi_j\rangle = e^{i\phi_j}|\phi_j\rangle,$$
to estimate one or many phases $\phi_j$. Quantum phase estimation algorithms assume access to a noise-free quantum circuit which implements $U$ on our system register conditioned on the state of an ancilla qubit. Explicitly, we require the ability to implement the controlled unitary
$$U_c = |0\rangle\langle 0| \otimes I + |1\rangle\langle 1| \otimes U,$$
where $|0\rangle$ and $|1\rangle$ are the computational basis states of the ancilla qubit, and $I$ is the identity operator on the system register. In many problems in condensed matter physics, materials science, or computational chemistry, the object of interest is the estimation of spectral properties or the lowest eigenvalue of a Hamiltonian $H$. The eigenvalue estimation problem for $H$ can be mapped to phase estimation for a unitary $U_\tau = \exp(-i\tau H)$, with $\tau$ chosen such that the relevant part of the eigenvalue spectrum induces phases within $[-\pi, \pi)$. Much work has been devoted to determining the most efficient implementation of the (controlled-)$\exp(-i\tau H)$ operation, using exact or approximate methods [11-14]. Alternatively, one may simulate $H$ via a quantum walk, mapping the problem to phase estimating the unitary $\exp(-i\arcsin(H/\lambda))$ for some $\lambda$, which may be implemented exactly [15-18]. In this work we do not consider such variations, but rather focus on the error in estimating the eigenvalue phases of the unitary $U$ that is actually implemented on the quantum computer. In particular, we focus on the problem of determining the value of a single phase $\phi_0$ to high precision (this phase could correspond, for example, to the ground state energy of some Hamiltonian $H$).
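As a concrete illustration of this mapping, the following sketch (a made-up $4 \times 4$ Hamiltonian, not one drawn from this work) picks $\tau$ so that all eigenphases of $U_\tau = \exp(-i\tau H)$ land in $[-\pi, \pi)$:

```python
import numpy as np

# Toy 4x4 Hamiltonian (a made-up example, not the H_4 molecule).
H = np.array([[0.5, 0.2, 0.0, 0.0],
              [0.2, -0.3, 0.1, 0.0],
              [0.0, 0.1, 0.8, 0.2],
              [0.0, 0.0, 0.2, -0.6]])

E = np.linalg.eigvalsh(H)

# Choose tau so that tau * E_j lies strictly inside [-pi, pi):
# any tau < pi / max|E_j| works; we leave a 10% margin.
tau = 0.9 * np.pi / np.max(np.abs(E))

# Eigenphases of U_tau = exp(-i tau H): phi_j = -tau * E_j (mod 2 pi).
phi = np.angle(np.exp(-1j * tau * E))
```

With this margin the map from energies to phases is one-to-one, so the energies can be read back as $E_j = -\phi_j/\tau$.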
Phase estimation requires the ability to prepare an input, or starting, state with good overlap with the ground state. We write this state in the eigenbasis of $U$ as
$$|\Psi\rangle = \sum_j a_j |\phi_j\rangle, \qquad A_j = |a_j|^2,$$
and require $A_0 \gg 0$. Note here that the spectrum of $U$ may have exact degeneracies (e.g. those enforced by symmetry) which phase estimation does not distinguish; we count degenerate eigenvalues as a single $\phi_j$ throughout this work. The ability to start quantum phase estimation in a state which already has good overlap with the ground state is a nontrivial requirement for the applicability of the quantum phase estimation algorithm. On the other hand, it is a well-known necessity given the QMA-hardness [19] of the lowest eigenvalue problem. For many quantum chemistry and materials science problems it is known or expected that the Hartree-Fock state has good overlap with the ground state, although rigorous results beyond perturbation theory are few and far between (see e.g. [20]). Beyond this, either adiabatic evolution [12,21] or variational quantum eigensolvers [22] can provide an approximate starting state to improve on via phase estimation.
Phase estimation is not limited to simply learning the value of $\phi_0$; it may obtain information about all phases $\phi_j$ as long as $A_j > 0$. However, the resources required to estimate $\phi_j$ are bounded by $1/A_j$. To see this, note that the controlled unitary $U_c$ does not mix eigenstates, and so there is no difference (in the absence of error) between starting with $|\Psi\rangle$ and the mixed state
$$\rho = \sum_j A_j |\phi_j\rangle\langle\phi_j|.$$
The latter is then equivalent to preparing the pure state $|\phi_j\rangle$ with probability $A_j$, so if $N$ preparations of $|\phi_j\rangle$ are required to estimate $\phi_j$ to an error $\epsilon$, the same error margin requires at least $N/A_j$ preparations of the state $|\Psi\rangle$. As the number of eigenstates $N_{\rm eig}$ with non-zero contribution to $|\Psi\rangle$ generally scales exponentially with the system size $N_{\rm sys}$, estimating more than the first few $\phi_j$ (ordered by the magnitude of $A_j$) will be unfeasible.

Low-cost (in terms of number of qubits) quantum phase estimation may be performed by entangling the system register with a single ancilla qubit [5,8,10,19]. In Fig. 1, we give the general form of the quantum circuit used throughout this paper. An experiment, labeled by a number $n = 1, \ldots, N$, can be split into one or multiple rounds $r = 1, \ldots, R_n$, following the preparation of the starting state $|\Psi\rangle$. In each round a single ancilla qubit prepared in the $|+\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ state controls $U_c^{k_r}$, where the integer $k_r$ can vary per round. The ancilla qubit is then rotated by $R_z(\beta_r)$ (with the phase $\beta_r$ possibly depending on other rounds in the same experiment) and read out in the $X$-basis, returning a measurement outcome $m_r \in \{0, 1\}$. We denote the strings of chosen integers and phases of a single multi-round experiment by $\vec k$ and $\vec\beta$. We denote the number of controlled-$U$ applications per experiment as $K = \sum_{r=1}^{R_n} k_r$.
We denote the total number of controlled-$U$ applications over all experiments as $K_{\rm tot} = \sum_{n=1}^{N} K^{(n)}$. As the system register is held in memory during the entire time of the experiment, the choice of $K$ is dictated by the coherence time of the underlying quantum hardware. Hence, we introduce a dimensionless coherence length
$$K_{\rm err} = \frac{T_{\rm err}}{N_{\rm sys}\, T_U}.$$
Here $T_U$ is the time required to implement a single application of controlled-$U$ in Eq. (7), and $T_{\rm err}$ is the time-to-error of a single qubit, so that $T_{\rm err}/N_{\rm sys}$ is the time-to-failure of $N_{\rm sys}$ qubits. The idea is that $K_{\rm err}$ bounds the maximal number of applications of $U$ in an experiment, namely $K \leq K_{\rm err}$.
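For a sense of scale, a short calculation with $K_{\rm err} = T_{\rm err}/(N_{\rm sys} T_U)$ and purely illustrative device numbers (not measured values from any experiment in this work):

```python
# Purely illustrative device numbers: a 9-qubit register, 1 us per
# application of controlled-U, 1 ms single-qubit time-to-error.
T_U = 1e-6       # seconds per controlled-U application
T_err = 1e-3     # single-qubit time-to-error, in seconds
N_sys = 9        # number of system qubits

# Dimensionless coherence length K_err = T_err / (N_sys * T_U),
# bounding the controlled-U applications per experiment: K <= K_err.
K_err = T_err / (N_sys * T_U)
K_max = int(K_err)   # largest usable integer K
```

With these (hypothetical) numbers only on the order of a hundred applications of controlled-$U$ fit within the coherence budget of a single experiment.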
A new experiment starts with the same starting state $|\Psi\rangle$. Values of $k_r$ and $\beta_r$ may be chosen independently for separate experiments $n$; we drop the label $n$ for convenience. We further drop the subscript $r$ for single-round experiments (with $R = 1$).
In the absence of error, one may calculate the action of the QPE circuit on the starting state (defined in Eq. (3)). Working in the eigenbasis of $U$ on the system register, and the computational basis on the ancilla qubit, the state following the controlled rotation $U_c^{k_1}$ and the rotation $R_z(\beta_1)$ on the ancilla qubit is
$$\frac{1}{\sqrt{2}} \sum_j a_j |\phi_j\rangle \otimes \left(|0\rangle + e^{i(k_1\phi_j + \beta_1)}|1\rangle\right).$$
The probability to measure the ancilla qubit in the $X$-basis as $m_1 \in \{0, 1\}$ is then
$$P(m_1) = \sum_j A_j \cos^2\left(\frac{k_1\phi_j + \beta_1 + m_1\pi}{2}\right),$$
and the unnormalized post-selected state of the system register is
$$\sum_j a_j\, e^{\frac{i}{2}(k_1\phi_j + \beta_1)} \cos\left(\frac{k_1\phi_j + \beta_1 + m_1\pi}{2}\right) |\phi_j\rangle.$$
The above procedure may then be repeated for $r$ rounds to obtain the probability of a string $\vec m$ of measurement outcomes of one experiment as
$$P_{\vec k, \vec\beta}(\vec m|\vec\phi, \vec A) = \sum_j A_j \prod_{r=1}^{R} \cos^2\left(\frac{k_r\phi_j + \beta_r + m_r\pi}{2}\right).$$
Here, $\vec\phi$ is the vector of phases $\phi_j$ and $\vec A$ the vector of probabilities for the different eigenstates. We note that $P_{\vec k,\vec\beta}(\vec m|\vec\phi, \vec A)$ is independent of the order in which the rounds occur in the experiment. Furthermore, when $N_{\rm eig} = 1$, $P_{\vec k,\vec\beta}(\vec m|\phi) = P_{\vec k,\vec\beta}(\vec m|\vec\phi, \vec A)$ is equal to the product of the single-round probabilities $P_{k_r,\beta_r}(m_r|\phi)$, as there is no difference between a multi-round experiment and the same rounds repeated across individual experiments.
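The multi-round outcome probability above is straightforward to evaluate classically for small $N_{\rm eig}$. A minimal sketch (the eigenphases, weights, and round parameters are made-up examples):

```python
import numpy as np
from itertools import product

def qpe_outcome_prob(m, k, beta, phi, A):
    """Probability of the outcome string m in a multi-round QPE
    experiment with per-round powers k and final rotations beta,
    for eigenphases phi carrying weights A in the starting state."""
    m, k, beta = (np.asarray(x, dtype=float) for x in (m, k, beta))
    return sum(A_j * np.prod(np.cos((k * phi_j + beta + m * np.pi) / 2) ** 2)
               for phi_j, A_j in zip(phi, A))

# Made-up example: two eigenphases, one 3-round experiment.
phi, A = [0.4, 1.3], [0.7, 0.3]
k, beta = [1, 2, 4], [0.0, np.pi / 2, 0.0]

# The probabilities over all 2^3 outcome strings sum to one.
total = sum(qpe_outcome_prob(m, k, beta, phi, A)
            for m in product([0, 1], repeat=3))
```

Since the formula is a sum over eigenstates of products of per-round terms, its cost grows only linearly in $N_{\rm eig}$ and in the number of rounds, independently of $N_{\rm sys}$.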

II. CLASSICAL DATA ANALYSIS
Two challenges are present in determining $\phi_0$ from QPE experiments. First, we only ever have inexact sampling knowledge of $P_{\vec k,\vec\beta}(\vec m|\vec\phi,\vec A)$. That is, repeated experiments at fixed $\vec k, \vec\beta$ do not directly determine $P_{\vec k,\vec\beta}(\vec m|\vec\phi,\vec A)$, but rather sample from this multinomial distribution. From the measurement outcomes we can try to estimate $P_{\vec k,\vec\beta}(\vec m|\vec\phi,\vec A)$ (and from this $\phi_0$) as a hidden variable. Secondly, when $N_{\rm eig} > 1$, determining $\phi_0$ from $P_{\vec k,\vec\beta}(\vec m|\vec\phi,\vec A)$ poses a non-trivial problem.
Let us first consider the case $N_{\rm eig} = 1$, and assume that we perform single-round experiments with a fixed $k$ for each experiment. Naturally, taking $k = 1$ gives rise to the lowest-depth experiments. If we start these experiments with $k = 1$ in the eigenstate $|\phi_0\rangle$, then one can easily prove that taking $\beta = 0$ for half of the experiments and $\beta = \frac{\pi}{2}$ for the other half suffices to estimate $\phi_0$ with variance scaling as $\sim 1/N = 1/K_{\rm tot}$. This result can be derived using standard Chernoff bounds (see e.g. [23,24]), and represents standard sampling or shot-noise behavior. When $N_{\rm eig} = 1$, $N$ $K$-round experiments, each with $k = 1$, are indistinguishable from $N \times K$ single-round experiments with $k = 1$. This implies that the same scaling holds for such multi-round experiments, i.e. the variance scales as $1/(NK) = 1/K_{\rm tot}$.
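The $k = 1$, $\beta \in \{0, \pi/2\}$ strategy for a single eigenphase can be simulated directly. The sketch below (with illustrative parameters, not the experiments of this paper) estimates $\cos\phi_0$ and $\sin\phi_0$ from the two outcome frequencies and exhibits the shot-noise behavior described above:

```python
import numpy as np

rng = np.random.default_rng(7)

def estimate_phase_k1(phi, n_exp):
    """Estimate a single eigenphase from n_exp single-round k=1
    experiments, half with beta = 0 and half with beta = pi/2."""
    n_half = n_exp // 2
    # Outcome m = 0 occurs with probability (1 + cos(phi + beta)) / 2.
    p_beta0 = rng.binomial(n_half, (1 + np.cos(phi)) / 2) / n_half
    p_beta1 = rng.binomial(n_half, (1 + np.cos(phi + np.pi / 2)) / 2) / n_half
    cos_est = 2 * p_beta0 - 1          # estimates cos(phi)
    sin_est = 1 - 2 * p_beta1          # estimates sin(phi)
    return np.arctan2(sin_est, cos_est)

phi_true = 0.813
errs = [abs(estimate_phase_k1(phi_true, 10_000) - phi_true)
        for _ in range(200)]
# The spread of the estimate shrinks as 1/sqrt(n_exp) (shot noise).
```

Doubling `n_exp` halves the variance of the returned estimate, consistent with the $1/N$ scaling above.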
Once the phase $\phi_0$ is known to sufficient accuracy, performing QPE experiments with $k > 1$ is instrumental in resolving $\phi_0$ in more detail, since the probability of a single-round outcome depends on $k\phi_0$ [6]. Once one knows with sufficient certainty that $\phi_0 \in [(2m-1)\pi/k, (2m+1)\pi/k)$ (for integer $m$), one can achieve a variance scaling as $O(\frac{1}{k^2 N})$ (conforming to the local-estimation Cramér-Rao bounds suggested in [10,25]). A method achieving Heisenberg scaling, where the variance scales as $\sim 1/K_{\rm tot}^2$ (see Eq. (5)), was analyzed in [6,23]. This QPE method can also be compared with the information-theoretically optimal maximum-likelihood phase estimation method of [8], where $N \sim \log K$ experiments are performed, each choosing a random $k \in \{1, \ldots, K\}$, to resolve $\phi_0$ with error $\sim 1/K$. The upshot of these previous results is that, while the variance scaling in terms of the total number of unitaries goes like $1/K_{\rm tot}$ when using $k = 1$, clever usage of $k > 1$ data can lead to $1/K_{\rm tot}^2$ scaling. However, as $K$ is limited by $K_{\rm err}$ in near-term experiments, this optimal Heisenberg scaling may not be accessible.
When $N_{\rm eig} > 1$, the above challenge is complicated by the need to resolve the phase $\phi_0$ from the other $\phi_j$. This is analogous to the problem of resolving a single note from a chord. Repeated single-round experiments at a fixed $k$ and varying $\beta$ can only give information about the value of the function
$$g(k) = \sum_j A_j e^{ik\phi_j}$$
at this fixed $k$, since
$$P_{k,\beta}(m|\vec\phi,\vec A) = \frac{1}{2}\left(1 + (-1)^m\, {\rm Re}\left[e^{i\beta} g(k)\right]\right).$$
This implies that information from single-round experiments at a fixed $k$ is insufficient to resolve $\phi_0$ when $N_{\rm eig} > 1$, as $g(k)$ is then not an invertible function of $\phi_0$ (try to recover a frequency from a sound signal at a single point in time!). In general, for multi-round experiments using a maximum of $K$ total applications of $U_c$, we may only ever recover $g(k)$ for $k \leq K$. This can be seen by expanding $P_{\vec k,\vec\beta}(\vec m|\vec\phi,\vec A)$ as a sum of $\sum_j A_j \cos^m(\phi_j)\sin^n(\phi_j)$ terms with $m + n \leq K$, which are in turn linear combinations of $g(k)$ for $k \leq K$. As we will show explicitly in Section II A, this allows us to recover up to $K$ phases $\phi_j$. However, when $N_{\rm eig} > K$, these arguments imply that we cannot recover any phases exactly. In this case, the accuracy to which we can estimate our target $\phi_0$ is determined by the magnitude of the amplitude $A_0$ in the initial state $|\Psi\rangle$, as well as by the gap to the other eigenvalues. For example, in the limit $A_0 \to 1$, an unbiased estimation of $\phi_0$ using data from $k = 1$ would be
$$\tilde\phi_0 = \arg[g(1)],$$
and the error in this estimation is bounded as
$$|\tilde\phi_0 - \phi_0| \leq \arcsin\left(\frac{1 - A_0}{A_0}\right),$$
with this bound being independent of $N_{\rm eig}$. We are unable to extend this analysis beyond the $k = 1$ scenario, and instead we study the scaling of this estimation numerically in Sec. III.
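The non-invertibility of $g$ at a single $k$ is easy to exhibit: the two (made-up) spectra below produce identical $g(1)$ but differ at $g(2)$:

```python
import numpy as np

def g(k, phi, A):
    """g(k) = sum_j A_j exp(i k phi_j) for a spectrum (phi, A)."""
    return np.sum(np.asarray(A) * np.exp(1j * k * np.asarray(phi)))

# Spectrum 1: two phases straddling zero with equal weight.
phi1, A1 = [0.5, -0.5], [0.5, 0.5]

# Spectrum 2: phases 0 and pi, weighted so that g(1) comes out the same.
w = (1 + np.cos(0.5)) / 2
phi2, A2 = [0.0, np.pi], [w, 1 - w]

# Identical at k = 1, distinguishable only at higher k.
g1_pair = g(1, phi1, A1), g(1, phi2, A2)
g2_pair = g(2, phi1, A1), g(2, phi2, A2)
```

Both spectra give $g(1) = \cos(0.5)$, so no amount of sampling at $k = 1$ alone can tell them apart; the first difference appears at $k = 2$.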
In the remainder of this section, we present two estimators for multi-round QPE. The first is a new estimator based on a time-series analysis of the function g(k) that has a low computation overhead. The second is a Bayesian estimator similar to that of [10], but adapted for multiple eigenphases φ j .

A. Time-series analysis
Let us assume that the function $g(k)$ in Eq. (11) is well estimated at all points $0 \leq k \leq K$, the number of experiments $N$ being sufficiently large. We may extend this function to all points $-K \leq k \leq K$ using the identity $g(-k) = g^*(k)$ to obtain a longer signal [26]. We wish to determine the dominant frequencies $\phi_j$ in the signal $g(k)$ as a function of 'time' $k$. This can be done by constructing and diagonalizing a time-shift matrix $T$ whose eigenvalues are the relevant frequencies in the signal, as follows.
We first demonstrate the existence of the time-shift matrix $T$ in the presence of $N_{\rm eig} < K$ separate frequencies. Since we may not know $N_{\rm eig}$, let us first estimate it as $l$. We then define the vectors
$$\vec g(k) = (g(k), g(k+1), \ldots, g(k+l))^T, \qquad k = -K, \ldots, K - l.$$
These vectors can be decomposed in terms of the single-frequency vectors $\vec b_j = (1, e^{i\phi_j}, \ldots, e^{il\phi_j})^T$ as
$$\vec g(k) = \sum_j A_j e^{ik\phi_j}\, \vec b_j.$$
We can make a matrix $B$ with the vectors $\vec b_j$ as its $N_{\rm eig}$ columns. When $N_{\rm eig} \leq l$, the columns of $B$ are typically [27] linearly independent, hence the non-square matrix $B$ has a left pseudoinverse $B^{-1}$ such that $B^{-1}B = I$. Defining the diagonal matrix $D$ with entries $D_{j,j} = e^{i\phi_j}$ and $T = BDB^{-1}$, we find
$$T\,\vec g(k) = \sum_j A_j e^{ik\phi_j}\, BDB^{-1}\vec b_j = \sum_j A_j e^{i(k+1)\phi_j}\,\vec b_j = \vec g(k+1).$$
This implies that $T$ acts as the time-shift operator mapping $\vec g(k)$ to $\vec g(k+1)$. As the eigenvalues of $T$ are precisely the required phases $e^{i\phi_j}$ in the case $N_{\rm eig} \leq l$, constructing and diagonalizing $T$ will obtain our desired phases, including $\phi_0$. When $N_{\rm eig} > l$, the eigen-equation for $T$ cannot have the solutions $\vec b_j$, since these are no longer linearly independent.
The above proof of existence does not give a method for constructing the time-shift operator $T$, as we do not have access to the matrices $B$ or $D$. To construct $T$ from the data that we do have access to, we construct the matrices
$$G^{(0)} = \left(\vec g(-K), \vec g(-K+1), \ldots\right), \qquad G^{(1)} = \left(\vec g(-K+1), \vec g(-K+2), \ldots\right),$$
whose columns are the available vectors $\vec g(k)$ and their unit time-shifts. We can thus attempt to find $T$ as a solution of the (least-squares) problem of minimizing $\|TG^{(0)} - G^{(1)}\|$. The rank of the obtained $\tilde T$ is bounded by the rank of $G^{(0)}$. We have that ${\rm rank}(G^{(0)})$ is at most $N_{\rm eig}$, since it is a sum over rank-1 matrices. At the same time, ${\rm rank}(G^{(0)}) \leq \min(l, 2K+1-l)$. This implies that we require both $l \geq N_{\rm eig}$ and $2K+1-l \geq N_{\rm eig}$ to obtain a shift matrix $T$ with $N_{\rm eig}$ eigenvalues. This is only possible when $K \geq N_{\rm eig}$, giving an upper bound on the number of obtainable frequencies. When $G^{(0)}$ is not full rank (because $N_{\rm eig} < l$), this minimization may have multiple zeros $\tilde T$. However, when $N_{\rm eig} < l$, each of these must satisfy $\tilde T\,\vec g(k) = \vec g(k+1)$ for $-K < k < K - l$. Then, as long as ${\rm rank}(G^{(0)}) = N_{\rm eig}$, the vectors $\vec b_j$ lie in the span of the columns of $G^{(0)}$. It follows that
$$\tilde T\, \vec b_j = e^{i\phi_j}\, \vec b_j,$$
so every $\tilde T$ obtained in this way must have the eigenvalues $e^{i\phi_j}$.
The above analysis is completely independent of the coefficients $A_j$. However, once the eigenvalues $\phi_j$ are known, the matrix $B$ (Eq. (15)) may be constructed, and the $A_j$ may be recovered by a subsequent least-squares minimization of
$$\sum_{k=-K}^{K} \Big| g(k) - \sum_j A_j e^{ik\phi_j} \Big|^2$$
over the $A_j$. This allows us to identify spurious eigenvalues when $l > N_{\rm eig}$ (as these will have a corresponding zero amplitude). Numerically, we find no disadvantage in then choosing the largest $l$ permitted by our data, namely $l = K$. Assuming a sufficient number of repetitions $N$, these arguments imply that this strategy requires $K \geq N_{\rm eig}$ to determine all eigenvalues accurately. However, when $K < N_{\rm eig}$ there still exists a least-squares solution $\tilde T$ that minimizes $\|\tilde T G^{(0)} - G^{(1)}\|$. When $A_0 \gg 0$, we expect that $\tilde T$ should have an eigenvalue $e^{i\tilde\phi_0} \approx e^{i\phi_0}$ that we can take as the estimator for $\phi_0$; the same is true for any other $\phi_j$ with sufficiently large $A_j$. In Fig. 2 we show an example of the convergence of this estimation for multiple eigenvalues $\phi_j$ as $K \to N_{\rm eig}$ in the case where $g(k)$ is known precisely (i.e. in the absence of sampling noise). The error $|\tilde\phi_0 - \phi_0|$ when $K < N_{\rm eig}$ depends on the eigenvalue gap above $\phi_0$, as well as on the relative weights $A_j$, as we will see in Section III B.
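A minimal numpy sketch of this construction on a noise-free signal with made-up phases and amplitudes (the window length $l$ and the magnitude cutoff used to discard the spurious near-zero eigenvalue of the rank-deficient least-squares solution are illustrative choices):

```python
import numpy as np

def phases_from_signal(g, K, l):
    """Recover eigenphases from g(k), k = -K..K (stored so that
    g[K + k] holds g(k)), via the time-shift construction: build the
    column matrices G0, G1 of windows g(k) and their unit shifts,
    solve T G0 ~ G1 in least squares, and read the phases off the
    eigenvalues of T."""
    cols = 2 * K - l                       # windows g(k), k = -K .. K-l-1
    G0 = np.array([g[c:c + l + 1] for c in range(cols)]).T
    G1 = np.array([g[c + 1:c + l + 2] for c in range(cols)]).T
    # Least squares for T in T G0 = G1, via the transposed system.
    X, *_ = np.linalg.lstsq(G0.T, G1.T, rcond=None)
    T = X.T
    evals = np.linalg.eigvals(T)
    # Discard spurious (near-zero) eigenvalues of the rank-deficient
    # solution; the genuine ones sit on the unit circle.
    return np.angle(evals[np.abs(evals) > 0.5])

# Noise-free signal with three made-up frequencies and weights.
phi = np.array([-1.7, 0.3, 2.1])
A = np.array([0.5, 0.3, 0.2])
K, l = 6, 3
ks = np.arange(-K, K + 1)
g_sig = (A[None, :] * np.exp(1j * np.outer(ks, phi))).sum(axis=1)

est = np.sort(phases_from_signal(g_sig, K, l))
```

In this noise-free setting the least-squares solution is exact and the three phases are recovered to machine precision; with sampled data the same routine returns approximate phases.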
In Appendix B we derive the variance that can be obtained with this time-series method in the case $l = N_{\rm eig} = 1$, using single-round circuits with $k = 1$ up to $K$. Our analysis leads to a scaling
$${\rm Var}(\tilde\phi_0) = O\left(\frac{1}{NK^2}\right)$$
in $N$ and $K$. We will compare these results to numerical simulations in Sec. III A.

Estimating g(k)
The function $g(k)$ cannot be estimated directly from experiments, but may instead be obtained as a linear combination of the probabilities $P_{k,\beta}(m|\vec\phi,\vec A)$ for different values of $k$ and $\beta$. For single-round experiments, this combination is simple to construct:
$$g(k) = \left[P_{k,0}(0|\vec\phi,\vec A) - P_{k,0}(1|\vec\phi,\vec A)\right] - i\left[P_{k,\pi/2}(0|\vec\phi,\vec A) - P_{k,\pi/2}(1|\vec\phi,\vec A)\right].$$
For multi-round experiments, the combination is more complicated. In general, $P_{\vec k,\vec\beta}(\vec m|\vec\phi,\vec A)$ is a linear combination of the real and imaginary parts of $g(l)$ with $l \leq K = \sum_r k_r$. This combination may be constructed by writing $\cos^2(k\phi_j/2 + \beta/2)$ and $\sin^2(k\phi_j/2 + \beta/2)$ in terms of exponentials, and expanding. However, inverting this linear equation is a difficult task and subject to numerical imprecision. For some fixed choices of experiments, it is possible to provide an explicit expansion. Here we focus on $K$-round $k = 1$ experiments with $K/2$ final rotations $\beta = 0$ and $K/2$ final rotations $\beta = \frac{\pi}{2}$ during each experiment (choosing $K$ even). The formula for $P_{\vec k,\vec\beta}(\vec m|\vec\phi,\vec A)$ is independent of the order in which these rounds occur. Let us write $P(m, n|\vec\phi,\vec A)$ for the probability of seeing both $m \in \{0, \ldots, K/2\}$ outcomes with $m_r = 1$ in the $K/2$ rounds with $\beta_r = 0$, and $n \in \{0, \ldots, K/2\}$ outcomes with $m_r = 1$ in the $K/2$ rounds with $\beta_r = \pi/2$. That is, $m$ and $n$ are the Hamming weights of the measurement vector $\vec m$ split into the two types of rounds described above. Then one can prove that, for $0 \leq k \leq K/2$, $g(k)$ can be written as an explicit linear combination of the probabilities $P(m,n|\vec\phi,\vec A)$, with coefficients involving combinatorial factors (Eq. (24)). The proof of this equality can be found in Appendix A.
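For single-round data, the estimation of $g(k)$ from sampled outcome frequencies can be sketched as follows (simulated binomial sampling with made-up phases and weights, under the sign conventions for $\beta$ used in this section):

```python
import numpy as np

rng = np.random.default_rng(11)

def sample_g(phi, A, K, n_per_k):
    """Estimate g(k) for k = 0..K from simulated single-round
    experiments: n_per_k shots at each k for each of the final
    rotations beta = 0 and beta = pi/2."""
    phi, A = np.asarray(phi), np.asarray(A)
    g_est = np.empty(K + 1, dtype=complex)
    g_est[0] = 1.0   # g(0) = sum_j A_j = 1 needs no experiments
    for k in range(1, K + 1):
        re, im = A @ np.cos(k * phi), A @ np.sin(k * phi)
        # P_{k,beta}(m=0) = (1 + Re[e^{i beta} g(k)]) / 2.
        p0_b0 = rng.binomial(n_per_k, (1 + re) / 2) / n_per_k
        p0_b1 = rng.binomial(n_per_k, (1 - im) / 2) / n_per_k
        # Re g = P_{k,0}(0) - P_{k,0}(1); Im g = -(P_{k,pi/2}(0) - P_{k,pi/2}(1)).
        g_est[k] = (2 * p0_b0 - 1) - 1j * (2 * p0_b1 - 1)
    return g_est

phi = np.array([0.4, -1.1])
A = np.array([0.6, 0.4])
g_hat = sample_g(phi, A, K=10, n_per_k=200_000)
g_true = np.array([A @ np.exp(1j * k * phi) for k in range(11)])
```

Each point of $g(k)$ inherits binomial sampling noise of order $1/\sqrt{n_{\rm per\,k}}$, which then propagates into the least-squares fit for $T$.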
Calculating $g(k)$ from multi-round ($k = 1$) experiments carries an additional cost: the combinatorial factors in Eq. (24) relate the variance in $g(k)$ to the variance in $P(m,n|\vec\phi,\vec A)$, and these pre-factors can increase exponentially in $k$. This can be accounted for by replacing the least-squares fit used above with a weighted least-squares fit, so that one effectively relies less on the correctness of $g(k)$ for large $k$. To do this, we construct the matrix $T$ row-wise: its $i$th row $\vec t_i$ solves $\vec t_i\, G^{(0)} = G^{(1)}_i$, with $G^{(1)}_i$ the $i$th row of $G^{(1)}$. This equation may be weighted by dividing the columns of $G^{(0)}$ and the entries of $G^{(1)}_i$ by $\sigma_{G^{(1)}_{i,j}}$, where $\sigma_{G^{(1)}_{i,j}}$ is the standard deviation in our estimate of $G^{(1)}_{i,j}$. Note that the method of weighted least squares is only designed to account for error in the dependent variable of a least-squares fit, in our case $G^{(1)}$; the sampling error in $G^{(0)}$ remains unaccounted for. This enhanced effect of the sampling error makes the time-series analysis unstable for large $K$. We can analyze how this weighting alters the previous variance analysis when $N_{\rm eig} = 1$. Taking this into account (see the derivation in Appendix B), we find a scaling
$${\rm Var}(\tilde\phi_0) = O\left(\frac{1}{NK}\right)$$
for a time-series analysis applied to multi-round $k = 1$ experiments.

Classical computation cost
In practice, the time-series analysis can be split into three calculations: (1) estimation of $P_{k,\beta}(m|\vec\phi,\vec A)$ or $P(m,n|\vec\phi,\vec A)$; (2) calculation of $g(k)$ from these probabilities via Eq. (23) or Eq. (24); and (3) estimation of the phases $\vec\phi$ from $g(k)$. Clearly, (2) and (3) need only be done once for the entire set of experiments.
The estimation of the phases $\vec\phi$ requires solving two least-squares problems, with cost $O(l^2 K)$ (recalling that $l$ is the number of frequencies to estimate, and $K$ is the maximum argument at which $g(k)$ is known), and diagonalizing the matrix $T$, with cost $O(l^3)$. For single-round phase estimation this is the dominant calculation, as calculating $g(k)$ from Eq. (23) requires simply $K$ additions. As a result this estimator proves to be remarkably fast, able to estimate one frequency from a set of $N = 10^6$ experiments with up to $K = 10000$ in $< 100$ ms, and $l = 1000$ frequencies from $N = 10^6$ experiments with $K = 1000$ in $< 1$ min. However, for multi-round phase estimation the calculation of $g(k)$ via Eq. (24) scales as $O(K^4)$. This then dominates the calculation, requiring 30 s to calculate 50 points of $g(k)$. (All calculations were performed on a 2.4 GHz Intel i3 processor.) We note that all the above times are small fractions of the time required to generate the experimental data when $N \gg K$, making this a very practical estimator for near-term experiments.

B. Efficient Bayesian analysis
When the starting state is the eigenstate $|\phi_0\rangle$, the problem of determining $\phi_0$ from the obtained multi-experiment data has a natural solution via Bayesian methods [10,28]. Here we extend this Bayesian methodology to a general starting state. For computational efficiency we store a probability distribution $P(\phi)$ over phases using a Fourier representation of this periodic function (see Appendix C). This technique can also readily be applied to Bayesian phase estimation on a single eigenstate.
An information-theoretically optimal Bayesian strategy is to choose $\vec\phi$ and $\vec A$ based on the data obtained over some $N$ experiments [8]. After these $N$ experiments, leading to data $\{\vec m_i\}_{i=1}^{N}$, one can simply choose the $\vec A, \vec\phi$ which maximize the posterior distribution
$$P(\vec\phi, \vec A \,|\, \{\vec m_i\}) \propto \left[\prod_{i=1}^{N} P_{\vec k_i,\vec\beta_i}(\vec m_i|\vec\phi,\vec A)\right] P_{\rm prior}(\vec\phi, \vec A).$$
A possible way of implementing this strategy is to (1) assume the prior distribution to be independent of $\vec A$ and $\vec\phi$ and (2) estimate the maximum by assuming that the derivatives with respect to $\vec A$ and $\vec\phi$ vanish at this maximum.
Instead of this method, we update our probability distribution over $\vec\phi$ and $\vec A$ after each experiment. After experiment $n$, the posterior distribution $P_n(\vec\phi, \vec A)$ via Bayes' rule reads
$$P_n(\vec\phi, \vec A) \propto P_{\vec k,\vec\beta}(\vec m^{(n)}|\vec\phi, \vec A)\, P_{n-1}(\vec\phi, \vec A).$$
To calculate the updates we will assume that the distributions over the phases $\phi_j$ and the probabilities $\vec A$ are independent, that is,
$$P_n(\vec\phi, \vec A) \approx P_n^{\rm red}(\vec A) \prod_j P_n^j(\phi_j).$$
As prior distribution we take $P_0(\vec\phi, \vec A) = P_{\rm prior}(\vec A) P_{\rm prior}(\vec\phi)$, with a flat prior $P_{\rm prior}(\vec\phi) = (\frac{1}{2\pi})^{N_{\rm eig}}$, given the absence of a more informed choice. We take $P_{\rm prior}(\vec A) \propto e^{-(\vec A - \vec A_0)^T \Sigma^{-2} (\vec A - \vec A_0)/2}$, with $\vec A_0$ and $\Sigma^2$ an approximate mean and covariance matrix. We need this prior on $\vec A$ to break the symmetry of the problem, so that $\tilde\phi_0$ estimates $\phi_0$ and not any of the other $\phi_j$. We numerically find that the estimator convergence is relatively insensitive to our choice of $\vec A_0$ and $\Sigma^2$.
The approximation in Eq. (31) allows for relatively fast calculation of the Bayesian update of $P_n^j(\phi_j)$, and of an approximation to the maximum-likelihood estimate of $P_n^{\rm red}(\vec A)$. Details of this computational implementation are given in App. C 1.
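For a single eigenphase, the Fourier-represented Bayesian update can be sketched as follows. The truncation order, prior, and experiment schedule are illustrative choices (not those of App. C), and the estimate at the end is the circular mean read off the $t = -1$ Fourier coefficient:

```python
import numpy as np

rng = np.random.default_rng(3)

N_FREQ = 512  # truncation order of the Fourier representation

def bayes_update(c, m, k, beta):
    """Multiply the density P(phi) = sum_t c[t + N_FREQ] e^{i t phi}
    by the single-round likelihood (1 + (-1)^m cos(k phi + beta)) / 2
    and renormalize so that c_0 stays 1 / (2 pi)."""
    sign = (-1) ** m
    up = np.zeros_like(c)
    down = np.zeros_like(c)
    up[k:] = c[:-k]       # coefficients shifted by e^{+i k phi}
    down[:-k] = c[k:]     # coefficients shifted by e^{-i k phi}
    c_new = c / 2 + sign / 4 * (np.exp(1j * beta) * up
                                + np.exp(-1j * beta) * down)
    return c_new / (2 * np.pi * c_new[N_FREQ].real)

phi_true = -0.735
c = np.zeros(2 * N_FREQ + 1, dtype=complex)
c[N_FREQ] = 1 / (2 * np.pi)    # flat prior on [-pi, pi)

for _ in range(500):
    k = int(rng.integers(1, 21))
    beta = rng.uniform(0, 2 * np.pi)
    p0 = (1 + np.cos(k * phi_true + beta)) / 2
    m = int(rng.random() > p0)   # sample the measurement outcome
    c = bayes_update(c, m, k, beta)

# Posterior circular mean: E[e^{i phi}] = 2 pi c_{-1}.
phi_est = np.angle(c[N_FREQ - 1])
```

Each update is a sparse shift-and-add on the coefficient vector, which is what makes the Fourier representation cheap; the truncation at `N_FREQ` is what introduces the truncation error discussed below.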

Classical computation cost
In contrast to the time-series estimator, the Bayesian estimator incurs a computational cost in processing the data from each individual experiment. On the other hand, obtaining the estimate $\tilde\phi_0$ of $\phi_0$ is simple once one has the probability distribution $P^{j=0}(\phi)$: we take the circular mean,
$$\tilde\phi_0 = \arg\left[\int_{-\pi}^{\pi} d\phi\; e^{i\phi}\, P^{j=0}(\phi)\right].$$
A key parameter here is the number of frequencies $N_{\rm freq}$ stored in the Fourier representation of $P(\phi)$; each update requires multiplying a vector of length $N_{\rm freq}$ by a sparse matrix. Our approximation scheme for calculating the update to $\vec A$ makes this multiplication the dominant time cost of the estimation. Unfortunately, we find a large truncation error when $N_{\rm freq} \ll N$, and so the computation cost scales as $N^2$. In practice we find that processing the data from $N < 10^4$ experiments takes seconds on a classical computer, but processing more than $10^5$ experiments rapidly becomes unfeasible.

C. Experiment design
Based on the considerations above we seek to compare some choices for the meta-parameters in each experiment, namely the number of rounds, and the input parameters k r and β r for each round.
Previous work [10,29], which took as starting state the eigenstate $|\phi_0\rangle$, formulated a choice of $k$ and $\beta$ for single-round experiments with Bayesian processing, in which $k$ is chosen inversely proportional to the standard deviation of the current posterior and capped at $K_{\rm err}$ (Eq. (32)). Roughly, this heuristic adapts to the expected noise in the circuit by not using any $k$ such that the implementation of $U^k$ takes longer than $T_{\rm err}/N_{\rm sys}$. It also adapts $k$ to the standard deviation of the current posterior probability distribution over $\phi_0$: a small standard deviation after the $n$th experiment implies that $k$ should be chosen large, to resolve the remaining bits in the binary expansion of $\phi_0$ [30].
In this work we use a starting state which is not an eigenstate, and as such we must adjust the choice in Eq. (32). As noted in Sec. II, to separate different frequency contributions to $g(k)$ we need good accuracy beyond that at a single value of $k$. The optimal choice of the number of frequencies to estimate depends on the distribution of the $A_j$, which may not be well known in advance. Following the inspiration of [10], for the Bayesian estimator we choose $k$ at random from $\{1, \ldots, K\}$ (Eq. (33)). We thus similarly bound $K$ depending on how well one has already converged to a value for $\phi_0$, which constitutes some saving of resources. At large $N$ we find numerically little difference between choosing $k$ at random from $\{1, \ldots, K\}$ and cycling through $k = 1, \ldots, K$ in order.
For this Bayesian estimator we draw $\beta$ at random from the uniform distribution on $[0, 2\pi)$. We find that the choice of $\beta$ has no effect on the final estimation (as long as it is not fixed to a single value). This makes sense, as the Fisher information obtained from QPE experiments is independent of $\beta$ [31]. For the time-series estimator applied to single-round experiments, we choose to cycle over $k = 1, \ldots, K$, so that a complete estimate of $g(k)$ is obtained as soon as possible, taking an equal number of experiments with final rotation $\beta = 0$ and $\beta = \pi/2$ at each $k$. Here again $K \leq K_{\rm err}$, and we choose the same number of experiments for each $k \leq K$. For the time-series estimator applied to multi-round experiments, we choose an equal number of rounds with $\beta = 0$ and $\beta = \pi/2$, taking the total number of rounds $R = K$.

III. RESULTS WITHOUT EXPERIMENTAL NOISE
We first focus on the performance of our estimators in the absence of experimental noise, to compare their relative performance and to check the analytic predictions of Sec. II A. Although in a noiseless experiment the limit on $K$ is technically infinite, we bound it here to make the connection with the noisy results of the following section. Throughout this section we generate results directly by calculating the function $P_{\vec k,\vec\beta}(\vec m|\vec\phi,\vec A)$ and sampling from it. Note that $P_{\vec k,\vec\beta}(\vec m|\vec\phi,\vec A)$ depends only on $N_{\rm eig}$ and not on the number of qubits $N_{\rm sys}$ in the system.

A. Single eigenvalues
To confirm that our estimators achieve the scaling bounds discussed previously, we test them initially on the single-eigenvalue scenario N eig = 1. In Fig. 3, we plot the scaling of the average absolute error (Eq. (34)) in an estimate φ̂ of a single eigenvalue φ ∈ [−π, π], over N experiments and varying K. We see that both estimators achieve the previously derived bounds of Sec. II A (overlaid as dashed lines), and both achieve almost identical convergence rates. The results for the Bayesian estimation match the scaling observed in Ref. [10]. Due to its worse scaling in K, the multi-round k = 1 estimation significantly underperforms single-round phase estimation. This is a key observation of this paper: if the goal is to estimate a phase rather than to project onto an eigenstate, it is always preferable to perform single-round experiments.

B. Two eigenvalues
The ability of QPE to resolve separate eigenvalues at small K can be tested in a simple scenario of two eigenvalues, φ 0 and φ 1 . The input to the QPE procedure is then entirely characterized by the overlap A 0 with the target state |φ 0 ⟩ and the gap δ = |φ 0 − φ 1 |.
In Fig. 4, we study the performance of our time-series estimator in estimating φ 0 after N = 10 6 experiments with K = 50, measured again by the mean error (Eq. (34)). We show a two-dimensional plot (averaged over 500 repetitions at each point (A 0 , δ)) and log-log plots of one-dimensional vertical (lower left) and horizontal (lower right) cuts through this surface. Due to computational costs, we are unable to perform this analysis with the Bayesian estimator, or for the multi-round scenario. We expect the Bayesian estimator to have similar performance to the time-series estimator (given their close comparison in Sec. III A). We also expect the error in multi-round QPE to follow similar scaling laws in A 0 and δ as single-round QPE (i.e. multi-round QPE should be suboptimal only in its scaling in K).
The ability of our estimator to estimate φ 0 in the presence of two eigenvalues can be split into three regions (marked as (a), (b), (c) on the surface plot). In region (a), we have performed insufficient sampling to resolve the eigenvalues φ 0 and φ 1 , and QPE instead estimates the weighted average phase A 0 φ 0 + A 1 φ 1 . The error in the estimation of φ 0 then scales with how far it is from this average, and how well the average is resolved. In region (b), we begin to separate φ 0 from the unwanted frequency φ 1 , and our convergence halts. In region (c), the gap is sufficiently well resolved and our estimation returns to scaling well with N and K. The scaling laws in all three regions can be observed in the various cuts in the lower plots of Fig. 4. We note that the transition between the three regions is not sharp (boundaries estimated by hand), and is K- and N-dependent.

C. Many eigenvalues
To show that our observed scaling is applicable beyond the toy 2-eigenvalue system, we now shift to studying systems of random eigenvalues with N eig > 1. In keeping with our insight from the previous section, in Fig. 5 we fix φ 0 = 0 and study the error as a function of the gap δ. We fix A 0 = 0.5, and draw the other parameters for the system at random from a uniform distribution. We plot both the average error (line) and the upper 47.5% confidence interval (shaded region, extending to two standard deviations above the mean) for various choices of N eig . We observe that increasing the number of spurious eigenvalues does not critically affect the error in estimation; indeed, the error generally decreases as a function of the number of eigenvalues. This makes sense: at large N eig the majority of eigenvalues sit in region (c) of Fig. 4, and we do not expect these to combine to distort the estimation. We further note that the worst-case error remains that of two eigenvalues at the crossover between regions (a) and (b). In App. D we study the effect of confining the spurious eigenvalues to a region [δ, φ max ]. We observe that when most eigenvalues are confined to regions (a) and (b), the scaling laws observed in the previous section break down; however, the worst-case behaviour remains that of a single spurious eigenvalue. This implies that a sufficiently large K is not a requirement for QPE, even in the presence of large systems or small gaps δ; it can be substituted by sufficient repetition of experiments. However, we do require that the ground state is guaranteed to have sufficient overlap with the starting state, A 0 > 1/K (as argued in Sec. II). Still, as QPE performance scales better with K than it does with N , a quantum computer with coherence time 2T is preferable to two quantum computers with coherence time T (assuming no coherent link between the two).

IV. THE EFFECT OF EXPERIMENTAL NOISE
Experimental noise currently poses the largest impediment to useful computation on current quantum devices. As we suggested before, experimental noise limits K: for K ≫ K err the circuit is unlikely to produce reliable results. However, noise on quantum devices comes in various flavours, which can have different corrupting effects on the computation. Some of these corrupting effects, in particular systematic errors, may be compensated for with good knowledge of the noise model. For example, if we knew that our system applied U = e −iH(t+δt) instead of U = e −iHt , one could divide φ̂ 0 by (t + δt)/t to precisely cancel this effect. In this study we limit ourselves to studying, and attempting to correct, two types of noise: depolarizing noise, and circuit-level simulations of superconducting qubits. Given the different effects observed, extending our results to other noise channels is a clear direction for future research. In this section we do not study multi-round QPE, so each experiment consists of a single round. A clear advantage of the single-round method is that the only relevant effect of any noise in a single-round experiment is to change the outcome of the ancilla qubit, independent of the number of system qubits N sys .

A. Depolarizing noise
A very simple noise model is that of depolarizing noise, where the outcome of each experiment is either correct with some probability p, or gives a completely random bit with probability 1 − p. We expect this probability p to depend on the circuit time, and thus on the choice of k ≥ 0, i.e. p = p(k) = e −k/K err . (39)
We can simulate this noise by directly applying it to the calculated probabilities P k,β (m|φ) for a single round. In Fig. 6, we plot the convergence of the time-series (blue) and Bayesian (green) estimators as used in the previous section, as a function of the number of experiments, with K = 50 = K err /2, A 0 = 0.5, N eig = 10 and δ = 0.5 fixed. We see that both estimators obey N −1/2 scaling for some portion of the experiment; however, this convergence is unstable, and stops beyond some critical point. Both the Bayesian and time-series estimators can be adapted rather easily to compensate for this depolarizing channel. To adapt the time-series analysis, we note that the effect of depolarizing noise is to send g(k) → g(k)p(k) when k > 0, via Eq. (23) and Eq. (40). Our time-series analysis was previously performed over the range k = −K, . . . , K (getting g(−k) = g * (k) for free), and over this range g(k) → g(k)p(|k|).
(41)
With this replacement, g(k) is no longer a sum of exponential functions over the interval [−K, K], as it is not differentiable at k = 0; this is the reason for the failure of our time-series analysis. However, over the interval [0, K] this is not an issue, and the time-series analysis may still be performed. If we construct a shift operator T using g(k) from k = 0, . . . , K, this operator has eigenvalues e iφ j −1/K err , and the complex arguments of these eigenvalues still give the correct phases φ j . We see that this is indeed the case in Fig. 6 (orange line). Halving the range of g(k) that we use to estimate φ 0 decreases the estimator performance by a constant factor, but this can be compensated for by increasing N . Adapting the Bayesian estimator simply requires that we use the correct conditional probability, Eq. (40). Our Fourier representation of the probability distribution of φ 0 is easily adjusted to this change. The results obtained using this compensation are shown in Fig. 6: we observe that the data again follows an N −1/2 scaling.
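The shift-operator construction described above can be sketched as a small Prony-style least-squares problem. The following is an illustrative reconstruction (function names are ours, not the authors' code), assuming g(k) = Σ j A j e iφ j k damped by p(k) = e −k/K err ; since the damping only shifts the eigenvalues of T radially, using only k ≥ 0 leaves their complex arguments, and hence the phases, intact:

```python
import numpy as np

def phases_from_g(g, n_eig):
    """Estimate eigenvalue phases from g(k), k = 0..K, via a shift operator.

    Build Hankel matrices G0, G1 from g, solve G0 T = G1 in least squares,
    and read the phases off the eigenvalues of T.  A uniform depolarizing
    damping g(k) -> g(k) * exp(-k/K_err) leaves these phases unchanged.
    """
    K = len(g) - 1
    rows = K - n_eig + 1
    G0 = np.array([g[r:r + n_eig] for r in range(rows)])
    G1 = np.array([g[r + 1:r + 1 + n_eig] for r in range(rows)])
    T, *_ = np.linalg.lstsq(G0, G1, rcond=None)
    return np.sort(np.angle(np.linalg.eigvals(T)))

# Example: two phases with depolarizing damping exp(-k/K_err).
phis, amps, K_err = np.array([0.4, 1.5]), np.array([0.6, 0.4]), 50.0
k = np.arange(0, 51)
g = (amps[None, :] * np.exp(1j * np.outer(k, phis))).sum(axis=1) * np.exp(-k / K_err)
print(phases_from_g(g, n_eig=2))  # ~ [0.4, 1.5]
```

With noiseless g(k) the recurrence is exact and the recovered phases are accurate to machine precision; sampling noise on g(k) degrades this gracefully.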

B. Realistic circuit-level noise
Errors in real quantum computers occur at the circuit level, where individual gates or qubits are corrupted via various error channels. To make connection to current experiments, we investigate our estimation performance on an error model of superconducting qubits. Full simulation details can be found in App. E. Our error model is primarily dominated by T 1 and T 2 decoherence, incoherent two-qubit flux noise, and dephasing during single-qubit gates. We treat the decoherence time T err = T 1 = T 2 as a free scale parameter to adjust throughout our simulations, whilst keeping all other error parameters tied to this single scale parameter for simplicity. In order to apply circuit-level noise we must run quantum circuit simulations, for which we use the quantumsim density-matrix simulator first introduced in Ref. [32]. We choose to simulate estimating the ground state energy of four hydrogen atoms in varying rectangular geometries, with the Hamiltonian H taken in the STO-3G basis calculated via psi4 [33], requiring N sys = 8 qubits. We make this estimation via a lowest-order Suzuki-Trotter approximation [34] to the time-evolution operator e −iHt . To prevent energy eigenvalues wrapping around the circle we fix t = 1/‖H‖ F , with ‖H‖ F = (Trace[H † H]/2 Nsys ) 1/2 [35]. The resultant 9-qubit circuit is made using the OpenFermion package [9].
In lieu of any circuit optimizations (e.g. [15,36]), the resulting circuit has a temporal length per unitary of T U = 42 µs (with single- (two-) qubit gate times of 20 ns (40 ns)). This makes the circuit unrealistic to operate at current decoherence times for superconducting circuits, and we focus on decoherence times 1 − 2 orders of magnitude above what is currently feasible, i.e. T err = 5 − 50 ms. However, one may anticipate that the ratio T U /T err can be reduced by circuit optimization or qubit improvement. Naturally, choosing a smaller system of fewer than 8 qubits, or using error mitigation techniques, could also be useful.
We observe realistic noise to have a somewhat different effect on both estimators than a depolarizing channel. Compared to the depolarizing noise, the noise may (1) be biased towards 0 or 1 and/or (2) its dependence on k may not have the form of Eq. (39).
In Fig. 7, we plot the performance of both estimators at four different noise levels (and a noiseless simulation for comparison), in the absence of any attempts to compensate for the noise. Unlike for the depolarizing channel, where N −1/2 convergence was observed for some time before the estimator became unstable, here we see both instabilities and a loss of the N −1/2 decay from the outset. Despite this, we note that reasonable convergence (to within 1 − 2%) is achieved, even at relatively low coherence times such as K err = 10. (The time-series analysis requires N > 2K experiments in order to produce an estimate, and so its performance is not plotted for N < 100.) Regardless, this lack of convergence is worrying, and we now shift to investigating how well it can be improved for either estimator.
Adjusting the time-series estimator to use only g(k) for positive k gives approximately 1 − 2 orders of magnitude improvement. In Fig. 8, we plot the estimator convergence with this method. We observe that the estimator is no longer unstable, but the N −1/2 convergence is never properly regained. We may study this convergence in greater detail for this estimator, as we may extract g(k) directly from our density-matrix simulations, and thus investigate the estimator performance in the absence of sampling noise (crosses in Fig. 8). We note that similar extrapolations in the absence of noise, or in the presence of (compensated) depolarizing noise, give an error rate of around 10 −10 , which we associate with floating-point error in the solution to the least-squares problem (this is also observed in the curve without noise in Fig. 8). Plotting this error as a function of K err shows a power-law decay ∝ K err −α ∝ T err −α with α = 1.9 ≈ 2. We do not have a good understanding of the source of this power law.
The same compensation techniques that restored the performance of the Bayesian estimator in the presence of depolarizing noise do not work nearly as well for realistic noise. Most likely this is because the actual noise is not captured by a k-dependent depolarizing probability. In Fig. 9, we compare the results of Fig. 7 to those of the Bayesian estimator with noise compensation. We observe a factor 2 improvement at low T err ; however, the N −1/2 scaling is not regained, and indeed the estimator performance appears to saturate at roughly this point. Furthermore, at T err = 50 ms, the compensation techniques do not improve the estimator, and indeed appear to make it more unstable. To investigate this further, in Fig. 9 (inset) we plot a Bayes-factor analysis of the Bayesian estimators with and without compensation techniques. This analysis is obtained by calculating the Bayes factors F = P(m|M)/P(m|M 0 ), where M is the chosen Bayesian model (including the prior knowledge), M 0 is a reference model, and P(m|M) is the probability of observing the measurement record m given model M. As a reference model we take that of random noise, P(m|M 0 ) = 0.5. We observe that at large T err the Bayes factor with compensation falls below that without, implying that the compensation techniques make the model worse. We also observe that at very small T err , the estimator makes worse predictions than random noise (log(F) < 0). Despite our best efforts we have been unable to further improve the Bayesian estimator in noisy single-round QPE experiments.

V. DISCUSSION
In this work, we have presented and studied the performance of two estimators for quantum phase estimation at low K for different experiment protocols, different systems (in particular those with one vs. many eigenvalues), and under simplistic and realistic noise conditions. From our numerical studies, we observe scaling laws for our time-series estimator: we find it first-order sensitive to the overlap A 0 between the starting state and the ground state, second-order sensitive to the gap between the ground state and the nearest eigenstates, and second-order sensitive to the coherence time of the system. The Bayesian estimator appears to perform comparably to the time-series estimator in all circumstances, and thus should obey similar scaling laws.
We further observe that realistic noise has a worse effect on QPE than a depolarizing channel, for which the effects can largely be mitigated. We have numerically explored (but not reported) multi-round QPE in the presence of noise. Since each experiment has multiple outputs, it is harder to adapt the classical data analysis to the presence of noise, and our results for realistic noise have not been convincing so far. Since the performance of multi-round noiseless QPE is already inferior to single-round noiseless QPE, we do not advocate it as a near-term solution, although for noiseless long circuits it does have the ability to project onto a single eigenstate, which single-round QPE certainly does not.
Despite our slightly pessimistic view of the effect of errors on the performance of QPE, we should note that the obtained error of 10 −3 at T err ≈ 13N sys T U or K err = 13 would be sufficient to achieve chemical accuracy in a small system. However, as the energy of a system scales with the number of particles, if we require a Hamiltonian's spectrum to fit in [−π, π], we will need a higher resolution for QPE, making error rates of 10 −3 potentially too large. This could be improved by refining the compensation techniques described in the text, applying error mitigation techniques to effectively increase T err , or using more well-informed prior distributions in the Bayesian estimator. All of the above are obvious directions for future work in optimizing QPE for the NISQ era. Another possible direction is to investigate QPE performance in error models other than the two studied here. Following Ref. [6], we expect SPAM error to be as innocuous as depolarizing noise. However, coherent errors can be particularly worrying, as they imitate alterations to the unitary U . The time-series estimator is a clear candidate for such a study, due to its ease in processing a large number of experiments and its ability to predict its own performance in the absence of sampling noise. We also expect that it is possible to combine the time-series estimator with the Heisenberg-limited scaling methods of Refs. [6,23] so as to extend these optimal methods to the multiple-eigenvalue scenario with N eig > 1 eigenvalues, and that these methods could be extended to analog or ancilla-free QPE settings such as described in Ref. [6].
In this work we do not compare the performance of quantum phase estimation with purely classical methods: such methods could be competitive for small systems, since they could simply try to extract φ 0 from calculating Tr[H k |Ψ⟩⟨Ψ|] for small k, given a classical description of |Ψ⟩. In particular, when quantum circuits are limited to constant depth, implying that for generic U one can have at most K = O(1) not scaling with N sys , the QPE method is unlikely to offer computational advantages.

ACKNOWLEDGMENTS

This research was supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via the U.S. Army Research Office grant W911NF-16-1-0071. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.
Appendix A: Derivation of the identity in Eq. (25)

One first writes, for 0 ≤ k ≤ K/2, an expansion in which P(m 1 , . . . , m K/2 , n 1 , . . . , n K/2 |φ, A) is the probability of a specific series of outcomes m 1 , . . . , m K/2 for β = 0 and n 1 , . . . , n K/2 for β = π/2. To see that this expansion holds, note that it is easily verified for N eig = 1 by using Eq. (23) for g(1); by linearity of the left- and right-hand sides it then holds generally. Since the order of the outcomes of the rounds does not matter, i.e. P(m 1 , . . . , m K/2 , n 1 , . . . , n K/2 |φ, A) only depends on the Hamming weights m = |m| and n = |n|, we can symmetrize the coefficient over permutations of the rounds and replace P(m 1 , . . . , m K/2 , n 1 , . . . , n K/2 |φ, A) by P(m, n|φ, A)/[(K/2 choose m)(K/2 choose n)]. This gives an expression for χ k (m, n) in which m i is the ith bit of a bitstring with Hamming weight m (and similarly n i ), and S K/2 is the symmetric group of permutations. The sum over permutations π for which m π(1) . . . m π(l) is even can be written as a sum over permutations such that m π(1) , . . . , m π(l) has Hamming weight 2p with p = 0, 1, . . . , ⌊l/2⌋. One then counts the number of permutations of a K/2-bitstring of Hamming weight m such that some segment of length l has Hamming weight 2p, which equals (m choose 2p)(K/2 − m choose l − 2p) l! (K/2 − l)!. Altogether this leads to χ k (m, n) in Eq. (25). It is not clear whether one can simplify this equality or verify it directly using other combinatorial identities or (Chebyshev) polynomials.

Appendix B: Variance calculations for time-series estimator
For the case of estimating a single eigenvalue using single-round QPE with the time-series estimator, one can directly calculate the error in the estimation. In this situation, our matrices G 0 and G 1 are column vectors, G 1 T = (g(−K + 1), g(−K + 2), . . . , g(K)).

The least-squares solution for T is then T = (G 0 † G 0 ) −1 G 0 † G 1 . For a single frequency, g(k) = e ikφ , and immediately T = e iφ . However, we estimate the real and imaginary components of g(k) separately. Let us therefore write T = T r + iT i in terms of the independent components g 0 k = Re g(k) and g 1 k = Im g(k), remembering that g 0 k = g 0 −k and g 1 k = −g 1 −k (i.e. the variables are correlated). Our target angle is φ = tan −1 (T i /T r ), and so we can calculate its variance by propagating the variances of the g a k . Expanding out the real and imaginary components of T , we can calculate their derivatives (recalling again that g 0 k = g 0 −k and g 1 k = −g 1 −k ).

(B9)
Substituting in for the g a k , we find that everything precisely cancels except at k = K: the derivatives ∂T r /∂g 0 k and ∂T i /∂g 1 k vanish for k < K. If g a K is estimated with N shots, we expect Var[g a K ] = 1/N , and the variance of φ̂ follows. As described in Sec. II A 1, for multi-round experiments we weight the least-squares inversion as per Eq. (27). This weighting adjusts the g a k values in Eqs. (B8, B9) so that ∂φ/∂g a k is no longer zero when k < K. The sum over k in Eq. (B5) then lends an extra factor of K to the variance, reducing its scaling from K −2 to K −1 .

Appendix C: Fourier representation for Bayesian updating

For simplicity, we first consider the case when the starting state is a single eigenstate |φ j ⟩. After each multi-round experiment we would like to update the probability distribution P(φ j = φ), i.e. P n (φ) = [P k,β (m|φ)/P(m)] P n−1 (φ). We represent the 2π-periodic probability distribution P n (φ) by a Fourier series with a small number N freq of Fourier coefficients, which are updated after each experiment; we collect these coefficients as an N freq -component vector p. The Fourier representation has the advantage that integration is trivial, i.e. ∫ π −π P(φ)dφ = 2πp 0 , so that the probability distribution is easily normalized. In addition, the current estimate φ̂ is easy to obtain: φ̂ = arg(⟨e iφ ⟩ P ) = arg(p 2 + ip 1 ).
Note that the variance in our estimation of any eigenvalue φ is bounded from above by π 2 . The other advantage of the Fourier representation is that a single round in an experiment is the application of a sparse matrix to p. One has P(φ) → P kr,βr (m r |φ)P(φ) = cos 2 (k r φ/2 + γ/2)P(φ), where γ = β r + m r π. The coefficients of the sparse update matrices M 0,1 (k r ) can be calculated using the double-angle formula cos 2 (kφ/2 + γ/2) cos(jφ) = 1/2 cos(jφ) + 1/4 cos((j + k)φ + γ) + 1/4 cos((j − k)φ − γ), and the analogous identity for cos 2 (kφ/2 + γ/2) sin(jφ). When j ≤ k, one has to account for the sign change in sin((j − k)φ). For a multi-round experiment with R rounds, one thus applies R such sparse matrices to the vector p. Note that each round with given k r requires at most k r more Fourier components; hence an experiment with at most K controlled-U applications adds at most K Fourier components. Thus, when the total number of unitary rotations summed over all experiments, K tot = Σ r k r , exceeds N freq , our representation of the distribution is no longer accurate. When K tot ≤ N freq , on the other hand, it will be accurate.
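The sparse Fourier-coefficient update above can be sketched as follows. This illustration (ours, not the authors' implementation) uses complex coefficients c j with P(φ) = Σ j c j e ijφ , equivalent to the real vector p up to a change of basis; the likelihood cos 2 (kφ/2 + γ/2) then acts as a three-term convolution on the coefficients:

```python
import numpy as np

def bayes_update(c, k, beta, m):
    """One single-round Bayesian update of the Fourier coefficients of P(phi).

    c[j + N] stores the coefficient c_j of e^{i j phi}, j = -N..N.  Since
    cos^2(k*phi/2 + gamma/2) = 1/2 + (e^{i(k*phi+gamma)} + c.c.)/4 with
    gamma = beta + m*pi, the update is
      c_j -> c_j/2 + e^{+i gamma} c_{j-k}/4 + e^{-i gamma} c_{j+k}/4,
    after which we renormalize so that the integral 2*pi*c_0 equals 1.
    """
    gamma = beta + m * np.pi
    new = c / 2
    new[k:] += np.exp(1j * gamma) * c[:-k] / 4
    new[:-k] += np.exp(-1j * gamma) * c[k:] / 4
    N = (len(c) - 1) // 2
    return new / (2 * np.pi * new[N].real)

# Simulate estimating a single eigenvalue phi_true from random experiments.
rng = np.random.default_rng(1)
N = 1000                               # frequency cutoff N_freq
c = np.zeros(2 * N + 1, dtype=complex)
c[N] = 1 / (2 * np.pi)                 # uniform prior on [-pi, pi]
phi_true = 0.7
for _ in range(300):
    k, beta = int(rng.integers(1, 4)), rng.uniform(0, 2 * np.pi)
    m = int(rng.random() > (1 + np.cos(k * phi_true + beta)) / 2)
    c = bayes_update(c, k, beta, m)
# Current estimate: phi_hat = arg(<e^{i phi}>_P) = arg(2 pi c_{-1}).
print(np.angle(c[N - 1]))  # ~ 0.7
```

Here the total number of rotations (at most 900) stays below N freq = 1000, so the representation remains exact throughout, as discussed above.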

Bayesian updating for multi-eigenvalue starting state
In this section we detail the method by which we store the distributions P j n (φ j ) and P red n (A) of Eq. (31) and perform the Bayesian update of Eq. (30). We do so by representing the marginal probabilities P j n (φ j ) by a Fourier series with a small number of Fourier coefficients, updated after each experiment as shown in the previous section. We assume that there are at most N eig coefficients A j > 0, and thus at most N eig phases φ j .
From our independence assumption, individual updates of P j (φ j ) may be calculated by integrating out the other unknown variables in Eq. (30). Expanding the conditional probability of Eq. (10) and rewriting leads to a form involving the coefficients B j = ∫ dA P red n−1 (A) A j ; here we have used that ∫ dφ l P l n−1 (φ l ) = 1. One can concisely collect the B j as the components of a vector B. Computing Eq. (30) then involves creating an 'update' distribution for each φ j , calculating the integral of each distribution, and then forming the new distribution as a weighted sum of the 'update' distributions.
Calculating the distribution P red n (A) is complicated slightly by the restriction that Σ j A j = 1, A j ≥ 0, meaning that we cannot assume the distributions of the individual A j are uncorrelated. The marginal probability distribution is expressed in terms of the vector q n−1 , whose jth component is the integral (q n−1 ) j = ∫ dφ j P j n−1 (φ j ) Π r P kr,βr (m r |φ j ). (C10) As A only enters our estimation through the vector B = (B 0 , . . . , B Neig ), we only need to approximate this value. Assuming we know the marginal probabilities P j n (φ j ) for all experiments n = 1, . . . , N , we can estimate B after all experiments by the maximum-likelihood value A (max) . Evaluating this for up to N = 1000 experiments, taking N freq = 10000 frequency components for N eig = 2 eigenvalues, takes less than a second on a laptop using a method such as sequential least-squares programming [37]. Beyond this, however, it becomes fairly computationally intensive. Thus, after N > N cut−off = 100 experiments have been performed, we switch to a local optimization method. We determine the optimal B n after n experiments from its prior value B n−1 via a single step of an approximate Newton's method, where ∇f(A) is the first derivative of f at A, H is the Hessian matrix of f, i.e. H ij = ∂ Ai ∂ Aj f(A), and Π[A] is the projector onto the plane Σ j A j = 1, so that the update preserves the normalization. We approximate the gradient at each step as coming only from the newly added term.
The Hessian is likewise a sum over experiments, but we approximate it at the nth step by adding only the contribution of the nth experiment. This approximation allows H to be updated without summing over each experiment.
With the above implemented, we observe that our estimator can process data from N = 10,000 experiments to estimate N eig = 2 eigenvalues with N freq = 20,000 Fourier components within approximately two minutes on a laptop. Unfortunately, this method scales as N 2 , as the number of frequencies required for accurate estimation grows with the total number of unitaries applied.
As the mean, variance and integration calculations only require the first few frequencies of the distribution, it may be possible to reduce this cost by finding approximation techniques for higher frequency components.
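The global maximum-likelihood step for the overlaps A (used before switching to the approximate Newton update) can be sketched with SciPy's SLSQP solver, which implements the sequential least-squares programming mentioned above. This is an illustrative reconstruction, not the authors' code: q stands in for the per-experiment integrals of Eq. (C10), and the data below are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def max_likelihood_amps(q):
    """Maximum-likelihood overlaps A on the simplex sum_j A_j = 1, A_j >= 0.

    q[n, j] is the per-eigenvalue likelihood of experiment n.  The
    log-likelihood f(A) = sum_n log(q_n . A) is concave in A, so the
    constrained optimum found by SLSQP is global.
    """
    n_eig = q.shape[1]
    res = minimize(
        lambda a: -np.sum(np.log(q @ a)),          # negative log-likelihood
        x0=np.full(n_eig, 1.0 / n_eig),            # start from uniform overlaps
        method="SLSQP",
        bounds=[(1e-9, 1.0)] * n_eig,
        constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}],
    )
    return res.x

# Synthetic two-eigenvalue data with true overlaps (0.7, 0.3).
rng = np.random.default_rng(2)
A_true = np.array([0.7, 0.3])
p = rng.uniform(0.0, 1.0, size=(2000, 2))   # per-eigenvalue P(m = 0)
m = rng.random(2000) > p @ A_true            # sampled binary outcomes
q = np.where(m[:, None], 1 - p, p)           # per-eigenvalue likelihood of m
A_hat = max_likelihood_amps(q)
print(A_hat)  # ~ [0.7, 0.3]
```

The concavity of the log-likelihood is what makes the subsequent single-step Newton approximation a reasonable local substitute once N grows large.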
Appendix D: Convergence of the (noiseless) time-series analysis in case of multiple eigenvalues.
In this section we present an expansion of Fig. 5, namely Fig. 10, by drawing the spurious eigenvalues φ j from a range closer to the target eigenvalue φ 0 . This negates the drop in estimation error observed in Fig. 5 that was caused by the majority of eigenvalues lying in region (c) of Fig. 4. We observe that for certain gaps δ, multiple eigenvalues confined to a thin region [δ, φ max ] can have a worse effect on our ability to estimate φ 0 than a single eigenvalue at δ. However, this loss in accuracy does not get critically worse with the addition of more eigenvalues, nor is it worse than the worst-possible estimation with two eigenvalues.

Appendix E: Simulation details

We study four hydrogen atoms in rectangular geometries (the four H atoms are at the positions (±d x /2, ±d y /2, 0)). We calculate the Hartree-Fock and full-CI solutions to the ground state using the psi4 package [33] with the openfermion interface [9]. This allows us to calculate the true ground state energy E 0 for each geometry, and the overlap A 0 between the ground state and the Hartree-Fock state, which we choose as our starting state |Ψ⟩. Due to symmetry and particle number conservation, |Ψ⟩ has non-zero overlap with only 8 eigenstates of the full-CI solution, separated from the ground state by a minimum gap δ. (When d x = d y , the true ground state of H 4 is actually orthogonal to the Hartree-Fock state, and so we do not include any such geometries in our calculation.) The full error in our calculation of the energy (at a fixed geometry) is then a combination of three separate contributions: basis-set error (i.e. from the choice of orbitals), Trotter error, and the estimator error studied in this work (which includes error from experimental noise). The Trotter error is reasonably large due to our use of only the first-order Suzuki-Trotter approximation U = Π i e −iH i t ≈ e −iHt . Higher-order Suzuki-Trotter expansions require longer quantum circuits, which in turn increase the estimator error from experimental noise.
Balancing these two competing sources of error is key to obtaining accurate calculations and a clear target for future study. In Tab. I, we list some parameters of interest for each studied geometry. We normalize the gap and the Trotter error by the Frobenius norm ‖H‖ F = (Trace[H † H]/2 Nsys ) 1/2 , as we chose an evolution time t = 1/‖H‖ F , making this the relevant scale for comparison with the scaling laws and errors calculated in the text.
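The choice of evolution time can be written out directly; a minimal sketch, assuming the normalized Frobenius norm stated above:

```python
import numpy as np

def evolution_time(H):
    """Evolution time t = 1 / ||H||_F for an n-qubit Hamiltonian H, where
    ||H||_F = sqrt(Trace[H^dag H] / 2^n) is the normalized Frobenius norm
    used in the text as the relevant energy scale."""
    n_qubits = int(round(np.log2(H.shape[0])))
    h_frob = np.sqrt(np.trace(H.conj().T @ H).real / 2 ** n_qubits)
    return 1.0 / h_frob

# Example: for the single-qubit Hamiltonian H = Z, ||H||_F = 1 and t = 1.
Z = np.diag([1.0, -1.0])
print(evolution_time(Z))  # 1.0
```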

Error model and error parameters
Throughout this work we simulate circuits using an error model of superconducting qubits first introduced in Ref. [32], with parameters either observed in experimental data or estimated via theory calculations. All error channels used are listed in Tab. II, with all parameters taken from Ref. [32] (with the exception of the 1/f flux noise, which is made incoherent as described in the text); we now describe them in further detail. Transmon qubits are dominated primarily by decoherence, which is captured via T 1 and T 2 channels [4]. Typical T 1 and T 2 times in state-of-the-art devices are approximately 10 − 100 µs. As other error parameters are derived from experimental results on a device with T 1 = T 2 ≈ 30 µs, we take these as a base set of parameters [38,39]. Single-qubit gates in transmon qubits incur slight additional dephasing due to inaccuracies or fluctuations in microwave pulses. We assume such dephasing is Markovian, in which case it corresponds to a shrinking of the Bloch sphere along the axis of rotation by a factor 1 − p axis , and in the perpendicular plane by a factor 1 − p plane . We take typical values for these parameters as p axis = 10 −4 , p plane = 5 · 10 −4 [32].
Two-qubit gates in transmon qubits incur dephasing due to 1/f flux noise. Assuming that the phase in an ideal C-Phase gate G = diag(1, 1, 1, e iφ ) is controlled by adjusting the time of application, this suggests a model in which the applied gate picks up a phase error δ flux , drawn from a normal distribution around 0 with standard deviation σ flux . One can estimate σ flux ≈ 0.01 rad for a typical gate length of 40 ns [32]. The noise is in general non-Markovian, as δ flux fluctuates on a longer timescale than a single gate. However, to make the simulation tractable, we approximate it as Markovian.
The Pauli transfer matrix of this averaged channel [40] can then be computed, where the Pauli transfer matrix of a channel G is given by Λ[G] i,j = Tr[σ i G(σ j )]. During qubit readout, we assume that the qubit is completely dephased and projected into the computational basis. We then allow for a T meas = 300 ns period of excitation and de-excitation (including that from T 1 decay), during which the qubit state is copied onto a classical bit. This copying is also assumed to be imperfect, with some probability of returning the wrong result. The qubit then has an additional T dep = 300 ns waiting period before it may participate in gates again (to allow resonator photons to deplete [38]), over which additional excitation and de-excitation may occur. Though simple, this description is an accurate model of experimental results. Typically, experiments do not observe measurement-induced excitation to the |1⟩ state, but do observe measurement-induced decay [32]. Typical values of such decay probabilities are 0.005 prior to the copy procedure, and 0.015 after.
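The Pauli-transfer-matrix definition used above can be checked numerically with a few lines of numpy. This small sketch adds the conventional 1/2 normalization for one qubit (so that the identity channel maps to the 4 × 4 identity matrix), and uses a simple dephasing channel as an example:

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
paulis = [I, X, Y, Z]

def ptm(channel):
    """Pauli transfer matrix Lambda_{i,j} = Tr[sigma_i channel(sigma_j)] / 2.

    channel maps a 2x2 matrix to a 2x2 matrix; the 1/2 factor normalizes
    the single-qubit Pauli basis so that ptm(identity) = eye(4).
    """
    return np.array([[np.trace(si @ channel(sj)).real / 2
                      for sj in paulis] for si in paulis])

# Example: dephasing with probability p shrinks the X and Y components.
p = 0.1
dephase = lambda rho: (1 - p) * rho + p * (Z @ rho @ Z)
print(np.round(ptm(dephase), 3))
# diag(1, 0.8, 0.8, 1): a Bloch-sphere shrink by 1 - 2p in the XY plane
```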
Though reasonably accurate, this error model does fail to capture some details of real experimental systems. In particular, we do not include leakage to the |2 state, which is a dominant source of two-qubit gate error. Furthermore, we have not included cross-talk between qubits.
To study the effect of changing noise levels while staying as true as possible to our physically-motivated model, we scale our noise parameters by a dimensionless parameter λ such that the contribution from each error channel to the simulation remains constant. In Tab. II we show the power of λ that each error term is multiplied by during this scaling. We report T err := T 1 = T 2 in the main text instead of λ to make connection to parameters regularly reported in experimental works.