Subspace confinement: how good is your qubit?

The basic operating element of standard quantum computation is the qubit, an isolated two-level system that can be accurately controlled, initialized and measured. However, the majority of proposed physical architectures for quantum computation are built from systems that contain much more complicated Hilbert space structures. Hence, defining a qubit requires the identification of an appropriate controllable two-dimensional sub-system. This prompts the obvious question of how well a qubit, thus defined, is confined to this subspace, and whether we can experimentally quantify the potential leakage into states outside the qubit subspace. We demonstrate how subspace leakage can be characterized using minimal theoretical assumptions by examining the Fourier spectrum of the Rabi oscillation experiment.


Introduction
The issue of subspace confinement for qubit systems is fundamental to the primary operating assumptions of quantum processors. The concepts of universality, quantum gate operations, algorithms, error correction and fault-tolerant computation hinge on the precept that the fundamental quantum system is an isolated, controllable, two-dimensional (2D) system (qubit).
It is well known that most physical realizations of qubits are in fact multi-level quantum systems, which can theoretically be confined to a 2D (qubit) subspace. Important examples range from superconducting qubits [1]–[3] to atomic systems such as cavity-coupled color centers [4]–[6] and ion traps [7]. In the former systems, a qubit is generally defined as the subspace (of the full Hilbert space) spanned by the two lowest energy states in an arbitrarily shaped potential, such as the washboard potential of current-biased Josephson junctions [8, 9]. However, the number of valid quantum states within each well is not limited to two, and quantum gates, especially if sub-optimally implemented, may inadvertently populate other confined states. Hence, a more stringent definition of a qubit would be a two-level quantum system with classical control confined to the unitary group SU(2).
The ability to initialize, operate and measure completely within the two-level subspace representing 'the qubit' is vital to the successful operation of any large-scale device constructed from such quantum systems. Standard quantum error correction (QEC) protocols [10]–[12] generally assume that the qubit system is precisely confined to the two-level subspace and that all quantum gates operate only on the qubit degrees of freedom. If poor control or environmental influences inadvertently result in non-zero population of higher levels, leakage correction protocols are necessary.
The issue of subspace leakage in quantum processing has been addressed in depth from the standpoint of error correction. Work by Lidar [13, 14] examined the construction of leakage reduction units (LRUs), which use modified pulsing techniques to ensure that any unitary dynamics outside the qubit subspace can be compensated for; this approach has been adapted specifically for superconducting systems [15]. Another type of LRU uses quantum teleportation [16] to map a multi-level quantum state back to a freshly initialized two-level qubit. Finally, active detection, such as non-demolition measurements (which detect population in non-qubit states without discriminating between the qubit states), can be performed on the system [17]–[20]; if an out-of-subspace detection event occurs, the leaked qubit is re-initialized or replaced. The inclusion of LRUs based on teleportation has been investigated within the context of fault-tolerant computation [21], where it was shown that, in principle, leakage protection does not adversely affect large-scale concatenated error correction.
Although these schemes are viable methods to detect and correct for improperly confined qubit dynamics, they can be cumbersome to implement, and many systems admit, in principle, sufficiently confined Hamiltonian dynamics that leakage could be expected to be heavily suppressed. For example, for ion-trap qubits controlled by lasers, leakage to other ionic states can be made negligible by employing very finely tuned lasers and sufficiently long (and possibly optimally tailored) control pulses. Advances in qubit engineering may therefore allow us to eliminate, or at least substantially reduce, the need for laborious leakage detection/prevention schemes in many cases, provided that we can experimentally ascertain sufficiently high confinement of manufactured qubits under classically controlled Hamiltonian dynamics.
In this paper, we present a simple generic protocol to estimate qubit confinement or, more precisely, to establish bounds on the subspace leakage rates, for 'quality control' purposes. The main goal is to allow us to empirically detect inferior qubits by using readily obtainable experimental data to derive tight bounds on the subspace leakage of the system. This protocol would represent one of the first steps towards full system characterization [22]–[25].
Section 2 briefly outlines the basic assumptions with respect to the measurement and control model and the motivation for the proposed protocol. Section 3 discusses the basic mathematical properties of qubit oscillation data and shows how a minimal amount of information obtained from the oscillation spectrum can be used to derive empirical bounds on the subspace leakage rate, and that these bounds are very tight for the high quality qubits required for practical quantum computation. In section 4, the effects of finite sampling are considered and studied using numerical simulations. Section 5 compares the efficiency of bounding confinement using the proposed scheme versus alternative approaches such as detection of imperfect confinement by identifying additional transition peaks within the Rabi spectrum. Finally, section 6 briefly examines the effects of decoherence.

Motivation and preliminaries
Estimation of qubit confinement represents one of the first major steps in full qubit characterization. Therefore, the protocol should not be predicated on the availability of sophisticated measurements or control, and should be amenable to automation so that it could be used in conjunction with a potentially automated qubit manufacturing process. The bounds on the subspace leakage will be based on the observable qubit evolution under an externally controlled driving Hamiltonian. We assume that our classical control switches on the single-qubit dynamics and that the governing Hamiltonian is piecewise constant in time. Hence, the Hamiltonian induces the unitary operator U(t) = e^{−iHt}, with ħ = 1.

Although this assumption may not be applicable to all systems, e.g. systems subject to ultra-fast tailored control pulses, it is not as restrictive as it might appear. It is generally valid for systems such as quantum dots or Josephson junctions subject to external potentials created by voltage gates if the gate voltages are (approximately) piecewise constant. It is also a good approximation for systems subject to time-dependent fields such as laser pulses in a regime where the rotating wave approximation (RWA) is valid and the pulse envelopes can be approximated by square-waves. In this case, the Hamiltonian relevant for our purposes is the (piecewise constant) RWA Hamiltonian determined by the amplitudes, detunings and possibly phases of the control pulses. This model can even be valid for other pulse shapes if the Hamiltonian is taken to be an average Hamiltonian describing the effective dynamics on a certain timescale (beyond which we do not resolve the time-dependent dynamics). However, the main focus of the paper is not when the dynamics of a system can be modeled in this way, but rather how to assess subspace confinement for systems where this model of the dynamics is valid.
Assuming the effective control-dependent Hamiltonian H = H[f] is constant for 0 ≤ t ≤ t_k, where f is the classical 'control knob' parameter, the evolution during this time period is given by the unitary operator U(t) = e^{−iHt}. Although H will generally depend on the control inputs, we shall omit this dependence in the following for notational convenience. The driven system generally undergoes coherent oscillations, which are often referred to as Rabi oscillations, especially for optically driven systems in the RWA regime. Although our model is not limited to such systems, we shall use the terms coherent oscillations and Rabi oscillations interchangeably throughout this paper.
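As a minimal numerical sketch of this model (not part of the original analysis; the Hamiltonian below is hypothetical), the coherent oscillations induced by a constant Hermitian H follow directly from its eigendecomposition:

```python
import numpy as np

def survival_probability(H, times):
    """p0(t) = |<0| e^{-iHt} |0>|^2 for a constant Hermitian H (hbar = 1).

    Diagonalizing H gives <0|U(t)|0> = sum_a |c_a|^2 e^{-i lambda_a t},
    where c_a = <lambda_a|0> are the overlaps of |0> with the eigenstates.
    """
    evals, evecs = np.linalg.eigh(H)
    c2 = np.abs(evecs[0, :]) ** 2                    # |<lambda_a|0>|^2
    amp = np.exp(-1j * np.outer(times, evals)) @ c2  # <0|U(t)|0>
    return np.abs(amp) ** 2

# A resonant two-level Hamiltonian (hypothetical coupling strength):
H2 = np.array([[0.0, 1.0], [1.0, 0.0]])
t = np.linspace(0.0, 2 * np.pi, 200)
p0 = survival_probability(H2, t)
# For H = sigma_x this gives p0(t) = cos^2(t): full-contrast oscillations.
```

The same routine works for any N-level H, which is what makes it useful for the leakage analysis below.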
The measurement model assumed is crucial to the relevance of the protocol. Some standard measurement models in quantum computation assume the ability to detect both the |0⟩ and |1⟩ states independently (such as SET detectors in solid-state designs [26]–[28]). In this case, estimating subspace leakage is fairly straightforward and requires only repeated measurement of the system while it undergoes evolution: the leakage is simply given by the deviation of the cumulative probability of measuring |1⟩ or |0⟩ from unity. However, this measurement model is not realistic for the majority of proposed systems.
Color centers and ionic qubits use externally pumped transitions to discriminate between a light state (≡ |0⟩) and other 'dark' states, while readout in superconducting systems [29, 30] involves lowering a potential barrier such that only one of the qubit states can leak to an external detection circuit. The measurement outcome of the indirectly probed state is inferred from the non-detection of the directly measured state, and for such measurement models estimating confinement is more complicated. Hence, this paper utilizes the latter model in order to quantify confinement. It should be noted that we are not considering weak measurements; in each case we assume that the measurement causes a full projective collapse of the wavefunction. We also assume that the measurement apparatus has been sufficiently characterized. In order to successfully implement computation, the readout fidelity should ideally be of the same order as the general systematic and decoherence errors. Therefore, characterization is initially required to ascertain the error rate associated with measurement, which can then be incorporated into the calculations of confinement.
Strong non-qubit transitions can still be identified directly via modulations in the Rabi oscillation data, as shown in figure 1(b) for a three-state system evolving under the trial Hamiltonian. However, the Rabi oscillation data for the modified three-state Hamiltonian, depicted in figure 1(a), show that an apparent lack of modulation in the Rabi oscillation data is not proof of perfect confinement, and that quantitative measures of confinement or subspace leakage, together with experimental protocols to estimate them, are needed.
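A small simulation makes this point concrete (the three-level Hamiltonian and coupling strengths below are hypothetical): strong out-of-subspace coupling visibly modulates the Rabi data, while weak coupling leaves no trace discernible by eye:

```python
import numpy as np

def p0(H, times):
    """Survival probability |<0|e^{-iHt}|0>|^2 via eigendecomposition."""
    evals, evecs = np.linalg.eigh(H)
    c2 = np.abs(evecs[0, :]) ** 2
    return np.abs(np.exp(-1j * np.outer(times, evals)) @ c2) ** 2

def three_level(gamma):
    # Hypothetical trial system: unit qubit coupling, third level at
    # energy 3 coupled to |0> with strength gamma.
    return np.array([[0.0, 1.0, gamma],
                     [1.0, 0.0, 0.0],
                     [gamma, 0.0, 3.0]])

t = np.linspace(0.0, 30.0, 1500)
ideal = p0(three_level(0.0), t)                       # perfectly confined
weak = np.max(np.abs(p0(three_level(0.005), t) - ideal))
strong = np.max(np.abs(p0(three_level(0.5), t) - ideal))
# `strong` is a large, visible modulation; `weak` is far below what a
# plot of the Rabi data would reveal, despite non-zero leakage.
```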

Estimation of subspace leakage
By defining the projection operator onto the 2D subspace, Π = |0⟩⟨0| + |1⟩⟨1|, the subspace leakage is given by ε(t) = 1 − Tr[Π ρ(t)], with ρ(t) = U(t)|0⟩⟨0|U†(t). Unfortunately, we cannot calculate ε directly without knowledge of the Hamiltonian. However, we can estimate the subspace leakage experimentally from standard Rabi oscillation data.
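When the Hamiltonian *is* known (e.g. in simulation), this definition can be evaluated directly; the following sketch (hypothetical three-level Hamiltonian) computes the instantaneous leakage 1 − Tr[Πρ(t)]:

```python
import numpy as np

def leakage(H, times):
    """Instantaneous leakage eps(t) = 1 - Tr[Pi rho(t)]
    = 1 - |<0|U(t)|0>|^2 - |<1|U(t)|0>|^2, rho(t) = U(t)|0><0|U(t)^dag."""
    evals, evecs = np.linalg.eigh(H)
    eps = []
    for t in times:
        U = (evecs * np.exp(-1j * evals * t)) @ evecs.conj().T
        psi = U[:, 0]                                # U(t)|0>
        eps.append(1.0 - abs(psi[0]) ** 2 - abs(psi[1]) ** 2)
    return np.array(eps)

def H3(g):
    # Hypothetical three-level trial Hamiltonian, out-of-subspace coupling g.
    return np.array([[0.0, 1.0, g],
                     [1.0, 0.0, 0.0],
                     [g,   0.0, 3.0]])

t = np.linspace(0.0, 20.0, 400)
# Block-diagonal case (g = 0): leakage vanishes identically;
# g > 0: small but strictly non-zero leakage at almost all times.
```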

Perfect confinement
Consider a general N-level system undergoing coherent evolution via a driving Hamiltonian H_N in the closed-system case of no environmental decoherence. If confinement under this Hamiltonian is perfect, H_N has a direct sum decomposition, H_N = H_{2×2} ⊕ H_{(N−2)×(N−2)}, where H_{2×2} represents the control Hamiltonian confined to the qubit subspace, span{|0⟩, |1⟩}, the state |0⟩ being defined by the measurement and the excited state |1⟩ by the allowed transition. For our measurement model the observed Rabi oscillations have the functional form p_0(t) = |⟨0|U(t)|0⟩|². As there is no coupling between the state |0⟩ and states outside the H_{2×2} subspace, we can expand |0⟩ = c_0|λ_0⟩ + c_1|λ_1⟩ in the eigenstates |λ_a⟩ of H_{2×2}, where c_a = ⟨λ_a|0⟩, giving p_0(t) = |c_0|⁴ + |c_1|⁴ + 2|c_0|²|c_1|² cos(ω_{0,1} t), with ω_{0,1} the transition frequency. The Fourier spectrum thus consists of a peak at ω = 0 of height h_0 = |c_0|⁴ + |c_1|⁴ and a peak at ω_{0,1} of height h_{0,1} = |c_0|²|c_1|². Conservation of probability (total population) implies (|c_0|² + |c_1|²)² = |c_0|⁴ + |c_1|⁴ + 2|c_0|²|c_1|² = 1, and hence the heights of the two Fourier peaks for perfect confinement will satisfy the relation h_0 + 2h_{0,1} = 1.
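This peak-height relation is easy to verify numerically for a hypothetical qubit block:

```python
import numpy as np

# Perfect confinement: the peak heights follow from the expansion
# coefficients, h0 = |c0|^4 + |c1|^4 and h01 = |c0|^2 |c1|^2.
H = np.array([[0.0, 0.7], [0.7, 0.2]])   # hypothetical qubit block H_2x2
evals, evecs = np.linalg.eigh(H)
c2 = np.abs(evecs[0, :]) ** 2            # |c_a|^2 = |<lambda_a|0>|^2
h0 = np.sum(c2 ** 2)
h01 = c2[0] * c2[1]
# (|c0|^2 + |c1|^2)^2 = 1  =>  h0 + 2*h01 = 1 for perfect confinement.
```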

Imperfect confinement
If the system experiences leakage to states outside the qubit subspace, the corresponding control Hamiltonian H_N can no longer be reduced to the direct sum representation (4). The Rabi data are now a linear superposition of multiple oscillations corresponding to the different transitions of the N-level system, and the corresponding peak heights in the Fourier spectrum can be expressed in terms of the expansion coefficients of |0⟩ = Σ_{a=0}^{N−1} c_a |λ_a⟩ in the eigenstates |λ_a⟩ of H_N, as h_0 = Σ_a |c_a|⁴ and h_{a,b} = |c_a|²|c_b|². Conservation of probability leads to h_0 + 2 Σ_{a<b} h_{a,b} = 1, so imperfect confinement implies h_0 + 2h_{0,1} < 1. We see from this analysis that the subspace leakage is determined by the cumulative amplitudes of all non-qubit states for a given eigenstate of H_N, which can be calculated from all the peak heights in the Fourier spectrum. However, exact calculation of ε requires identification of all peaks in the Fourier spectrum and knowledge of which peak corresponds to each transition. It is therefore desirable to derive bounds on the subspace leakage that involve only a few dominant, and thus easily identifiable, Fourier peaks.
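The deficit in h_0 + 2h_{0,1} can likewise be checked numerically (hypothetical leaky three-level Hamiltonian; labeling the two lowest eigenstates as the qubit pair assumes weak out-of-subspace coupling):

```python
import numpy as np

def peak_heights(H):
    """All analytic Fourier-peak heights from |0> = sum_a c_a |lambda_a>:
    h0 = sum_a |c_a|^4 (the omega = 0 peak) and h_ab = |c_a|^2 |c_b|^2."""
    evals, evecs = np.linalg.eigh(H)
    c2 = np.abs(evecs[0, :]) ** 2
    h0 = np.sum(c2 ** 2)
    hab = {(a, b): c2[a] * c2[b]
           for a in range(len(c2)) for b in range(a + 1, len(c2))}
    return h0, hab

# Hypothetical leaky three-level system; with weak coupling the two
# lowest eigenstates are the qubit pair, so h01 = h_ab for (a, b) = (0, 1).
H = np.array([[0.0, 1.0, 0.2],
              [1.0, 0.0, 0.0],
              [0.2, 0.0, 3.0]])
h0, hab = peak_heights(H)
total = h0 + 2 * sum(hab.values())       # conservation: always exactly 1
deficit = 1.0 - (h0 + 2 * hab[(0, 1)])   # > 0 signals imperfect confinement
```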

Bounds on subspace leakage
We can derive upper and lower bounds on ε using only the heights of the primary spectral peaks h_0 and h_{0,1}.
Provided Σ_{a≠0,1} |c_a|⁴ ≪ 1, i.e. the subspace leakage is reasonably small, we obtain a tight lower bound for ε as a function of only the two major peak heights, min(ε) = 1 − (h_0 + 2h_{0,1}). The upper bound for ε can also be calculated quite easily: comparison with (11) immediately yields a relation that can be solved for max(ε); the other solution to equation (14) is invalid as a bound, owing to the asymptotic behaviour of both the upper and lower bounds. Since the second term in (13) represents the heights of all the Fourier peaks not associated with the |0⟩ ↔ |1⟩, |0⟩ ↔ |a⟩ or |1⟩ ↔ |a⟩ transitions (for |a⟩ ≠ |1⟩), for a well-confined system it is a very small correction, and consequently the bound is again tight. Therefore, the subspace leakage is bounded above and below by the double inequality (17), min(ε) ≤ ε ≤ max(ε). Note that this double inequality involves only the two main peaks in the Fourier spectrum, i.e. we can bound the subspace leakage without determining the heights of all the peaks. For the trial Hamiltonians (1) and (2) we obtain bounds (of order 0.0497) in close agreement with the actual values of ε computed directly from H_m and H_n.
In both cases the upper bound for ε equals the actual value of ε. This is because both systems are of dimension three, and when estimating max(ε) we neglected terms which vanish identically for a three-level system. Figure 2 shows how the bounds (17) for ε converge as confinement increases (γ → 0) for the test Hamiltonian (21), which is characterized by a static coupling between the qubit states and a variable coupling γ to two higher levels; as γ → 0 the subspace leakage approaches 0 and the bounds for ε become more accurate.
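A sketch of this convergence, using only the lower bound ε_l = 1 − (h_0 + 2h_{0,1}) discussed in section 5, evaluated analytically for a hypothetical four-level test Hamiltonian with variable coupling γ from |0⟩ to two higher levels:

```python
import numpy as np

def lower_bound(gamma):
    """Analytic lower bound eps_l = 1 - (h0 + 2*h01) for a hypothetical
    four-level test Hamiltonian: static qubit coupling plus a variable
    coupling gamma from |0> to two higher levels."""
    H = np.array([[0.0,   1.0, gamma, gamma],
                  [1.0,   0.0, 0.0,   0.0],
                  [gamma, 0.0, 3.0,   0.0],
                  [gamma, 0.0, 0.0,   4.0]])
    evals, evecs = np.linalg.eigh(H)
    c2 = np.abs(evecs[0, :]) ** 2        # |c_a|^2, eigenvalues ascending
    h0 = np.sum(c2 ** 2)
    h01 = c2[0] * c2[1]                  # two lowest states span the qubit
    return 1.0 - (h0 + 2.0 * h01)

gammas = [0.4, 0.2, 0.1, 0.05]
bounds = [lower_bound(g) for g in gammas]
# The bound decreases monotonically toward zero as gamma -> 0.
```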

Finite sampling Fourier analysis
The previous section details how quantitative bounds on the subspace leakage can be obtained, in principle, from the Fourier spectrum of the Rabi data. However, to translate this method into a viable experimental protocol, we need to consider the effects of finite sampling and of taking the discrete Fourier transform (DFT), which raises several issues. First, the Nyquist criterion for sampling [31] must be satisfied, i.e. to avoid aliasing, some rough estimate of the Rabi period T_Rabi is needed to guarantee that at least two sample points are taken per oscillation period, i.e. Δt ≤ T_Rabi/2. The second issue is the resolution of the Fourier spectrum. The frequency resolution is given by Δω = 2π/t_ob, with t_ob the total observation time of the Rabi signal. If the control Hamiltonian induces a non-qubit transition with a frequency within Δω of the primary peak, the DFT will combine the amplitudes for the qubit and non-qubit transitions in the same frequency channel, leading to an overestimate of h_{0,1} and hence of the qubit confinement. To avoid such problems it is necessary to ensure that the total observation time t_ob is long enough. Thus, some estimates of the system parameters are required, although these need not be very accurate and will generally be known on theoretical or experimental grounds.
Finally, the DFT has the property that a pure sinusoidal signal will approach a delta function if there is zero phase difference between the start and the end of the observed signal. If this phase-matching condition is not met, all frequency peaks broaden. Phase matching has already been addressed in the context of identifying single-qubit control Hamiltonians in [25], and we follow the same approach, which essentially involves truncating the Rabi oscillation data at progressively greater values of t_ob so as to maximize a trial function P(t_ob) built from F(ω_p), where F(ω) represents the amplitude of the Fourier spectrum at frequency ω and ω_p the frequency of the maximum Fourier peak. The value of t_ob at which P(t_ob) is maximized is the cutoff time of the Rabi signal that produces the best phase matching for the DFT.
To simulate real experiments, we numerically propagate the initial state |0⟩ under the Hamiltonian H by U(t_k) = exp(−i t_k H) for discrete times t_k = kΔt, where k = 0, 1, . . . , K and KΔt = t_ob. A single measurement at time t_k is simulated by projecting the state U(t_k)|0⟩ onto {|0⟩, |1⟩}, where the probability of obtaining the outcome 0 is p_0 = |⟨0|U(t_k)|0⟩|²; the ensemble average at a single time t_k is determined by dividing the number of zero results by the total number of repeated experiments N_e. For the following numerical simulations we use two trial Hamiltonians: H_a, a five-level system with a perfectly decoupled two-level subspace consisting of the two lowest energy states, and H_b, a five-level system with weak coupling between the qubit sub-manifold and two of the upper levels. We only consider Hamiltonians with couplings between the |0⟩ state and higher levels, as this state is fixed by the measurement basis. We are therefore free to diagonalize the lower block of the Hamiltonian, which also simplifies the comparison between different systems. The out-of-subspace coupling in H_b was chosen such that the leakage from the qubit subspace, ε ≈ 7 × 10⁻⁴, is small (too small to cause noticeable modulations in the Rabi oscillations) yet significant (in fact above certain critical error thresholds) for quantum computing applications. The part of the Hamiltonian governing the qubit dynamics was chosen arbitrarily and is common to all the Hamiltonians examined in this paper, to maintain consistency between simulations. The accuracy of the protocol is not affected by the choice of single-qubit dynamics.
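A minimal simulation along these lines (hypothetical two-level Hamiltonian and sampling parameters) generates binomially sampled Rabi data and reads off the Rabi frequency from the discrete spectrum:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_rabi(H, dt, K, Ne):
    """Ne projective measurements at each t_k = k*dt: binomially sampled
    estimates of p0(t_k) = |<0|exp(-i t_k H)|0>|^2."""
    evals, evecs = np.linalg.eigh(H)
    c2 = np.abs(evecs[0, :]) ** 2
    t = dt * np.arange(K)
    p0 = np.abs(np.exp(-1j * np.outer(t, evals)) @ c2) ** 2
    return rng.binomial(Ne, np.clip(p0, 0.0, 1.0)) / Ne

H = np.array([[0.0, 1.0], [1.0, 0.0]])   # hypothetical confined qubit
dt, K, Ne = 0.1, 600, 200                # dt well below T_Rabi/2 (Nyquist)
data = simulate_rabi(H, dt, K, Ne)
spec = np.abs(np.fft.rfft(data)) / K
omegas = 2 * np.pi * np.fft.rfftfreq(K, d=dt)
omega_rabi = omegas[1 + np.argmax(spec[1:])]   # dominant non-DC peak
```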

Estimating uncertainty in leakage bounds
Estimating the uncertainties in the bounds for ε is crucial, since for the majority of qubit systems it will be practically impossible to prove that the evolution of the system under a given Hamiltonian is completely confined to the two-level subspace, i.e. that ε = 0. Instead, for quality-control purposes it suffices to experimentally confirm that the leakage from the qubit subspace is below a threshold value where it can effectively be ignored; i.e., it is the upper bound max(ε) that is relevant. The accuracy of our estimate for max(ε) will be primarily limited by our ability to accurately determine the main peak heights h_0 and h_{0,1} in the presence of the projection noise appearing in the DFT.
Quantifying this uncertainty is relatively straightforward. Defining the noise function ν(ω) of the Fourier spectrum to be the amplitude of each Fourier channel excluding h_0 = F(0) and h_{0,1} = F(ω_p), the uncertainty in h_0 and h_{0,1} is given by the standard deviation of the noise function, δh = sd[ν(ω)]. From this we can derive the uncertainty associated with the estimated upper bound max(ε) ≡ ε_u.
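The following sketch (hypothetical confined qubit; the observation time is chosen commensurate with the Rabi period, so the peaks are phase matched) estimates h_0, h_{0,1} and δh from simulated data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical confined qubit: p0(t) = cos^2(t), Rabi frequency 2.
# t_ob = 20 Rabi periods exactly, so the peaks are phase matched.
K, Ne = 600, 400
dt = np.pi / 30
t = dt * np.arange(K)
counts = rng.binomial(Ne, np.cos(t) ** 2) / Ne   # projection noise
spec = np.abs(np.fft.rfft(counts)) / K
ip = 1 + np.argmax(spec[1:])            # primary (Rabi) peak channel
h0, h01 = spec[0], spec[ip]
nu = np.delete(spec, [0, ip])           # noise function nu(omega)
dh = np.std(nu)                         # uncertainty in h0 and h01
# For a confined qubit, 1 - (h0 + 2*h01) is zero to within a few dh.
```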
δε_u can be reduced by increasing the number of ensemble measurements N_e taken at each point in the Rabi cycle. Figures 3 and 4 show how the estimate for ε_u converges as N_e is increased for the Hamiltonians (3) and (4), respectively. It should be noted that ε_u ≥ 0, so the lower error bars should strictly be truncated at zero; keeping the error bars symmetric about the data points makes the convergence behaviour clearer. For large values of N_e, ε_u converges to zero for the perfectly confined system governed by H_a, but to the non-zero value ε ≈ 7 × 10⁻⁴ for the imperfectly confined system described by H_b. The observation times were chosen to be t_ob = 30 T_Rabi for each Hamiltonian, to ensure that all peaks are resolved, i.e. that there are no contributions from additional transitions within Δω of the primary peak.

Numerical tests of error bound accuracy
To test the overall accuracy of the uncertainty estimates for ε_u that we can expect to obtain from realistic Rabi oscillation data, we calculated the distance d = |ε̃_u − ε_u| between the simulated value ε̃_u and the analytical value ε_u obtained directly from the Hamiltonian using equation (15). Next, we examined how the protocol behaves for a large number of randomly selected multi-level Hamiltonians. For these simulations we chose N-level Hamiltonians with energy levels {E_k} ≡ {0, 1, 1.5, 2, 2.4, 2.5, 2.9, 3, 3.3, 4}. The coupling vector a = [a_2, . . . , a_9] was then chosen at random in two stages. First, the dimensionality of a was randomly selected, allowing the Hamiltonian to coherently drive any multi-level system with N ∈ {2, 3, . . . , 10}. The non-zero coupling values were then randomly assigned such that each element of a was approximately two orders of magnitude smaller than the qubit coupling term, to ensure that all of the multi-level systems had high confinement. We randomly generated 5000 such Hamiltonians and calculated d(H_k) = |ε̃_u(H_k) − ε_u(H_k)|. The average (analytical) value of ε_u(H_k) for these 5000 trial Hamiltonians was found to be 1.68 × 10⁻⁴. We then examined the ratio R of successful estimates of the subspace leakage, i.e. those within 3σ. This ratio was calculated to be R = 99.9%, the confinement estimates lying outside the error bounds for only three of the randomly generated Hamiltonians. These results are consistent with the expectation that approximately 99.7% of the data should lie within 3σ of the mean, and demonstrate that our methodology for characterizing subspace leakage can indeed be expected to yield accurate upper bounds in the vast majority of cases.

Efficiency of the protocol
The protocol presented in the previous section allows us to determine quantitative bounds on the subspace leakage for imperfect qubits by determining only the main peaks in the Fourier spectrum. An alternative strategy is to try to identify all peaks in the Fourier spectrum: the presence of any peaks in addition to the two main peaks is indicative of subspace leakage, and a quantitative estimate of the leakage rate can be obtained by determining the heights of the additional peaks. Both approaches have potential advantages and disadvantages. The former requires only the identification of the two main peaks, but these need to be clearly resolved and their heights determined with high precision. The latter does not require precise estimates of peak heights, but relies on the detection of additional peaks, which for high confinement will be much smaller than the major peaks and are likely to be difficult to discriminate from the noise floor. This raises the question of which strategy is more efficient for deciding whether the subspace leakage of a given qubit is below a certain error threshold.
To answer this question, we performed a series of numerical simulations comparing the total number of measurements required to ascertain that the lower bound on the leakage rate, ε_l = 1 − (h_0 + 2h_{0,1}), is non-zero within error bounds, versus identifying a statistically significant third peak in the Fourier spectrum, indicating an out-of-subspace transition, for various trial Hamiltonians. For the purpose of the simulations we consider a three-level trial Hamiltonian with a variable coupling γ to a third level, the four-level system governed by the Hamiltonian (21), and a six-level system, representing systems with variable but equal coupling to between one and four out-of-subspace levels, respectively. The lower bound ε_l is taken to be non-zero for a discrete dataset if the analytical value of the lower bound calculated directly from the Hamiltonian exceeds six times the uncertainty, δ(ε_l), for the discrete data calculated from the simulated Fourier spectrum.
Six times the uncertainty in ε_l represents the total distance between the maximum and minimum possible values of ε_l (using 3σ upper and lower confidence bounds), and this interval should be smaller than the analytical value of ε_l. A peak F(ω′) in the discrete Fourier spectrum is taken to be significant if it is more than three standard deviations δh = sd[ν(ω)] above the projection noise floor ν̄(ω), i.e. F(ω′) > ν̄(ω) + 3δh.
This definition will slightly underestimate the number of ensemble measurements required, as it only represents the point where the third peak exceeds at least 99.7% of the noise channels.
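A sketch of this significance test (the spectrum below is specified directly by hypothetical eigenfrequencies and overlaps, chosen commensurate with t_ob so that every transition falls exactly on a DFT bin):

```python
import numpy as np

rng = np.random.default_rng(7)

def significant_channels(spec, exclude, nsigma=3.0):
    """Channels more than nsigma standard deviations of the noise
    function above the mean noise floor, excluding known peaks."""
    noise = np.delete(spec, exclude)
    floor, dh = noise.mean(), noise.std()
    return [i for i in range(len(spec))
            if i not in exclude and spec[i] > floor + nsigma * dh]

# Hypothetical leaky system specified by eigenfrequencies and overlaps:
lam = np.array([0.0, 2.0, 5.0])
c2 = np.array([0.60, 0.39, 0.01])
K, dt, Ne = 1000, np.pi / 10, 2000       # t_ob = 100*pi, d_omega = 0.02
t = dt * np.arange(K)
p0 = np.abs(np.exp(-1j * np.outer(t, lam)) @ c2) ** 2
counts = rng.binomial(Ne, np.clip(p0, 0.0, 1.0)) / Ne
spec = np.abs(np.fft.rfft(counts)) / K
ip = 1 + np.argmax(spec[1:])             # primary (qubit) peak: omega = 2
extra = significant_channels(spec, exclude=[0, ip])
# Channels at omega = 3 (bin 150) and omega = 5 (bin 250) correspond to
# the out-of-subspace transitions and should be flagged as significant.
```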
For the simulations, a range of out-of-subspace coupling strengths γ was chosen for each of the trial Hamiltonians (21), (29) and (30), and the corresponding subspace leakage rate ε as well as the analytical lower bound ε_l were computed. For each of the Hamiltonians we then simulated experimental Rabi data and computed the discrete Fourier spectrum. The observation time in all cases was 30 Rabi cycles. The number of ensemble measurements for the Rabi data simulations was gradually increased until a statistically significant third peak was found (32), or (31) was satisfied, respectively. Figure 7 shows the number of ensemble measurements N_e necessary to conclude that the system is imperfect, in the sense that the leakage is statistically significant, for the three-level system governed by (29), for both methods; the horizontal axis represents the analytically calculated confinement ε(γ). Both curves scale roughly as 1/√N_e, which is consistent with the scaling of the projection noise, and hence of the errors associated with estimating ε_l and detecting a statistically significant third peak. For the three-level system, it is clear that confirming imperfect confinement by verifying (31) requires more ensemble measurements than detecting a third peak according to (30).

[Figure 7. Number of ensemble measurements required to ascertain statistically significant subspace leakage (imperfect confinement) for the three-level system governed by (27), as a function of the analytically calculated confinement, using the confinement equations (29) and by directly identifying the third transition peak.]

This is not too surprising, since for a three-level system there is only one additional transition, |0⟩ ↔ |2⟩, and from the derivations of the confinement equations (9) there is a conservation law for the cumulative sum of all the peak heights. Hence, if the number of possible additional peaks is small, then for a given level of confinement the additional peaks will be greater, and thus easier to detect, than for a system with weak coupling to a large number of out-of-subspace levels, and hence many small transition peaks. We therefore conjecture that estimating subspace leakage using (31) becomes preferable for systems with coupling to multiple out-of-subspace levels. The results of numerical simulations for the Hamiltonians (21) and (30), shown in figure 8, support this conjecture. We observe the same general scaling behaviour as for the three-level system. For the four-level system, although searching for the additional transition peak is still somewhat more efficient, the difference between the two methods is small. For the six-level system the curves have swapped position, i.e. using the confinement equations has become the more efficient way to ascertain statistically significant subspace leakage.
In appendix A, we have included simulations for similar Hamiltonians up to ten levels to show the effective cross-over of the curves and how the efficiency difference between the two methods increases with the number of additional levels. Note that for all the simulations we have endeavored to look at approximately the same range of subspace leakage. From these simulations it is clear that searching for the third peak in the Fourier spectrum is only really beneficial for systems with at most one extra transition. Hence, the proposed method for estimating subspace leakage will be more efficient than obvious alternatives in most cases.

The effect of decoherence
It is well known that even if subspace leakage is theoretically suppressed for an arbitrary control field, it is unlikely that decoherence will also be suppressed. Hence, we need to examine whether the proposed confinement protocol remains effective in the open-system case, when a qubit is subject to decoherence possibly of the same order as, or greater than, the subspace leakage.
The study of arbitrary decoherence for N-level systems is a lengthy discussion, encompassing Markovian and possibly non-Markovian processes. Even for the simpler case of Markovian decoherence we would need to consider the complete N-level decoherence model, with all the associated restrictions of completely positive maps [32]. Hence, we instead focus on a restricted case to show that, for a simple example, decoherence does not invalidate the protocol. It should be stressed that this represents only a preliminary analysis under a specific model of decoherence; further work will involve investigating more complicated and system-specific decoherence effects, such as N-level dephasing and spontaneous emission, as well as possibly non-Markovian decoherence.
We consider a perfectly confined qubit that undergoes Markovian decoherence and hence can be described by the quantum Liouville (Lindblad master) equation dρ/dt = −i[H, ρ] + Σ_k Γ_k L_k[ρ], where L_k[ρ] = ([L_k, ρL_k†] + [L_kρ, L_k†])/2, H represents the single-qubit control Hamiltonian, and the L_k are the Lindblad quantum jump operators describing the effect of the environment on the system, each parameterized by a rate Γ_k ≥ 0.
For a basic decoherence analysis we restrict the Lindblad operators to the Pauli set, {L_k} = {X, Y, Z}, with rates Γ_x, Γ_y, Γ_z, and consider a perfectly confined control Hamiltonian parameterized by an angle θ. This decoherence model is sufficient to describe pure dephasing, as well as symmetric population relaxation processes in any basis, although not asymmetric relaxation processes. Including each Pauli Lindblad term with an associated decoherence rate eliminates the problem of a preferential basis for qubit decoherence, since any basis change of the overall system only changes the form of the Hamiltonian. We can solve the master equation under this model using the Bloch vector formalism. Expressing the density matrix in terms of the Bloch vector, the Fourier peaks become Lorentzians characterized by the rates α = 2(Γ_y + Γ_z + cos²(θ)(Γ_x − Γ_z)) and β = Γ_x(1 + sin²(θ)) + Γ_y + Γ_z(2 − sin²(θ)), and h_0 contains a δ(ω) offset due to the fact that we are measuring the observable P_0. To describe how the maximum of each Lorentzian varies, we integrate h_0 and h_{0,1} over an interval η around each peak. Hence, under decoherence, the peak heights in the Fourier spectrum vary as a function of the integration window η and the decoherence rates α, β. This is consistent, since as α, β → 0 both arctan functions approach π/2 and h_0 + 2h_{0,1} = 1. The integration window η is analogous to the frequency resolution Δω of the Fourier transform, while the total area of each Lorentzian equals the corresponding peak height when Γ_{x,y,z} = 0. Hence, for small Γ_{x,y,z}, we can simply choose the resolution of the Fourier transform such that the entire Lorentzian is essentially contained within the data channel of the primary peak.
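As an illustrative sketch of this model (hypothetical decoherence rates; a simple RK4 integrator rather than the Bloch-vector solution), the master equation can be integrated directly, showing the exponential damping of the Rabi oscillations that broadens the Fourier peaks into Lorentzians:

```python
import numpy as np

# RK4 integration of the Lindblad master equation for a resonantly
# driven, perfectly confined qubit with Pauli dissipators.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.5 * sx                                         # Rabi frequency 1
lindblads = [(0.01, sx), (0.01, sy), (0.01, sz)]     # hypothetical rates

def drho(rho):
    d = -1j * (H @ rho - rho @ H)
    for g, L in lindblads:
        d += g * (L @ rho @ L.conj().T
                  - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return d

dt, steps = 0.02, 15000                 # evolve to t = 300 >> 1/Gamma
rho = np.array([[1, 0], [0, 0]], dtype=complex)
p0 = [1.0]
for _ in range(steps):
    k1 = drho(rho)
    k2 = drho(rho + 0.5 * dt * k1)
    k3 = drho(rho + 0.5 * dt * k2)
    k4 = drho(rho + dt * k3)
    rho = rho + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    p0.append(rho[0, 0].real)
p0 = np.array(p0)
# Early oscillations still reach near zero; at long times p0 relaxes
# to 1/2 as the off-diagonal coherences decay.
```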
Consider the case where we wish to ensure that the subspace leakage does not exceed ζ_max. Using the upper bound for the subspace leakage (15), and assuming that the integration interval is approximately equal to the frequency resolution of the DFT (i.e. η ≈ Δω), we obtain the requirement

Δω ≳ Γ/(2πζ_max),   (39)

where the last step assumes α ≈ β = Γ. When the Rabi frequency is much greater than the decoherence rate (as is necessary for any qubit realistically considered for quantum information processing), the entire Lorentzian broadening caused by decoherence will be contained within one frequency channel. Thus, equation (39) allows us to calculate the maximum frequency resolution of the Fourier transform for successful leakage estimation using our protocol. For example, if Γ ≈ 10⁻⁴ s⁻¹ and we wish to confirm that the subspace leakage is at most ζ_max = 10⁻⁸, then the resolution of the Fourier transform cannot exceed Δf ≈ 250 Hz if only the primary peak channels are used. This restriction on the frequency resolution can be lifted by including multiple channels around each central peak when estimating the peak area. Although the decoherence model considered is not the most general possible for an imperfectly confined control Hamiltonian, this calculation demonstrates that decoherence does not invalidate the protocol for estimating subspace leakage under a common decoherence model. A more detailed analysis, considering a full N-level decoherence model including spontaneous emission and absorption processes and the possibility of system-specific non-Markovian decoherence, is desirable but beyond the scope of the current paper.
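The worked example can be reproduced in a few lines, assuming the tail-weight scaling Γ/(2πΔω) quoted above (the conversion Δf = Δω/2π between angular and ordinary frequency is made explicit):

```python
import numpy as np

def max_resolution_hz(gamma, zeta_max):
    """Coarsest DFT channel width (in Hz) such that the Lorentzian tail
    weight Γ/(2πΔω) left outside the primary peak channels stays below
    the target leakage bound zeta_max."""
    domega = gamma / (2 * np.pi * zeta_max)  # channel width in rad/s
    return domega / (2 * np.pi)              # convert to Hz

print(max_resolution_hz(1e-4, 1e-8))  # ≈ 253 Hz, matching the text's ~250 Hz
```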

Conclusions
We have introduced an intrinsic protocol for quantifying the degree of subspace leakage in a realistic 'qubit' system. The protocol relies on minimal theoretical assumptions regarding qubit structure and control, and utilizes a measurement model that is restrictive but common to a wide range of qubit systems. We have introduced a quantitative measure of subspace leakage, and shown that the discretization noise resulting from finite sampling does not limit the ability of the protocol to quantify (with appropriate error/confidence bounds) the subspace leakage of well-confined (near-perfect) qubits.
The ability to experimentally characterize subspace leakage to a high degree of accuracy using automated, system-independent methods that rely on the intrinsic control and measurement apparatus of the quantum device (required for standard quantum information processing) will be vital for the commercial success of quantum nanotechnology. This protocol represents one of the first steps in a general library of characterization techniques that will be required as 'quality control' protocols once mass manufacturing of qubit systems becomes common.
Although in this discussion the qubit state |1⟩ is defined only through the strongest transition, it should be emphasized that if confinement estimates are made using multiple control fields (for example, two separate Hamiltonians which induce orthogonal-axis rotations), the computational |1⟩ state must be common to both Hamiltonians. This is not a significant problem since, for well-engineered qubits, the computational |1⟩ state will be known on theoretical grounds.
There are many open problems, including subspace leakage estimates for systems undergoing a whole range of potential decoherence processes, quantifying confinement for multi-qubit control Hamiltonians, and combining these schemes with other proposed methods for system characterization. Hopefully, in the near future, a complete set of characterization protocols will be developed to augment large-scale manufacturing techniques, allowing for an efficient and speedy transition of quantum technology from the physics laboratory to the commercial sector.

[Figure: number of ensemble measurements required to ascertain significant subspace leakage (imperfect confinement) for the five-level system (a) and the eight-level system (b), using the confinement equations and identifying a third peak.]
[Figure: number of ensemble measurements required to ascertain significant subspace leakage (imperfect confinement) for the ten-level system, using the confinement equations and identifying a third peak.]

The initial condition S(0) = (0, 0, 1/2)^T thus gives

FT[z(t)] = [c²d² + (2Γ_x + 2Γ_z + iω)(2Γ_y + 2Γ_z + iω)] / (2{(2(Γ_x + Γ_y) + iω)[c²d² + (2Γ_x + 2Γ_z + iω)(2Γ_y + 2Γ_z + iω)] + d²s²(2Γ_y + 2Γ_z + iω)}),

where c = cos(θ) and s = sin(θ). The subsequent expansions are too lengthy to include here; however, standard symbolic toolkits such as Mathematica can handle such expressions. The first step is to consider only the real component of FT[z(t)]. Next, the denominator is expanded to second order around ω = 0 or ω = ±d. After this, we expand the numerator and denominator, neglecting all terms of the form Γ_{x,y,z}/d and smaller, assuming Γ_{x,y,z} ≪ d, and being careful to note that for the expansions around ω = ±d we must keep terms of the form ωΓ_{x,y,z}/d. After simplifying the expressions, we find Lorentzian line shapes with half-widths

α = 2(Γ_y + Γ_z + cos²(θ)(Γ_x − Γ_z))   and   β = Γ_x(1 + sin²(θ)) + Γ_y + Γ_z(2 − sin²(θ)).
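The closed-form transform can be checked numerically. The sketch below (illustrative, symmetric rates; the Bloch equations assume the H = (d/2)(cos θ Z + sin θ X) form used in the text) evolves S(t) by diagonalizing the Bloch matrix and compares the one-sided Fourier integral of z(t) against the analytic expression at a probe frequency:

```python
import numpy as np

# Parameters (illustrative): Rabi frequency d, axis angle theta, Pauli rates.
d, theta = 2 * np.pi, np.pi / 3
gx = gy = gz = 0.05
c, s = np.cos(theta), np.sin(theta)

# Bloch equations dS/dt = A S for H = (d/2)(c Z + s X) with Pauli dissipators.
A = np.array([[-2 * (gy + gz), -d * c, 0.0],
              [d * c, -2 * (gx + gz), -d * s],
              [0.0, d * s, -2 * (gx + gy)]])

# Diagonalize A once and evaluate S(t) = V e^{Λt} V^{-1} S(0) on a time grid.
evals, V = np.linalg.eig(A)
coeff = np.linalg.solve(V, np.array([0.0, 0.0, 0.5]))  # S(0) = (0, 0, 1/2)
t = np.linspace(0.0, 100.0, 200_001)
z = (V[2, :, None] * np.exp(np.outer(evals, t)) * coeff[:, None]).sum(axis=0).real

# One-sided Fourier transform of z(t) at a probe frequency omega (trapezoid rule)...
w = 1.0
f = z * np.exp(-1j * w * t)
dt = t[1] - t[0]
numeric = dt * (f.sum() - 0.5 * (f[0] + f[-1]))

# ...compared against the closed-form FT[z(t)] quoted in the text.
A_x = 2 * (gy + gz) + 1j * w   # decay constant entering the x equation
B_y = 2 * (gx + gz) + 1j * w   # decay constant entering the y equation
G_z = 2 * (gx + gy) + 1j * w   # decay constant entering the z equation
num = A_x * B_y + (c * d) ** 2
analytic = num / (2 * (G_z * num + d**2 * s**2 * A_x))

print(abs(numeric - analytic))  # small: numerical and analytic transforms agree
```

The decay of every Bloch component makes the integral converge, so a finite window T ≫ 1/Γ suffices; with T = 100 the neglected tail is of order e⁻²⁰.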