Analog nature of quantum adiabatic unstructured search

The quantum adiabatic unstructured search algorithm is one of only a handful of quantum adiabatic optimization algorithms to exhibit provable speedups over their classical counterparts. With no fault-tolerance theorems to guarantee the resilience of such algorithms against errors, understanding the impact of imperfections on their performance is of both scientific and practical significance. We study the robustness of the algorithm against various types of imperfections: limited control over the interpolating schedule, Hamiltonian misspecification, and interactions with a thermal environment. We find that the unstructured search algorithm's quadratic speedup is generally not robust to the presence of any one of the above non-idealities, and that maintaining it imposes unrealistic conditions on how the strength of these noise sources must scale with problem size.

To date, there are only a handful of QAO algorithms whose runtimes are provably superior to those of their classical counterparts [28][29][30][31][32]. First and foremost among these is the quantum adiabatic unstructured search (QAUS) algorithm, an oracular algorithm for identifying a marked state in an unstructured list. Originally devised by Roland and Cerf [28] (but see also [6,33] for earlier variants), the algorithm consists of encoding the search space in a 'problem Hamiltonian,' $H_p$, that is constant across the entire search space except for one 'marked' configuration $|m\rangle = |m_1 m_2 \cdots m_n\rangle$ whose cost is lower than the rest. Here, $m_i \in \{0,1\}$ are the bits of the n-bit marked configuration (the number of elements in the search space is thus $N = 2^n$, where n is the number of bits). Similar to its gate-based counterpart, Grover's unstructured search algorithm [34], the runtime of the Roland and Cerf algorithm scales as $O(\sqrt{N})$, which is to be contrasted with the linear scaling with N of the number of queries required classically for finding the marked item.
While the asymptotic scaling of the runtime of quantum adiabatic algorithms such as QAUS gives an accounting of the 'time resources' used by the algorithm, one should be careful to also account for other resources, particularly precision, which arise due to the analog nature of the algorithm [35][36][37]. Failing to do so has practical ramifications, since any physical implementation of an analog algorithm is expected to have some fixed precision.
In this study, we examine the robustness of the QAUS algorithm to finite precision, as exhibited by several noise models [38,39]. For completeness, we also revisit the thermal robustness of the algorithm [40][41][42][43][44][45][46] using a specific decoherence model. While these forms of imperfection are expected to appear together in any physical implementation, we treat each type separately here. We find that the quadratic speedup of the QAUS algorithm is sensitive to both finite precision and thermal effects, requiring both the precision and the temperature to scale in physically unreasonable ways to maintain the quantum speedup. For the former, we demonstrate this using two forms of Hamiltonian implementation error that shift the position of the minimum gap, and we find that the quadratic speedup can be maintained only with a precision that scales exponentially with the system size.
While it is well accepted that scalable quantum computing is not possible without fault tolerance [47], there is as yet no known accuracy-threshold theorem for the adiabatic paradigm of quantum computing. Therefore, while fault tolerance schemes can be applied to the gate-based approach for solving unstructured search [34], no equivalent schemes exist to date for the adiabatic approach. Our study therefore calls into question the practical significance of the QAUS asymptotic speedup in the absence of physically meaningful schemes to mitigate and correct for these errors. Specifically, if we are to rely on such speedups to produce a significant separation between the computational costs of quantum and classical algorithms at some target problem size, there is a significant engineering challenge in realizing the necessary quantum system with sufficiently high precision, a feat that becomes increasingly harder with growing system size.
We begin with a brief overview of the algorithm and then move on to discuss the various types of imperfections considered and their impact on performance. In the concluding section we discuss the meaning and implications of our results.

The QAUS algorithm
The unstructured search problem Hamiltonian is
$$H_p = \mathbb{1} - |m\rangle\langle m|, \qquad (1)$$
where $|m\rangle\langle m|$ is the projection onto the marked state, which belongs to the computational basis $\{|x\rangle\}_{x=0}^{N-1}$. The algorithm interpolates between a 'beginning' Hamiltonian $H_b = \mathbb{1} - |+\rangle\langle +|$, where $|+\rangle = \frac{1}{\sqrt{N}}\sum_{x=0}^{N-1}|x\rangle$ is the uniform superposition state, and the problem Hamiltonian,
$$H(s) = (1-s)H_b + s H_p, \qquad (2)$$
where we have assumed the boundary conditions $s(0)=0$ and $s(t_f)=1$ at the beginning and end of the interpolation, respectively. While the efficiency of a generic adiabatic algorithm may depend sensitively on the form of $H_b$ [14,51], the one-dimensional projection above gives rise to the optimal scaling performance [28].
If the initial state is taken to be the ground state of H(0), i.e. $|+\rangle$, then the evolution according to H(s) is restricted to the two-dimensional subspace spanned by $|m\rangle$ and $|m^\perp\rangle \equiv \frac{1}{\sqrt{N-1}}\sum_{x\neq m}|x\rangle$. The ground state and first excited state of the system are in this subspace throughout the interpolation and can be written as s-dependent superpositions $|\varepsilon_{0,1}(s)\rangle = a_{0,1}(s)|m\rangle + b_{0,1}(s)|m^\perp\rangle$, with energies
$$E_{0,1}(s) = \frac{1}{2}\left(1 \mp \sqrt{1 - 4s(1-s)\left(1 - \frac{1}{N}\right)}\right), \qquad (3)$$
so that the gap $\Delta(s) = E_1(s) - E_0(s)$ attains its minimum value $\Delta_{\min} = 1/\sqrt{N}$ at s = 1/2. The remaining N−2 energy eigenstates lie outside the aforementioned two-dimensional subspace and have energy 1 throughout the interpolation. For later convenience, we write them as superpositions of pairs of computational-basis states $|f(j)\rangle$ and $|\overline{f(j)}\rangle$, where $\overline{f(j)}$ is the integer associated with the negation of the bit-representation of the integer f(j). This particular form of the excited states is useful because negating all bits flips the sign of every $\sigma_i^z$ eigenvalue, so $\sigma_i^z$ maps each such superposition onto its partner up to a sign, irrespective of the qubit index i and the state $|f(j)\rangle$; this yields simple relations for the matrix elements of $\sigma_i^z$ between the instantaneous eigenstates, which we use in the thermal analysis below. The optimized annealing schedule of Roland and Cerf [28] that defines the QAUS algorithm satisfies a 'local' adiabatic condition [29,30]:
$$\left|\frac{\mathrm{d}s}{\mathrm{d}t}\right| \leq \epsilon\,\Delta^2(s), \qquad (6)$$
where ε is a small constant. The optimized annealing schedule satisfying the interpolation boundary conditions is given by
$$t(s) = \frac{N}{2\epsilon\sqrt{N-1}}\left[\arctan\!\left(\sqrt{N-1}\,(2s-1)\right) + \arctan\sqrt{N-1}\right], \qquad (7)$$
or, upon inversion,
$$s(t) = \frac{1}{2} + \frac{1}{2\sqrt{N-1}}\tan\!\left(\frac{2\epsilon\sqrt{N-1}}{N}\,t - \arctan\sqrt{N-1}\right), \qquad (8)$$
with the optimal runtime being [28]
$$t_f = \frac{N}{\epsilon\sqrt{N-1}}\arctan\sqrt{N-1} \approx \frac{\pi}{2\epsilon}\sqrt{N}, \qquad (9)$$
i.e. it is proportional to the square root of the dimension of the Hilbert space, similarly to its gate-based counterpart [34]. For a sufficiently small ε, this choice guarantees that a system prepared in the ground state of H(0) remains close to the instantaneous ground state throughout the evolution under H(s).
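As a sanity check on these expressions, the gap and the Roland-Cerf runtime can be evaluated numerically. The following is a minimal sketch of ours (not the paper's code); the function names and the choice ε = 0.1 are arbitrary:

```python
import numpy as np

def gap(s, N):
    # Spectral gap of H(s) within the 2D subspace: sqrt(1 - 4s(1-s)(1 - 1/N)).
    return np.sqrt(1.0 - 4.0 * s * (1.0 - s) * (1.0 - 1.0 / N))

def rc_time(s, N, eps=0.1):
    # Roland-Cerf local-adiabatic schedule t(s); rc_time(1.0, N) is the runtime t_f.
    r = np.sqrt(N - 1.0)
    return (N / (2.0 * eps * r)) * (np.arctan(r * (2.0 * s - 1.0)) + np.arctan(r))

for n in (8, 12, 16):
    N = 2.0 ** n
    # minimum gap is 1/sqrt(N); t_f / sqrt(N) approaches pi / (2 eps)
    print(n, gap(0.5, N), rc_time(1.0, N) / np.sqrt(N))
```

The ratio $t_f/\sqrt{N}$ tends to $\pi/(2\epsilon)$, reproducing the quadratic speedup of the runtime.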

Finite schedule precision
The QAUS algorithm is analog in nature, in that it requires continuously varying the strengths of H_b and H_p throughout the evolution [28][29][30][52]. For the local adiabatic interpolation, equation (7), the annealing schedule s(t) changes exponentially slowly around the minimum gap, which is of order $1/\sqrt{N}$, in a region of width of order $1/\sqrt{N}$ [53,54]. In any conceivable physical setting, however, we expect only limited control over the interpolating schedule, and here we ask whether this restriction adversely affects the performance of the QAUS algorithm.
We begin our exploration by specifying the schedule s(t) as a piecewise-linear schedule between equally spaced time points $t_0 = 0, t_1, t_2, \ldots$, with $t_j = j\Delta t$ for different spacings Δt, such that the schedule at the points t_j coincides with the original QAUS schedule given by equation (7) (see figure 1(a)). A numerical investigation reveals that a piecewise-linear schedule with only two intermediate points (a three-piece interpolation) suffices to achieve the quadratic speedup. This is demonstrated in figure 1(b), which depicts the probability of success P_s, the probability of measuring the marked state at the end of the evolution, as a function of problem size n for three- and four-piece interpolations. The results show that already with a three-piece schedule and a total time given by equation (9), a constant (with system size) probability of success is achieved. Interpolations with more pieces give, as expected, higher success probabilities.
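To make the piecewise-linear construction concrete, the dynamics can be simulated directly in the two-dimensional subspace. The sketch below is ours (not the paper's code): it represents H(s) as a 2×2 matrix in the $\{|m\rangle, |m^\perp\rangle\}$ basis, places equally spaced time knots on the Roland-Cerf schedule, and integrates the Schrödinger equation with exact midpoint-Hamiltonian steps; ε = 0.1 and the step count are arbitrary choices:

```python
import numpy as np

def h2(s, N):
    # H(s) = 1 - (1-s)|+><+| - s|m><m| restricted to span{|m>, |m_perp>}.
    v = np.array([1.0 / np.sqrt(N), np.sqrt((N - 1.0) / N)])  # |+> in this basis
    return np.eye(2) - (1.0 - s) * np.outer(v, v) - s * np.diag([1.0, 0.0])

def rc_time(s, N, eps=0.1):
    # Roland-Cerf schedule t(s); the total runtime is rc_time(1.0, N).
    r = np.sqrt(N - 1.0)
    return (N / (2.0 * eps * r)) * (np.arctan(r * (2.0 * s - 1.0)) + np.arctan(r))

def success_prob(n, pieces=3, steps=4000, eps=0.1):
    # Success probability under a piecewise-linear schedule whose equally spaced
    # time knots coincide with the Roland-Cerf schedule.
    N = 2.0 ** n
    T = rc_time(1.0, N, eps)
    tk = np.linspace(0.0, T, pieces + 1)                  # knot times
    sg = np.linspace(0.0, 1.0, 20001)
    sk = np.interp(tk, rc_time(sg, N, eps), sg)           # invert t(s) at the knots
    psi = np.array([1.0 / np.sqrt(N), np.sqrt((N - 1.0) / N)], dtype=complex)
    dt = T / steps
    for t in np.linspace(0.5 * dt, T - 0.5 * dt, steps):
        s = np.interp(t, tk, sk)                          # piecewise-linear schedule
        w, U = np.linalg.eigh(h2(s, N))
        psi = U @ (np.exp(-1j * w * dt) * (U.conj().T @ psi))
    return abs(psi[0]) ** 2                               # population of |m>

print(success_prob(8, pieces=3))
```

Increasing `pieces` recovers the smooth schedule and, with it, a success probability close to one.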
We thus find that the smooth s(t) schedule of equation (8) is not necessary for obtaining a quadratic speedup, provided that the linear slope at the minimum gap, s = 1/2, scales as 1/N. Since the region of the minimum gap shrinks exponentially with n, as $1/\sqrt{N} = 2^{-n/2}$, this requires 'hitting' the location of the minimum gap with increasing precision as the problem size grows.
To illustrate the above point, we consider the scenario of a slightly shifted schedule s(t) that 'misses' the location of the minimum gap by a small but fixed amount. This is equivalent to the case where the Hamiltonian itself is slightly misspecified:
$$H_\chi(s) = H(s + \chi),$$
where χ is a small fixed constant and the schedule s(t) is taken to be the unperturbed one (equation (8)). For the above Hamiltonian, the gap is minimal at $s^* = \frac{1}{2} - \chi$. By employing the original QAUS annealing schedule, it is easy to see that there will always be a problem size n* beyond which the schedule does not slow down sufficiently in the vicinity of the minimum gap. We confirm these expectations in figure 2 with simulation results for different values of the displacement χ, corresponding to displaced minimum gaps. Any nonzero value of χ (equivalently, any nonzero displacement of the minimum gap) eventually leads to an exponentially decreasing probability of success, with the transition to exponential behavior occurring at larger values of n for smaller displacements χ.
To have the quadratic speedup, a fixed success probability must be maintained with growing system size. The results in figure 2 show that to achieve this, the distance of s* from 1/2 must be decreased accordingly. We can ask how large a perturbation is allowed, or equivalently how many bits of precision are required, for the schedule to achieve this. Since the gap is small only within a region of width $1/\sqrt{N} = 2^{-n/2}$, we expect to require approximately n/2 bits of precision. Thus the schedule must be precise to within O(n) bits in order to maintain the quadratic speedup. This is confirmed by the numerical data in figure 2, where we see that for n = 11, 13, 15, 17, each increment of n by two requires approximately a factor of 2 decrease in the distance of s* from 1/2. We further discuss the feasibility of this increasing precision requirement in the concluding section.
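The bit-counting argument above amounts to a one-line calculation; the following sketch (ours, with a hypothetical helper name) makes it explicit:

```python
import math

def schedule_bits(n):
    # Bits needed to resolve the minimum-gap region, of width ~ 1/sqrt(N) = 2**(-n/2),
    # inside the unit interval s in [0, 1]: about n/2 bits.
    width = 2.0 ** (-n / 2.0)
    return math.ceil(math.log2(1.0 / width))

for n in (11, 13, 15, 17):
    # each increment of n by 2 costs one more bit, i.e. a factor of 2 in s* - 1/2
    print(n, schedule_bits(n))
```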

Noisy Hamiltonian
The noise model in the previous section still restricted the unitary evolution to the two-dimensional subspace spanned by $|m\rangle$ and $|m^\perp\rangle$. We now extend our analysis by considering noise that prevents the evolution from being restricted to this subspace. Specifically, we consider the QAUS algorithm perturbed by a noise Hamiltonian $\tilde{H}$, such that the total Hamiltonian is now given by $H'(s) = H(s) + \tilde{H}$, where $\tilde{H}$ has matrix elements in the $\{|0\rangle, \ldots, |N-1\rangle\}$ basis that are drawn randomly from a Gaussian distribution with mean zero and standard deviation σ. In our model the noise matrix elements are fixed throughout the evolution, in contrast to the time-dependent noise model studied in [38]. The adaptive-step Runge-Kutta-Fehlberg algorithm was used for the efficient numerical solution of the time-dependent Schrödinger equation [55,56]. Figure 3 shows the dependence of the probability of success P_s on σ for various values of N. The data can be fitted by
$$P_s \approx \left(1 - \frac{1}{N}\right)e^{-c N\sigma^2} + \frac{1}{N},$$
with c a fit constant. The probability of success approaches 1/N in the large-noise limit; in this limit the Hamiltonian is essentially random, so measuring the marked state occurs with probability 1/N. The initial exponential decay of P_s is a function of $N\sigma^2$. This means that for a constant noise strength σ, the probability of success decays exponentially with N, and the only way to mitigate this is to require that the noise strength σ scale as $1/\sqrt{N}$.
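This noise model is straightforward to reproduce in a small simulation. The sketch below is ours and differs from the text's numerics in one respect: instead of a Runge-Kutta-Fehlberg integrator it uses exact exponentials of the midpoint Hamiltonian, which suffices at these sizes; the seed, ε = 0.1, and the step count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_qaus_success(n, sigma, steps=2000, eps=0.1):
    # QAUS in the full N-dimensional space with a fixed Gaussian perturbation H~.
    N = 2 ** n
    plus = np.full(N, 1.0 / np.sqrt(N))
    Hb = np.eye(N) - np.outer(plus, plus)
    Hp = np.eye(N)
    Hp[0, 0] = 0.0                                  # take |m> = |0...0>
    A = rng.normal(0.0, sigma, (N, N))
    Ht = np.triu(A) + np.triu(A, 1).T               # symmetric; elements ~ N(0, sigma)
    r = np.sqrt(N - 1.0)
    sg = np.linspace(0.0, 1.0, 5001)                # unperturbed RC schedule t(s)
    tg = (N / (2 * eps * r)) * (np.arctan(r * (2 * sg - 1)) + np.arctan(r))
    T = tg[-1]
    psi = plus.astype(complex)
    dt = T / steps
    for t in np.linspace(0.5 * dt, T - 0.5 * dt, steps):
        s = np.interp(t, tg, sg)
        w, U = np.linalg.eigh((1 - s) * Hb + s * Hp + Ht)
        psi = U @ (np.exp(-1j * w * dt) * (U.conj().T @ psi))
    return abs(psi[0]) ** 2                         # probability of the marked state

print(noisy_qaus_success(6, 0.0), noisy_qaus_success(6, 0.05))
```

Averaging over many noise draws at several values of σ reproduces the qualitative dependence on $N\sigma^2$ discussed above.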

Interaction with a thermal bath
The robustness of the QAUS algorithm in the presence of interactions with an external environment has been extensively studied [40][41][42][43][44][45]. A generic interaction breaks the symmetry that restricts the system evolution to the lowest two eigenstates, and for completeness we show here how the exponential number of excited states within a constant energy gap places (unrealistic) requirements on the temperature (or the overall energy scale of the Hamiltonian) to maintain performance, even for what is possibly the most innocuous noise model [57]. We consider a model of decoherence between a quantum annealing system of qubits and a thermal environment described by the Markovian adiabatic master equation [58]. (We assume that this model holds throughout the anneal, even though we expect the validity conditions of its microscopic derivation to break down near the minimum gap.) We focus on the case where each qubit is coupled to its own independent bath of bosonic harmonic oscillators. The excitation rate from the instantaneous ground state to an excited state $|\varepsilon_i(s)\rangle$ at any point s is generically given by
$$R_i(s) = \gamma(-\Delta_i)\sum_\alpha |\langle \varepsilon_i(s)|A_\alpha|\varepsilon_0(s)\rangle|^2,$$
where $\Delta_i$ is the energy gap from the ground state to the excited state, $\gamma(-\Delta_i) = e^{-\beta\Delta_i}\gamma(\Delta_i)$, and β is the inverse temperature of the bosonic bath. γ(Δ_i) encodes the spectral density of the bosonic bath, the bath correlations, and the system-bath coupling strength g, and $A_\alpha$ is the system-operator part of the system-bath interaction. We take $A_\alpha = \sigma_\alpha^z$, corresponding to a 'dephasing' bath. For concreteness, we consider an Ohmic spectral density, such that $\gamma(\omega) = 2\pi g^2 \omega\, e^{-|\omega|/\omega_c}/(1 - e^{-\beta\omega})$ [58]. Of relevance to us is the total excitation rate during the anneal from the instantaneous ground state to the excited states outside the two-dimensional subspace. Since these N−2 states sit at energy 1 and the sum over the n qubit operators contributes a factor of order n, we expect this rate during the first half of the anneal to scale as $n g^2 e^{-\beta}$ for large n.
In conjunction with a total annealing time that scales as $\sqrt{N}$ (equation (9)), we can expect the open-system dynamics not to remain restricted to the two-dimensional subspace for a constant temperature and system-bath coupling. We note that if the system thermalizes to the Gibbs state of the instantaneous Hamiltonian, the probability of occupying the instantaneous ground state at any point in the anneal is given by
$$P_0(s) = \frac{e^{-\beta E_0(s)}}{e^{-\beta E_0(s)} + e^{-\beta E_1(s)} + (N-2)e^{-\beta}}.$$
For any fixed nonzero temperature, this gives a probability of being in the instantaneous ground state that scales as 1/N at any point along the interpolation. In order to suppress the excitations out of the two-dimensional subspace during the anneal, we can scale the inverse temperature β linearly with n for a constant g; this exponentially (in n) suppresses thermal excitations out of the ground state and ensures that the instantaneous thermal state always has a finite population on the instantaneous ground state. These results are consistent with the analysis of [40], although in that work the overall energy scale of the Hamiltonian E_0 was scaled up linearly, such that βE_0 ∼ n. Alternatively, the system-bath coupling g can be scaled down at least as fast as N^{−1/4} for a constant β in order to ensure that thermal excitations are suppressed during the entire evolution.
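The Gibbs-population argument can be checked with a few lines. The sketch below (ours) assumes the spectrum described earlier, i.e. two levels $E_{0,1}(s)$ in the subspace and N−2 levels at energy 1; the prefactor in the choice β = 2n is arbitrary but large enough for this spectrum:

```python
import numpy as np

def gibbs_ground_pop(n, beta, s=0.5):
    # Thermal (Gibbs) population of the instantaneous ground state, using the
    # QAUS spectrum: E_{0,1} = (1 -+ Delta)/2 and N-2 excited states at energy 1.
    N = 2.0 ** n
    delta = np.sqrt(1.0 - 4.0 * s * (1.0 - s) * (1.0 - 1.0 / N))
    e0, e1 = 0.5 * (1.0 - delta), 0.5 * (1.0 + delta)
    Z = np.exp(-beta * e0) + np.exp(-beta * e1) + (N - 2.0) * np.exp(-beta)
    return np.exp(-beta * e0) / Z

for n in (10, 14, 18):
    # fixed temperature: population ~ 1/N; beta growing linearly in n: stays finite
    print(n, gibbs_ground_pop(n, beta=2.0), gibbs_ground_pop(n, beta=2.0 * n))
```

At the minimum gap the two subspace levels become nearly degenerate, so the β ∝ n population tends to 1/2 rather than 1; what matters is that it remains bounded away from zero.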

Conclusions and discussion
We studied the robustness of the QAUS algorithm against various types of imperfections, from limited control over the adiabatic schedule, to Hamiltonian misspecification, to interaction with a decohering bath. Our findings can be summarized as follows. In the presence of finite perturbations to the Hamiltonian, the probability of hitting the marked state decreases exponentially with system size n if the interpolating schedule is not adjusted accordingly. This results in the loss of the quadratic speedup of the error-free algorithm. The scaling is similar when we consider a noise model that introduces Gaussian noise to the matrix elements, which no longer restricts the evolution to a two-dimensional subspace: the probability of hitting the marked state now decreases exponentially in N for a fixed standard deviation of the noise. Our results indicate that the standard deviation must be scaled down as $1/\sqrt{N}$, which can also be derived from the analysis of [59]. While neither of the above noise models was constructed with a physical mechanism in mind, they reproduce effects we expect to occur generically: generic noise breaks the symmetries of the Hamiltonian that restrict the evolution to a particular subspace, and it shifts the position of the minimum gap in a noise-instance-dependent way. Our results show that unless the interpolation schedule slows down precisely at the noise-shifted minimum gap, the quadratic speedup of the QAUS algorithm is lost.
We emphasize that even if the Hamiltonians H_b and H_p can be implemented precisely, the annealing schedule s(t) still needs to be controlled with exponential precision around the minimum gap, even if piecewise-linear interpolations are used. This need for growing precision must inevitably translate into a need for additional resources, without which the QAUS algorithm cannot retain its quadratic speedup. This is the signature of analog computing, and our results illustrate the need for alternative methods to combat the exponentially growing precision requirement.
Our work also has implications for algorithms that require access to continuous-query Hamiltonian oracles or that query other properties of the Hamiltonian (e.g. [59,60]), wherein a subroutine returning, e.g., the value of the gap is called. Our work suggests that the value of the gap needs to be returned with growing precision as a function of system size, and hence requires growing space resources that need to be accounted for.
We also revisited the thermal stability of the algorithm, studying it in the framework of the weak-coupling Markovian adiabatic master equation. Here, in the absence of specific fine-tuning, the presence of an exponential number of excited states at a fixed energy gap above the ground state already imposes serious constraints on the temperature and/or the system-bath coupling just to ensure that the evolution remains restricted to the two-dimensional subspace. The former needs to be scaled down at least inversely proportionally to the system size, or the latter must be scaled down at least exponentially with the system size.
We finally point out that our analysis above is an asymptotic one. Since any physical device has a finite, fixed size, one could imagine noise strengths sufficiently reduced to make the QAUS algorithm successful. Such a device may still have practical uses if the computational costs of the quantum and classical algorithms are well separated, and our results do not exclude this possibility.