Exploring adiabatic quantum trajectories via optimal control

Adiabatic quantum computation employs a slow change of a time-dependent control function (or functions) to interpolate between an initial and final Hamiltonian, which helps to keep the system in the instantaneous ground state. When the evolution time is finite, the degree of adiabaticity (quantified in this work as the average ground-state population during evolution) depends on the particulars of a dynamic trajectory associated with a given set of control functions. We use quantum optimal control theory with a composite objective functional to numerically search for controls that achieve the target final state with a high fidelity while simultaneously maximizing the degree of adiabaticity. Exploring properties of optimal adiabatic trajectories in model systems elucidates the dynamic mechanisms that suppress unwanted excitations from the ground state. Specifically, we discover that the use of multiple control functions makes it possible to access a rich set of dynamic trajectories, some of which attain a significantly improved performance (in terms of both fidelity and adiabaticity) through the increase of the energy gap during most of the evolution time.


Introduction
Many important, computationally challenging problems in combinatorial optimization can be solved by determining the ground state of some quantum Hamiltonian. These Hamiltonians, however, often possess spin glass structure which severely restricts the performance of thermal annealing strategies [1]. Adiabatic quantum computation (AQC) [2,3], on the other hand, offers a route to producing the target ground state by slowly (adiabatically) changing the system's Hamiltonian H(t) from some initial one, whose ground state is easily prepared at t = 0, to the final one, whose ground state encodes the solution to the problem. The quantum adiabatic theorem [4,5] guarantees that, so long as the Hamiltonian changes sufficiently slowly and the system is not subjected to external perturbations, the final state of the system will be the ground state of the problem Hamiltonian.
By virtue of keeping the system in the instantaneous ground state of a slowly varying Hamiltonian, AQC acquires an inherent robustness to several sources of noise, such as dephasing and relaxation in the energy eigenbasis [6], which are known to plague the standard circuit model of quantum computation [7]. This observation has led to much speculation that AQC may be performed with far fewer resources than circuit model computing, for which fault tolerance demands exorbitant overheads [8]. However, AQC's inherent robustness does not guarantee full fault tolerance, since there still exist some types of noise (in particular, single-qubit noise) which can drive undesirable transitions out of the ground state [6,9-11]. Therefore, there has been much interest recently in developing error suppression and correction methods for AQC [12-16]. Unfortunately, a recent study found [15] that error suppression techniques based on quantum stabilizer codes [12-14] are unlikely to be sufficient for fault-tolerant AQC. Additionally, performing effective error correction in AQC requires implementing high-weight Hamiltonian terms [15].
A simple alternative approach to error suppression is to perform a computation faster, thus reducing the time over which errors can accumulate. However, how can faster evolution be reconciled with the requirement of adiabaticity? A variety of methods commonly known as shortcuts to adiabaticity (see [17] for a recent review) aim at speeding up the transition to the target state, but at the cost of not preserving the instantaneous ground-state population at intermediate times, thus forfeiting the inherent protection which is the fundamental feature of AQC. But does a decrease in evolution time necessarily have to result in a loss of population from the ground state? In this paper, we use quantum optimal control theory (QOCT) [18-22] to explore the trade-off between the objectives of decreasing the evolution time and minimizing the population loss from the instantaneous ground state. Specifically, we consider situations where a Hamiltonian can contain multiple control functions that are allowed to vary freely in time. Then, given a finite evolution time, QOCT is used to identify control functions that maximize an objective composed of a weighted sum of two terms: one is the target-state fidelity and the other is the average ground-state population during evolution. Previous works considered different random trajectories leading to the problem Hamiltonian [23,24], trajectories defined as geodesics on a manifold [25], and the use of optimal control for speeding up quantum adiabatic evolution [26-29]. However, the present work is the first systematic application of QOCT to the study of adiabatic quantum trajectories.
According to the adiabatic approximation [30], the particulars of a dynamic trajectory (associated with a given set of control functions) affect the ground-state population in two ways. First, the value of the gap between the instantaneous ground state and the rest of the spectrum depends on the controls. Second, the matrix element of ∂H/∂t depends on both the controls and their time derivatives. In general, a numerical search is required to find a control set that implements the desired Hamiltonian interpolation in a limited time while simultaneously maximizing the degree of adiabaticity. We perform these searches, using a gradient-based optimization algorithm, for two AQC problems with a Landau-Zener-type Hamiltonian. Optimization runs start from various initial control sets, including functions which are solutions to the adiabatic condition. We find that the quality of the obtained optimal control solution strongly depends on the choice of the initial set, indicating that the composite control objective has multiple local optima. The obtained results demonstrate that the richness of the dynamics accessible via the application of multiple controls makes it possible to increase the energy gap at intermediate times and thereby improve both fidelity and adiabaticity, as compared to the standard approach with a single interpolation function.
Since QOCT requires a full propagation of the system's evolution, its direct applicability is restricted to numerical studies of AQC models with a small number of qubits. However, exploring optimal AQC trajectories in model systems provides useful insights into the mechanisms that help to maintain a high degree of adiabaticity for limited evolution times. In addition, although we couch our discussion in the context of AQC, our main result - that increasing the number of control handles in the Hamiltonian makes it possible to preserve larger ground-state populations over shorter evolution times - is more widely applicable to all situations that employ adiabatic evolution to produce transformations of the ground state.

Adiabatic quantum evolution
We consider a finite-dimensional closed quantum system, whose state |ψ(t)⟩ satisfies the Schrödinger equation (ℏ = 1):

i (d/dt)|ψ(t)⟩ = H(t)|ψ(t)⟩,  (1)

where H(t) is the time-dependent Hamiltonian of the system. It is often convenient to use the time-evolution operator U(t) ≡ U(t, 0), which is defined by |ψ(t)⟩ = U(t)|ψ(0)⟩ and satisfies

i (d/dt)U(t) = H(t)U(t),  U(0) = I,  (2)

where I is the identity operator. For a system of n qubits, the Hilbert space dimension is N = 2^n, and U(t) ∈ U(N) (or U(t) ∈ SU(N) for a traceless Hamiltonian). A computation is performed by evolving the system over a finite time interval, t ∈ [0, T]. The evolution operator at the final time T is denoted as U_T ≡ U(T). The instantaneous eigenstates and eigenenergies of the Hamiltonian H(t) are defined by

H(t)|φ_m(t)⟩ = E_m(t)|φ_m(t)⟩,  m = 0, 1, …, N − 1,

with the energies in non-decreasing order, E_0(t) ≤ E_1(t) ≤ …. AQC is performed by initializing the system in the ground state |φ_0(0)⟩ of the initial Hamiltonian H_i = H(0); the ground state |φ_0(T)⟩ of the final Hamiltonian H_f = H(T) encodes the solution to the computational problem. In the standard formulation of AQC [2,3], the Hamiltonian has the form

H(t) = (1 − s)H_i + sH_f,

where the scaled time s = t/T ∈ [0, 1] plays the role of the interpolation parameter. It is well known [1,26] that the interpolation rate does not have to be constant. More generally, the Hamiltonian can be expressed as

H(t) = [1 − u(t)]H_i + u(t)H_f,

where u(t) is the interpolation function which satisfies the boundary conditions u(0) = 0, u(T) = 1. Furthermore, the controls on H_i and H_f do not need to be linearly dependent, and a more general form of the Hamiltonian is

H(t) = u_i(t)H_i + u_f(t)H_f,

where the two interpolation (control) functions satisfy the boundary conditions u_i(0) = 1, u_f(0) = 0, u_i(T) = 0, u_f(T) = 1. These functions can be non-linearly dependent or even independent of each other. Finally, the most general form of the Hamiltonian is a sum over a set of control functions {u_k(t)}, each multiplying a corresponding Hamiltonian term and each allowed to vary independently.
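The interpolation forms above are straightforward to express in code. The sketch below is our own illustration (function names and the choice of H_i, H_f for the check are assumptions); it verifies that both the linear and the two-control interpolations reproduce H_i and H_f at the boundaries of s ∈ [0, 1]:

```python
import numpy as np

# Pauli matrices used in the paper's one-qubit examples
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H_linear(s, Hi, Hf):
    """Standard AQC interpolation H = (1 - s) Hi + s Hf, with s = t/T in [0, 1]."""
    return (1.0 - s) * Hi + s * Hf

def H_two_controls(ui, uf, Hi, Hf):
    """More general form H = u_i Hi + u_f Hf with two independent controls."""
    return ui * Hi + uf * Hf

# At the endpoints the interpolation must reproduce Hi and Hf exactly
H_start = H_linear(0.0, sx, sz)  # equals Hi
H_end = H_linear(1.0, sx, sz)    # equals Hf
```

The two-control form reduces to the linear one when u_i = 1 − s and u_f = s, but it is strictly more general, which is the freedom exploited throughout the paper.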

Adiabatic approximation
If the system is initially prepared in the ground state |φ_0(0)⟩ and the Hamiltonian varies slowly, perturbation theory approximates the probability of the transition from |φ_0(0)⟩ to |φ_m(t)⟩ (m ≠ 0) at time t as [30]

P_{0→m}(t) = |⟨φ_m(t)|U(t)|φ_0(0)⟩|² ≲ 4 |⟨φ_m(t)|∂H/∂t|φ_0(t)⟩|² / [E_m(t) − E_0(t)]⁴.

Therefore, the system remains in the instantaneous ground state |φ_0(t)⟩ at all times provided that the rate of Hamiltonian change is sufficiently small, i.e. the condition

|⟨φ_m(t)|∂H/∂t|φ_0(t)⟩| / [E_m(t) − E_0(t)]² ≪ 1

is satisfied ∀m ≠ 0. This statement is the original formulation of the adiabatic theorem [4,5] (more rigorous formulations of the adiabatic theorem have been given in more recent works [31-33]; see also recent works discussing the validity of the adiabatic approximation and its bounds [34-39]). Under the reasonable assumption that the highest transition probability is to the first excited state, the requirement that P_{0→1}(t) ≤ 4ε², where ε ≪ 1 is a constant, corresponds to the adiabatic condition:

R(t) ≡ |⟨φ_1(t)|∂H/∂t|φ_0(t)⟩| / g²(t) ≤ ε,  (10)

where g(t) = E_1(t) − E_0(t) is the energy gap between the ground state and the first excited state. In the case where only one independent interpolation function is used, it is possible to solve for the unique u(t) that satisfies the equality in (10) with the boundary conditions u(0) = 0, u(T) = 1. Specifically, the equality in (10) specifies a first-order differential equation for u(t), whose solution depends on two parameters: the integration constant and the product εT; the initial and final conditions u(0) = 0 and u(T) = 1 determine the values of the integration constant and εT, respectively (see section 4 below for detailed examples). However, if multiple independent interpolation functions are used, there exists an infinite number of solutions that satisfy the equality in (10), since there is only one differential equation for multiple functions. For example, distinct solutions are obtained when different constraints are imposed on the functions u_i(t) and u_f(t), which eliminate one of them from (10) (e.g. such a constraint can be a fixed form for one of the two functions or a relationship between them). We will use some of these solutions as initial control sets to start numerical searches in QOCT (see sections 4 and 5).
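For the one-qubit model H = x σ_x + z σ_z used later in the paper, the ratio in (10) can be evaluated in closed form. The matrix-element expression below is our own evaluation for this two-level model; as a consistency check, it reproduces the constant value R = π/(8T) quoted in section 4.2 for the trigonometric control set:

```python
import numpy as np

def adiabatic_ratio(x, z, xdot, zdot):
    """R = |<phi_1| dH/dt |phi_0>| / g^2 for H = x*sigma_x + z*sigma_z.
    For this model the matrix element is |xdot*z - zdot*x| / Omega, with
    Omega = sqrt(x^2 + z^2) and gap g = 2*Omega, so
    R = |xdot*z - zdot*x| / (4*Omega^3)."""
    omega = np.sqrt(x**2 + z**2)
    return np.abs(xdot * z - zdot * x) / (4.0 * omega**3)

# Constant-gap set x = cos(pi*s/2), z = sin(pi*s/2); in physical time
# t = s*T the derivatives carry a factor 1/T.
T = 3.0
s = np.linspace(0.0, 1.0, 51)
x, z = np.cos(np.pi * s / 2), np.sin(np.pi * s / 2)
xdot = -(np.pi / (2 * T)) * np.sin(np.pi * s / 2)
zdot = (np.pi / (2 * T)) * np.cos(np.pi * s / 2)
R = adiabatic_ratio(x, z, xdot, zdot)  # constant, equal to pi/(8T)
```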

Quantum optimal control theory
The formulation of a quantum control problem necessarily includes the definition of a quantitative control objective (also called cost). The objective is a functional of the controls, J = J[u(·)], where u(·) = {u_k(t) | k = 1, …, K; t ∈ [0, T]} denotes the set of K control functions. A general class of objective functionals can be written as [18,22]

J[u(·)] = F(U_T) + ∫_0^T G(U(t), u(t)) dt,  (11)

where F is a continuously differentiable function on U(N), and G is a continuously differentiable function on U(N) × R^K. Usually, the first term in (11) (referred to as the final-time objective) represents the main physical goal, while the second term (referred to as the tracking objective) is used to incorporate various constraints on the dynamics and control fields. The optimal control problem may be stated as the search for the maximum of J[u(·)], subject to the dynamical constraint (2). For the sake of consistency, we will consider only maximization of cost functionals; any control problem can be easily reformulated from minimization to maximization and vice versa by changing the sign of the functional. We will denote an optimal control set, which maximizes the objective, as u*(·), so that J[u*(·)] = J*. There are several commonly used types of the final-time objective F(U_T), depending on the specific quantum control problem [18,19]. In particular, for state-transition control, which is relevant for AQC, the goal is to maximize the target-state fidelity defined as the probability of transition between the initial state |ψ_i⟩ and the final (target) state |ψ_f⟩, i.e.

F(U_T) = |⟨ψ_f|U_T|ψ_i⟩|².  (14)
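The fidelity (14) is a one-line computation; the sketch below (our own, with hypothetical states) checks it on trivial cases where the answer is known exactly:

```python
import numpy as np

def fidelity(psi_i, psi_f, U_T):
    """Target-state fidelity F = |<psi_f| U_T |psi_i>|^2 of (14).
    np.vdot conjugates its first argument, giving the bra <psi_f|."""
    return float(np.abs(np.vdot(psi_f, U_T @ psi_i)) ** 2)

up = np.array([1.0, 0.0], dtype=complex)
down = np.array([0.0, 1.0], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)  # bit-flip

F_same = fidelity(up, up, np.eye(2, dtype=complex))    # identity keeps |0>: 1.0
F_flip = fidelity(up, down, X)                         # X maps |0> to |1>: 1.0
F_miss = fidelity(up, down, np.eye(2, dtype=complex))  # orthogonal target: 0.0
```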
Formulating optimal control theory for adiabatic quantum computation
In order to formulate QOCT for AQC, we need to define an objective functional. The main physical goal is to drive the system from the initial state |φ_0(0)⟩ to the target state |φ_0(T)⟩, which corresponds to the fidelity (14) with |ψ_i⟩ = |φ_0(0)⟩ and |ψ_f⟩ = |φ_0(T)⟩. Additionally, the defining feature of the AQC approach is that evolution should be sufficiently slow to keep the system in the instantaneous ground state at all times. If we employ only the final-state objective F(U_T) of (14), the adiabaticity will not be guaranteed. Therefore, the total objective functional should be of the form (11), with the tracking objective (to be denoted as J_t) serving to ensure that the evolution is adiabatic. In general, there exist various possibilities for formulating the tracking objective for AQC. The approach that we focus on in this paper is to maximize the population of the instantaneous ground state averaged over the duration of evolution [40], i.e.

J_t = α P̄_0 = (α/T) ∫_0^T P_0(t) dt.  (15)

Here, P_0(t) = |⟨φ_0(t)|ψ(t)⟩|² is the instantaneous ground-state population, P̄_0 denotes the average ground-state population, and α > 0 is a weight factor that determines the relative importance of the final-time and tracking objectives (the value of α is selected by trial and error based on numerical optimization results). The choice of the tracking objective J_t of the form (15) corresponds to the goal of protecting the system against types of noise that affect excited states, including dephasing in the energy eigenbasis and leakage from the computational space. Two other possible choices of the tracking objective J_t are discussed in Appendix A. Various algorithms can be used to search for an optimal control solution [18-21]. In QOCT, most numerical optimizations employ gradient-based methods [41-53] (second-order methods use, in addition to the gradient, the Hessian matrix; see Appendix B).
In order to implement such an optimization, one needs to compute the functional derivative of the objective with respect to each control field, δJ/δu_k(t).
The functional derivative of the final-time objective F(U_T) of (14) is given by (using the chain rule)

δF/δu_k(t) = 2 Re{ ⟨ψ_f| [δU_T/δu_k(t)] |ψ_i⟩ ⟨ψ_i|U_T†|ψ_f⟩ }.  (18)

In what follows, we assume that the Hamiltonian is linear in the controls, i.e.

H(t) = A_0 + Σ_{k=1}^K u_k(t) A_k,  (19)

where A_0 is the field-free part of the Hamiltonian and {A_k} are the operators through which the system couples to the control fields. Since H(t) is Hermitian and {u_k(t)} are real, all operators {A_0, A_k} are Hermitian. If the Hamiltonian is of the form (19) and the control fields are continuous functions of time, the functional derivative of the evolution operator with respect to each control field can be expressed as [54,55]

δU(t)/δu_k(t′) = −i U(t) Ā_k(t′),  t′ ≤ t,  (20)

and, in particular, for the final-time evolution operator,

δU_T/δu_k(t) = −i U_T Ā_k(t),  (21)

where

Ā_k(t) = U†(t) A_k U(t)  (22)

is the kth coupling operator in the Heisenberg picture at time t. In numerical simulations, time is discretized, and control fields are approximated as piecewise-constant functions of time. In such a case, results (20) and (21) are approximations, albeit very good ones provided that the time step is sufficiently small. ‡ By substituting (21) into (18), we obtain

δF/δu_k(t) = 2 Im{ ⟨ψ_f|U_T Ā_k(t)|ψ_i⟩ ⟨ψ_i|U_T†|ψ_f⟩ }.  (23)

Using (23), we can compute the functional derivative of F without resorting to a finite-difference method. Next, we compute the functional derivative of the tracking objective J_t of (15): ‡ If needed, one can use a more accurate numerical method that expresses U(t) as a time-ordered product of one-step propagators to compute its derivative with respect to the field value at every time step [56].
The instantaneous ground state |φ_0(t′)⟩ depends only on the field values at time t′, i.e.

δ|φ_0(t′)⟩/δu_k(t) = δ(t − t′) |χ_k(t)⟩,  (25)

where the notation |χ_k(t)⟩ ≡ ∂|φ_0(t)⟩/∂u_k is used. We use (25) and this notation, along with the expression (20) for the functional derivative of the evolution operator, to transform (24) into an explicitly computable form. While no general expressions exist for |φ_0(t)⟩ and |χ_k(t)⟩, they can be derived for a particular Hamiltonian model. Specifically, consider a one-qubit Hamiltonian of the form

H(t) = x(t)σ_x + z(t)σ_z,  (28)

where x(t) and z(t) are the controls (real-valued functions of time), and σ_x and σ_z are the Pauli matrices. It is straightforward to obtain explicit expressions for |φ_0(t)⟩ and |χ_k(t)⟩ in terms of the controls and the quantity Ω(t) = √(x²(t) + z²(t)), which is half of the energy gap

g(t) = 2Ω(t) = 2√(x²(t) + z²(t))  (31)

between the ground and excited states. The use of the composite objective functional of the form J = F + J_t, along with a gradient-based optimization method, facilitates simple and numerically efficient searches. However, this approach has its drawbacks. It is known [57] that QOCT methods employing such composite costs are typically incapable of identifying the genuine Pareto front that quantifies the trade-off between the individual objectives. Therefore, searches aimed at maximizing J = F + J_t are likely to converge to solutions that underestimate the best achievable values for both the fidelity and the ground-state population. Such underperforming solutions are, in fact, local optima of J, which act as traps for gradient-based searches. If only the final-time objective is used (i.e. J = F), then, assuming satisfaction of a few reasonable physical conditions, the control landscape J = J[u(·)] is free of local traps [18], and therefore gradient-based methods are highly effective for such types of QOCT problems [45,46]. However, composite costs of the form J = F + J_t often do possess local optima, due to the impossibility of simultaneously maximizing both competing objectives. For optimizations involving multiple objectives, global search methods (e.g. evolutionary strategies or genetic algorithms) can be useful [57-64].
Still, the simplicity and efficiency of our approach make it a relevant starting point for exploring optimal AQC trajectories.
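The gradient machinery of (19)-(23) can be sketched numerically. In the code below (our own minimal implementation, with assumed variable names), propagation uses piecewise-constant controls and Ā_k is evaluated at the end of each time slice, which is accurate to O(Δt); for the single-control demo the coupling operator commutes with the propagator, so the analytic answers F = sin²(cT) and δF/δu_ℓ = Δt·sin(2cT) are reproduced exactly:

```python
import numpy as np

def expm_herm(H, dt):
    """exp(-i H dt) for a Hermitian matrix H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * dt)) @ V.conj().T

def propagate(u, A0, ops, dt):
    """Piecewise-constant propagation for H_l = A0 + sum_k u[k, l] A_k.
    Returns U(T) and the intermediate propagators U(t_l)."""
    U = np.eye(A0.shape[0], dtype=complex)
    Us = []
    for l in range(u.shape[1]):
        H = A0 + sum(u[k, l] * ops[k] for k in range(len(ops)))
        U = expm_herm(H, dt) @ U
        Us.append(U)
    return U, Us

def grad_F(u, A0, ops, dt, psi_i, psi_f):
    """Gradient of F = |<psi_f|U_T|psi_i>|^2 via (21)-(23):
    dF/du_k(t_l) ~ 2 dt Im{<psi_f|U_T Abar_k(t_l)|psi_i> <psi_i|U_T^+|psi_f>},
    with Abar_k(t) = U^+(t) A_k U(t) the Heisenberg-picture coupling operator."""
    UT, Us = propagate(u, A0, ops, dt)
    overlap = np.vdot(psi_i, UT.conj().T @ psi_f)  # <psi_i|U_T^+|psi_f>
    g = np.zeros(u.shape)
    for l, U in enumerate(Us):
        for k, A in enumerate(ops):
            Abar = U.conj().T @ A @ U
            g[k, l] = 2.0 * dt * np.imag(np.vdot(psi_f, UT @ (Abar @ psi_i)) * overlap)
    return g

# Demo: single control, A0 = 0, A1 = sigma_x, constant field value c
sx = np.array([[0, 1], [1, 0]], dtype=complex)
c, T, L = 0.3, 1.0, 100
dt = T / L
u = np.full((1, L), c)
psi0 = np.array([1, 0], dtype=complex)
psi1 = np.array([0, 1], dtype=complex)
A0 = np.zeros((2, 2), dtype=complex)
UT, _ = propagate(u, A0, [sx], dt)
g = grad_F(u, A0, [sx], dt, psi0, psi1)
```

This is the kind of gradient a D-MORPH-style search would consume at every iteration; the tracking-objective gradient adds analogous terms involving the instantaneous ground states.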

AQC problems and initial control sets
In order to numerically investigate the performance of the QOCT formalism presented in section 3, we consider two simple AQC problems (defined by a selection of H i and H f ), which both correspond to the one-qubit Hamiltonian (28). These two problems are presented in sections 4.1 and 4.2 below.
As mentioned above, the control landscape corresponding to the composite objective J = F + J_t is expected to possess multiple local optima, which will trap gradient-based searches starting from various initial controls. Therefore, selecting an initial control set that results in a good optimal solution (ideally, a globally optimal one) becomes a part of the optimal control problem. One approach that we explore is initializing the searches at controls that satisfy the equality in the adiabatic condition (10). For the one-qubit Hamiltonian (28), we use (29) to recast the equality in the adiabatic condition (10) into the form:

|ẋ(t)z(t) − ż(t)x(t)| = 4ε [x²(t) + z²(t)]^{3/2}.  (32)

If x(t) and z(t) are independent functions, then, in general, equation (32) has an infinite number of solutions. We sample a few interesting solutions of (32), obtained by selecting a constraint that eliminates one of the two functions from the equation.
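For the x(s) = 1 constraint used in problem (I), the equality condition reduces (in scaled time s = t/T) to dz/ds = λ(1 + z²)^{3/2} with λ = 4εT. A quick Euler integration (our own check; the grid size is arbitrary) confirms that λ = 1/√2 carries z from 0 to 1 over s ∈ [0, 1], in agreement with the closed-form solution z(s) = λs/√(1 − λ²s²):

```python
import numpy as np

# Forward-Euler integration of dz/ds = lam * (1 + z^2)^(3/2)
lam = 1.0 / np.sqrt(2.0)   # value fixed by the boundary condition z(1) = 1
L = 100000
ds = 1.0 / L
z = 0.0
for _ in range(L):
    z += ds * lam * (1.0 + z * z) ** 1.5

# Closed-form solution evaluated at s = 1
z_exact = lam / np.sqrt(1.0 - lam**2)  # equals 1 for lam = 1/sqrt(2)
```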
Problem (I):
First, consider an AQC problem with H_i = σ_x and H_f = σ_x + σ_z. The corresponding boundary conditions on the control functions are

x(0) = x(1) = 1,  (33a)
z(0) = 0, z(1) = 1.  (33b)

The baseline choice for the initial control set is the linear interpolation for both functions:

x(s) = 1, z(s) = s.  (34)

The energy gap corresponding to this control set is g(s) = 2√(1 + s²), i.e. it monotonically increases from g(0) = 2 to g(1) = 2√2. Another possibility is to initialize the search at a control set {x(t), z(t)} which is a solution of equation (32). Since an infinite number of solutions exist, we only sample a few choices corresponding to various constraints that eliminate one of the two control functions. For problem (I), we use constraints that fix x(t) to a particular functional form, which is then substituted into (32) to yield a differential equation for z(t). One example is the constraint that x(t) is constant: x(s) = 1, ∀s ∈ [0, 1]. With this constraint, (32) reduces to

dz/ds = λ(1 + z²)^{3/2},  (35)

where λ = 4εT. By solving equation (35), we obtain z(s) = (λs + C)/√(1 − (λs + C)²), where C is the integration constant. The boundary conditions (33b) determine the values of the parameters C and λ. The initial condition z(s = 0) = 0 gives C = 0. The final condition z(s = 1) = 1 takes the form λ/√(1 − λ²) = 1, which gives λ = 1/√2 (or, equivalently, εT = 1/√32). Using these values, we obtain the control set:

x(s) = 1, z(s) = s/√(2 − s²),  (36)

with R(s) = ε = (√32 T)⁻¹. The energy gap corresponding to this control set is

g(s) = 2√[2/(2 − s²)],  (37)

i.e. it also monotonically increases from g(0) = 2 to g(1) = 2√2. Another possible choice is a functional form which satisfies x(t) ≥ 1, with the purpose of increasing the gap that varies with time according to (31). For example, we select x(s) = 1 + sin(πs). With this constraint and the substitution u(s) = z(s)/x(s), (32) reduces to

du/ds = 4εT x(s)(1 + u²)^{3/2}.  (38)

By solving equation (38) with the boundary conditions (33b), we obtain the control set:

x(s) = 1 + sin(πs), z(s) = [1 + sin(πs)] v(s)/√(1 − v²(s)), v(s) = [πs + 1 − cos(πs)]/[√2(π + 2)],  (39)

with R(s) = ε = π[4√2(π + 2)T]⁻¹. The energy gap corresponding to this control set is

g(s) = 2[1 + sin(πs)]/√(1 − v²(s)),  (40)

which reaches its maximum g_max ≈ 4.34441 at s ≈ 0.59023.
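The gap profiles for problem (I) are easy to check numerically. In the sketch below, the z(s) profile of the sine-based set is our own rederivation from the adiabatic-equality condition with x(s) = 1 + sin(πs) held fixed; it reproduces the quoted maximum g_max ≈ 4.34441 at s ≈ 0.59023, and a sample point is cross-checked against exact diagonalization:

```python
import numpy as np

s = np.linspace(0.0, 1.0, 200001)

# Linear-interpolation set: x(s) = 1, z(s) = s
g_lin = 2.0 * np.sqrt(1.0 + s**2)

# Sine-based set: x(s) = 1 + sin(pi s); z(s) rederived from the equality
x = 1.0 + np.sin(np.pi * s)
v = (np.pi * s + 1.0 - np.cos(np.pi * s)) / (np.sqrt(2.0) * (np.pi + 2.0))
z = x * v / np.sqrt(1.0 - v**2)
g_sin = 2.0 * np.sqrt(x**2 + z**2)

s_max = s[np.argmax(g_sin)]
g_max = float(g_sin.max())

# Cross-check one point against exact diagonalization of H = x sx + z sz
sx_m = np.array([[0, 1], [1, 0]], dtype=complex)
sz_m = np.array([[1, 0], [0, -1]], dtype=complex)
w = np.linalg.eigvalsh(x[1000] * sx_m + z[1000] * sz_m)
g_eig = w[1] - w[0]   # should equal g_sin[1000]
```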

Problem (II):
Next, consider an AQC problem with H_i = σ_x and H_f = σ_z. The corresponding boundary conditions on the control functions are

x(0) = 1, x(1) = 0,  (41a)
z(0) = 0, z(1) = 1.  (41b)

The baseline choice for the initial control set is again the linear interpolation for both functions:

x(s) = 1 − s, z(s) = s.  (42)

The energy gap corresponding to this control set is g(s) = 2√(1 − 2s + 2s²), i.e. the gap is largest at the ends of the time interval, g(0) = g(1) = 2, and smallest in the middle, g(0.5) = √2. Once again, we consider the possibility to initialize the search at a control set {x(t), z(t)} which is a solution of equation (32). For problem (II), the constraint used to eliminate one of the two independent control functions from (32) is a functional relationship between x(t) and z(t). One example is the constraint that x(t) and z(t) are linearly related: z(t) = 1 − x(t). With this constraint, (32) reduces to

−dx/ds = 4εT [x² + (1 − x)²]^{3/2}.  (44)

By solving equation (44) with the boundary conditions (41b), we obtain the control set:

x(s) = (1/2){1 + (1 − 2s)/√[1 + 4s(1 − s)]}, z(s) = 1 − x(s),  (45)

with R(s) = ε = 1/(2T). The energy gap corresponding to this control set is

g(s) = 2√{1 − 2x(s)[1 − x(s)]},  (46)

i.e. the gap is largest at the ends of the time interval, g(0) = g(1) = 2, and smallest in the middle, g(0.5) = √2. Another possible choice is the constraint that x(t) and z(t) are quadratically related: x²(t) + z²(t) = 1. With this constraint, (32) reduces to

dθ/ds = 4εT,  (48)

for the angle θ(s) defined by x(s) = cos θ(s), z(s) = sin θ(s). By solving equation (48) with the boundary conditions (41b), we obtain the control set: x(s) = cos(πs/2), z(s) = sin(πs/2), with R(s) = ε = π/(8T). The corresponding energy gap is constant: g(s) = 2.
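Both problem (II) gap profiles are simple to verify; this short check (our own, with an arbitrary grid) confirms the linear set's minimum gap √2 at s = 0.5 and the constant gap g = 2 of the trigonometric set:

```python
import numpy as np

s = np.linspace(0.0, 1.0, 2001)

# Linear interpolation for problem (II): x = 1 - s, z = s
g_lin = 2.0 * np.sqrt((1.0 - s)**2 + s**2)   # = 2*sqrt(1 - 2s + 2s^2)

# Trigonometric set satisfying x^2 + z^2 = 1: the gap is constant
x, z = np.cos(np.pi * s / 2), np.sin(np.pi * s / 2)
g_trig = 2.0 * np.sqrt(x**2 + z**2)
```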

Numerical optimization results
In the numerical optimizations, we use a local, gradient-based algorithm known as D-MORPH (diffeomorphic modulation under observable-response-preserving homotopy), which was described in detail in [45,46]. Each control field is defined on a time mesh composed of L evenly spaced intervals, i.e. u(t) = {u_ℓ | t ∈ (t_{ℓ−1}, t_ℓ]}, ℓ = 1, …, L, where Δt = t_ℓ − t_{ℓ−1} = T/L (in all simulations reported here, we use Δt = 0.01). In this discrete representation, the control variables are the real field values {u_ℓ} at the L time intervals. At each step of the optimization algorithm after the initialization, each field value u_ℓ is allowed to vary freely and independently. Table 1. Optimization results for the AQC problem (I) with H_i = σ_x and H_f = σ_x + σ_z. Three choices of the initial control set are described in the text. In all optimizations, T = 2.
For problem (I), we perform optimizations starting from the three initial control sets described in section 4.1 above: (34), (36), and (39). For problem (II), we perform optimizations starting from the three initial control sets described in section 4.2 above: (42), (45), and (49). The search algorithm treats x(t) and z(t) as two independent control functions, and both are optimized. § The optimization results are shown in figure 1 and table 1 for problem (I) with T = 2, and in figure 2 and table 2 for problem (II) with T = 3. The figures show the time dependence of the control functions x(s) and z(s), the instantaneous ground-state population P_0(s), the energy gap g(s), and the adiabatic-condition ratio R(s), for optimizations with α = 0.1. The tables report values of the target-state fidelity F and the average ground-state population P̄_0, for optimizations with six values of α (ranging from 1 to 10⁻⁵). § We also examined an approach in which only z(t) is optimized, while x(t) is either fixed (for problem (I)) or computed using a functional relationship (for problem (II)), according to the same constraint as in the definition of the respective initial control set. We found that optimizations with two independent control functions consistently attain better solutions than those with a constraint. Table 2. Optimization results for the AQC problem (II) with H_i = σ_x and H_f = σ_z. Three choices of the initial control set are described in the text. In all optimizations, T = 3.
By inspecting the results obtained for both AQC problems, we observe that the quality of an optimal control solution strongly depends on the respective initial set (i.e., as we expected, the composite objective J = F + J_t has multiple local optima). The performance of a dynamic trajectory, in terms of the achieved target-state fidelity and ground-state population, strongly correlates with the size of the energy gap during evolution and with the rate of the Hamiltonian change in the regions where the gap is small.
For example, consider the optimization results for problem (I) with the initial control set (34), shown in the left-column panels of figure 1. We see that, for the optimal control set, the gap abruptly increases as soon as the evolution begins (panel (g)), resulting in a significant improvement of the ratio R (panel (j)), but this happens at the cost of an abrupt jump of the control values at the first time step (panel (a)), leading to an immediate drop in the ground-state population (panel (d)). A similar behavior is also observed for the optimal control set obtained by starting the search from the initial set (36), as shown in the middle-column panels of figure 1. However, a very different type of dynamics is found for the initial control set (39) and its respective optimal control set, as shown in the right-column panels of figure 1. In this case, both x(s) and z(s) grow from the beginning of the time evolution (panel (c)), which allows for substantially increased gap values (as compared to linear and nearly linear interpolations) at intermediate times (panel (i)) without the need to abruptly change control values. The optimal control functions are only slightly different from the initial ones (panel (c)), and the respective gaps are almost identical (panel (i)), but the optimal control set utilizes a slower Hamiltonian change, as indicated by the decreased ratio R (panel (l)), in the regions near s = 0 and s = 1, i.e. where the gap is smaller. This decrease in R translates into higher ground-state population and fidelity values (panel (f) and table 1).
For problem (II), initial control sets (42) and (45) produce a symmetric energy gap with a minimum in the middle: g(0.5) = √ 2. The corresponding optimization results (shown in left and middle columns, respectively, of figure 2) indicate that the gap can be increased (panels (g) and (h)), but at the cost of a jump change in the value of z(s) at the first time step (panels (a) and (b)) and an associated drop in the ground-state population (panels (d) and (e)). However, once again, it is possible to find a qualitatively different dynamic regime. Specifically, the initial control set (49) has x(s) = cos(πs/2) and z(s) = sin(πs/2), which are relatively "flat" near s = 0 and s = 1, respectively (panel (c)), yielding a constant gap, g(s) = 2, at all times (panel (i)). Furthermore, the optimization results reveal that by extending the regions in which x(s) and z(s) are "flat" near s = 0 and s = 1, respectively (panel (c)), it is possible to achieve g(s) ≥ 2 at all times, with a maximum gap value near the middle (panel (i)). This is almost an inverse of the standard gap behavior associated with linear and nearly linear interpolations. Data shown in panels (i) and (l) indicate that the improved performance of the optimal control set, as compared to the initial one, is achieved through a combination of a gap increase at intermediate times and a slower Hamiltonian change in the region near s = 1.
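The qualitative picture for problem (II) can be reproduced with a minimal simulation of the initial (unoptimized) control sets alone; the grid size and the piecewise-constant discretization below are our own choices, and no optimization is performed. The constant-gap trigonometric set already yields a high fidelity and average ground-state population at T = 3:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def expm_herm(H, dt):
    """exp(-i H dt) for a Hermitian matrix H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * dt)) @ V.conj().T

def simulate(xs, zs, T):
    """Propagate H(t) = x(t) sx + z(t) sz with piecewise-constant controls,
    starting from the ground state of H(0). Returns the fidelity with the
    ground state of H(T) and the average instantaneous ground-state population."""
    L = len(xs)
    dt = T / L
    psi = np.linalg.eigh(xs[0] * sx + zs[0] * sz)[1][:, 0]
    pops = []
    for l in range(L):
        H = xs[l] * sx + zs[l] * sz
        phi0 = np.linalg.eigh(H)[1][:, 0]   # instantaneous ground state
        pops.append(abs(np.vdot(phi0, psi)) ** 2)
        psi = expm_herm(H, dt) @ psi
    phi0_T = np.linalg.eigh(xs[-1] * sx + zs[-1] * sz)[1][:, 0]
    return abs(np.vdot(phi0_T, psi)) ** 2, float(np.mean(pops))

T, L = 3.0, 300
s = (np.arange(L) + 0.5) / L   # midpoint sampling of scaled time
F_trig, P0_trig = simulate(np.cos(np.pi * s / 2), np.sin(np.pi * s / 2), T)
F_lin, P0_lin = simulate(1.0 - s, s, T)
```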
Our study demonstrates that the use of two control functions in the Hamiltonian (28) makes it possible to access a broad range of dynamic trajectories. Even with only three examples of initial control sets considered here for each AQC problem, we found a significant variation of the time-dependent gap size, the adiabatic-condition ratio, and the resulting performance characteristics between different trajectories. The adiabatic condition (10) turned out to be a very useful tool that complements QOCT by providing various choices of initial control sets to seed the optimization. This strategy helped us to identify, for each AQC problem, a high-quality optimal control set whose performance is associated with a sizable increase (as compared to linear and nearly linear interpolations) of the energy gap during most of the evolution time. In each considered example, the optimization improved the fidelity and adiabaticity in comparison to the initial guess. While a greater relative improvement is achieved through the optimization for poorer-quality initial controls, even the best initial set benefits from the application of QOCT, especially in terms of the increased fidelity.
Results obtained for different values of the weight parameter α (see tables 1 and 2) reveal the competition between the final-time and tracking objectives: while fidelity values extremely close to 1 are found when α is very small (10⁻³ and smaller), the highest value of the average ground-state population is typically achieved for α = 0.1. Note that a further increase of the weight parameter to α = 1 does not improve the value of P̄_0 for problem (I) and even decreases it for problem (II). This behavior is related to the fact that local optima of the composite objective J = F + αP̄_0 preclude the gradient-based searches from reaching the fidelity-adiabaticity Pareto front, and this effect may become more severe when the weight of the tracking objective is too large.
Overall, the findings of this work show that even for a simple one-qubit model there exists an extensive variety of adiabatic quantum trajectories associated with different control sets, and a proper search can discover solutions with a substantially improved performance.

Conclusions and future directions
The main finding of this work is that the use of multiple control functions provides access to a very rich set of dynamic trajectories, which can be explored through a strategy that combines the adiabatic condition and QOCT. This exploration helps to identify control sets that exhibit a superior performance in terms of a substantial improvement of the target-state fidelity accompanied by an increase of the average ground-state population, under a limitation on the evolution time T . However, the use of a composite objective of the form J = F + J t and a gradient-based optimization method, while simple and numerically efficient, gives only a snapshot of accessible dynamics. More insight into the attainable control performance can be gained by applying global search methods which, while being more numerically expensive, are better suited for quantifying the trade-off between multiple control objectives [57][58][59][60][61][62][63][64]. Although such numerical studies have to be restricted to few-qubit systems, they are likely to significantly expand our understanding of the dynamic mechanisms that help to minimize the loss of adiabaticity for limited evolution times. Furthermore, global optimization methods are applicable directly in the laboratory through the use of the adaptive feedback control approach [18,19,[65][66][67][68].
It is also possible to attempt enforcing adiabaticity by imposing a cost on the control functions only. Specifically, adiabaticity would imply that the controls change slowly. The corresponding tracking objective should serve to minimize time derivatives of the controls, averaged over the duration of evolution and the set of all control functions: Unfortunately, there is no obvious way to compute the functional derivative of the tracking objective J t of (A.4) with respect to the controls. Additional numerical simulations will be needed to determine advantages and disadvantages of these and other alternative costs.
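A minimal discrete sketch of such a derivative-penalizing cost is given below; the exact normalization of (A.4) is not reproduced here, so the finite-difference form and its scaling are our assumptions. It simply illustrates that a slowly varying control incurs a far smaller penalty than a rapidly varying one:

```python
import numpy as np

def derivative_cost(u, dt):
    """Mean-square time derivative of the controls, averaged over the
    duration and over the control functions (finite-difference form).
    u has shape (K, L): K controls sampled at L time points."""
    du = np.diff(u, axis=1) / dt
    return float(np.mean(du**2))

t = np.linspace(0.0, 1.0, 101)
slow = np.vstack([t])                 # linear ramp, derivative 1
fast = np.vstack([np.sin(20.0 * t)])  # derivative up to 20
c_slow = derivative_cost(slow, t[1] - t[0])
c_fast = derivative_cost(fast, t[1] - t[0])
```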
Substituting (21) and (B.3) into (B.2) and evaluating at the optimum with the assumption (B.4), we obtain: (B.5) Note that the expression (B.5) is symmetric in Ā_k(t) and Ā_j(t′) and hence holds for any order of t and t′.