Quantum control design by Lyapunov trajectory tracking for dipole and polarizability coupling

In this paper we analyze the Lyapunov trajectory tracking of the Schrödinger equation for a coupling control operator containing both a linear (dipole) and a quadratic (polarizability) term. We show numerically that the contribution of the quadratic part cannot be exploited by standard trajectory tracking tools and propose two improvements: discontinuous feedback and periodic (time-dependent) feedback. For both cases we present theoretical results and support them by numerical illustrations.


Introduction
We consider in this work the evolution of a quantum system with wave function Ψ(t) under the external influence of a laser field; the system satisfies the time-dependent Schrödinger equation (TDSE) with H(t) a Hermitian operator, and the control is realized by selecting a convenient laser intensity u(t). When the laser is off, H(t) is the internal Hamiltonian of the system, denoted H_0; when the laser is present, H(t) is the sum of H_0 and additional terms that describe the interaction of the system with the laser field. The first-order term is the dipole coupling [30], of the form u(t)H_1; in the limit of small laser intensities this term may be enough to describe the interaction adequately. However, situations exist where the dipole coupling does not have enough influence on the system to reach the control goal; the goal may become accessible only after adding a polarizability term u^2(t)H_2 to the expansion of H(t) (see e.g. [13,14] and related works); to make effective use of this term one needs higher laser intensities u(t).
The focus of the paper is on practical procedures to find suitable control fields u(t) for the Hamiltonian H(t) = H_0 + u(t)H_1 + u^2(t)H_2 by adapting feedback tracking control procedures to this setting. Here and in the following, H_0, H_1 and H_2 are n×n Hermitian matrices with complex coefficients and the control is the laser intensity u(t) ∈ ℝ.
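As a concrete illustration of this structure, the sketch below builds H(u) = H_0 + u H_1 + u^2 H_2 for a 3-level system; the matrices are hypothetical placeholders, not the paper's test systems (which appear later as (10) and (11)).

```python
import numpy as np

# Illustrative sketch: H(u) = H0 + u*H1 + u^2*H2 with H0, H1, H2 Hermitian
# n x n matrices and u the real laser intensity.  These matrices are
# placeholders for illustration, not the paper's test systems.
H0 = np.diag([0.0, 1.0, 2.0]).astype(complex)              # internal Hamiltonian
H1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], complex)  # dipole coupling
H2 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], complex)  # polarizability coupling

def hamiltonian(u):
    """Total Hamiltonian for a real laser intensity u."""
    return H0 + u * H1 + u**2 * H2

# H(u) is Hermitian for every real u, as required of a Hamiltonian.
assert np.allclose(hamiltonian(0.3), hamiltonian(0.3).conj().T)
```

Note that even when u is small, the u^2 weight of H_2 is what makes the polarizability contribution second order, which is why larger intensities are needed to exploit it.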
As far as the mere possibility of finding a control is concerned, we recall that the controllability of the finite-dimensional quantum system evolving under equation (2) can be studied via the general accessibility criteria [4,32] based on Lie brackets; more specific results can be found in [34]. Let us consider for a moment the system with Hamiltonian H_0 + u(t)H_1 + v(t)H_2, v(t) being a second control independent of u(t). It can be shown [34] that this system is controllable under the same circumstances as H_0 + u(t)H_1 + u^2(t)H_2, i.e. all target states that can be reached with Hamiltonian H_0 + u(t)H_1 + v(t)H_2 can also be reached with H_0 + u(t)H_1 + u^2(t)H_2 (although obviously the second Hamiltonian is a particular case of the first, for v(t) = u^2(t)). This rather counter-intuitive result suggests that u^2(t) can be considered, for the purpose of theoretical controllability, as independent of u(t); however, the fact that u^2(t) has a particular functional dependence on u(t) will play a role at the level of the numerical procedure used to find the control: in general, finding the control for H_0 + u(t)H_1 + u^2(t)H_2 is more difficult than for H_0 + u(t)H_1 + v(t)H_2. The characterization of controllability does not, in general, provide a simple and efficient way to generate open-loop trajectories. Optimal control techniques (cf. [23] and [30] and the references therein) provide a first set of methods. A different approach consists in using feedback to generate trajectories and open-loop steering controls [5,19,22]. More recent results can be found in [27] for decoupling techniques, in [3,15,17,23,31,35,36] for Lyapunov-based techniques, and in [1,7,28] for factorization techniques of the unitary group.
In order to study feedback control of systems with Hamiltonian H(t) = H_0 + u(t)H_1 + u^2(t)H_2, we adapt the analysis of [20,24], initially proposed for bilinear quantum systems H_0 + u(t)H_1. In that previous work it was shown that the success of the feedback control depends on whether there exists (non-zero) direct coupling, through H_1, between the target state and all other eigenstates. When H_1 has the same property for H(t) = H_0 + u(t)H_1 + u^2(t)H_2, we show that the same feedback formulas hold. However, as argued above, the polarizability term u^2(t)H_2 is added precisely when the dipole term u(t)H_1 is not enough to control the system; consequently, the most interesting question is what happens when some of the (direct) couplings are realized by H_2 instead of H_1. We show that the previous feedback formulas no longer hold, and we propose two alternatives. Our method can track any eigenstate trajectory of the Schrödinger equation (2) when the Hamiltonian includes a second-order coupling operator.
The balance of the paper is as follows: in Section 2 we introduce the main notations and the Lyapunov tracking feedback for a particular case. Section 3 presents two types of feedback, discontinuous and time-dependent (periodic) forcing, that can be applied to all types of second-order coupling operators. Both sections present theoretical results on convergence, illustrated by numerical simulations. Concluding remarks are presented in Section 4.

Dynamics and global phase
We consider an n-level quantum system evolving under equation (2). The wave function Ψ = (Ψ_j)_{j=1}^n is a vector in C^n satisfying Σ_{j=1}^n |Ψ_j|^2 = 1; thus it lives on the unit sphere S^{2n-1} of C^n. Physically, Ψ and e^{iθ(t)}Ψ describe the same physical state for any global phase θ(t) ∈ ℝ, i.e. two wave functions are identified when there exists θ(t) ∈ ℝ such that one equals e^{iθ(t)} times the other. To take this non-trivial geometry into account we add a second control ω corresponding to dθ/dt (see also [24]). We thus consider the following control system, where ω ∈ ℝ is a new control playing the role of a gauge degree of freedom; we can choose it arbitrarily without changing the physical quantities attached to Ψ. With this additional fictitious control ω, we will assume in the sequel that the state space is S^{2n-1} and that the dynamics given by (3) admits two independent controls u and ω.
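The gauge role of the global phase can be checked directly; the following minimal sketch (illustrative, with a random state) verifies that every population |Ψ_j|^2 is unchanged when Ψ is multiplied by e^{iθ}.

```python
import numpy as np

# Minimal check (illustrative) that Psi and e^{i*theta}*Psi carry the same
# physical information: every population |Psi_j|^2 is unchanged by a global
# phase, which is why omega can be chosen freely as a gauge control.
rng = np.random.default_rng(0)
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)                 # a state on the unit sphere S^{2n-1}

theta = 0.7
psi_phase = np.exp(1j * theta) * psi       # same physical state, new phase

assert np.allclose(np.abs(psi) ** 2, np.abs(psi_phase) ** 2)
assert abs(np.linalg.norm(psi_phase) - 1.0) < 1e-12
```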

Lyapunov control design
Take a reference trajectory t ↦ (Ψ_r(t), u_r(t), ω_r(t)), i.e., a smooth solution of (3). We introduce the time-varying function V(Ψ, t) = ⟨Ψ − Ψ_r | Ψ − Ψ_r⟩, where ⟨·|·⟩ denotes the Hermitian product. The function V is nonnegative for all t > 0 and all Ψ ∈ S^{2n-1} and vanishes when Ψ = Ψ_r; such a V is called a Lyapunov function. The derivative of V along the trajectories is given by (5), where Im denotes the imaginary part. For convenience we denote I_1 = Im⟨H_1Ψ(t)|Ψ_r⟩ and I_2 = Im⟨H_2Ψ(t)|Ψ_r⟩. Note that if, for example, one takes the feedback (6), with k and c strictly positive parameters, one gets dV/dt ≤ 0 and thus V is nonincreasing.
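These quantities can be sketched in code. This is an illustrative reconstruction, not the paper's displayed formulas (which are elided above): V is the squared distance to the reference state, I_1 and I_2 are the imaginary parts just defined, and the feedback form u = −k I_1/(1 + k I_2), ω = −λ − c Im⟨Ψ|Ψ_r⟩ is an assumed candidate consistent with the constraint k < 1/‖H_2‖ used below.

```python
import numpy as np

# Sketch of the Lyapunov quantities (a reconstruction; the displayed
# formulas are elided in the text above).  V is the squared distance to the
# reference state, I1 and I2 are the imaginary parts defined in the text,
# and the feedback below is an assumed candidate, not the paper's verbatim
# formula (6)/(8).
def lyapunov_V(psi, psi_r):
    """V(psi, t) = <psi - psi_r | psi - psi_r>."""
    d = psi - psi_r
    return np.real(np.vdot(d, d))

def couplings(psi, psi_r, H1, H2):
    """I1 = Im<H1 psi | psi_r>, I2 = Im<H2 psi | psi_r>."""
    I1 = np.imag(np.vdot(H1 @ psi, psi_r))
    I2 = np.imag(np.vdot(H2 @ psi, psi_r))
    return I1, I2

def feedback(psi, psi_r, H1, H2, lam, k=0.1, c=1.0):
    """Assumed static feedback: u = -k*I1/(1+k*I2), omega = -lam - c*Im<psi|psi_r>.

    The denominator stays positive because k < 1/||H2|| implies 1 + k*I2 > 0.
    """
    I1, I2 = couplings(psi, psi_r, H1, H2)
    u = -k * I1 / (1.0 + k * I2)
    omega = -lam - c * np.imag(np.vdot(psi, psi_r))
    return u, omega
```

At the target itself (Ψ = Ψ_r = φ an eigenvector of H_0), both I_1 and Im⟨Ψ|Ψ_r⟩ vanish, so the feedback switches off, as expected of an equilibrium.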
Let us focus on the important case where the reference trajectory corresponds to an equilibrium: u_r = 0, ω_r = −λ and Ψ_r = φ, where φ is an eigenvector of H_0 associated with the eigenvalue λ ∈ ℝ (H_0φ = λφ, ‖φ‖ = 1). We obtain (7), and then (6) becomes a static-state feedback (8). Denote by λ_j, j = 1, ..., n, the eigenvalues of the matrix H_0, and let φ_1, ..., φ_n be an orthonormal system of corresponding eigenvectors.
We say that H_0 has non-degenerate spectrum if λ_j ≠ λ_l for all j ≠ l, j, l = 1, ..., n. Although V being nonincreasing is a very important property, it is not enough to ensure that the target state φ is reached asymptotically. The following theoretical result for the feedback (8) explains when convergence to the target state holds.

Theorem 2.1 Consider the system (3) with Ψ ∈ S^{2n-1} and an eigenstate φ ∈ S^{2n-1} of H_0 associated with the eigenvalue λ. Take the static feedback (8) with c > 0, k < 1/‖H_2‖, and suppose that the spectrum of H_0 is non-degenerate. Then the two following propositions are true: The proof follows the same ideas as in [24] and can be found in [16].

Remark 2.2
The theorem above shows that tracking towards φ works when all eigenstates φ_2, ..., φ_n of H_0 other than φ are coupled to φ by H_1. However, we do not know what happens when some of the couplings are realized by H_2 instead (the theorem does not apply, but the system is still controllable, cf. [34]). We analyze such a case in Section 3. Note that, as pointed out in the Introduction, one uses the model H_0 + u(t)H_1 + u^2(t)H_2 precisely in the cases when the H_1 coupling is not enough to control the system (otherwise, taking low laser intensities u(t) makes H_0 + u(t)H_1 an effective Hamiltonian in place of H_0 + u(t)H_1 + u^2(t)H_2, and H_2 is no longer needed to model the system).

Examples and simulations
In order to solve (3) we use a numerical scheme of the form (9), where m is the index of the time step, Δt = T/M is the discretization time step, and M is the total number of time steps. Numerical simulations have been performed for a three-dimensional test system with H_0, H_1 and H_2 given by (10). In this case the wave function is Ψ = (Ψ_1, Ψ_2, Ψ_3)^T. We use the Lyapunov control (8) in order to reach the first eigenstate φ = (1, 0, 0), of energy λ = 0, at the final time T.
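As an illustration of such a scheme, the sketch below freezes the feedback at its current value on each step and advances the state with the exact unitary exp(−iΔt(H_0 + uH_1 + u^2H_2 + ωI)), computed by eigendecomposition since the matrix is Hermitian. The 3-level matrices, the gains, and the feedback formula are assumed placeholders, not the paper's system (10) or its exact scheme.

```python
import numpy as np

# Illustrative time-stepping sketch (assumptions flagged in the comments):
# the feedback is frozen on each step, and the state is advanced with the
# exact unitary exp(-i*dt*(H0 + u*H1 + u^2*H2 + omega*I)), computed via an
# eigendecomposition.  Matrices, gains and the feedback formula are
# placeholders, not the paper's system (10) or its exact scheme (9).
H0 = np.diag([0.0, 1.0, 2.0]).astype(complex)
H1 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], complex)
H2 = np.zeros((3, 3), complex)
phi = np.array([1.0, 0.0, 0.0], complex)       # target eigenstate, H0 phi = 0
lam, k, c = 0.0, 0.1, 1.0

def step(psi, dt):
    """One step: evaluate the feedback at psi, then apply the exact unitary."""
    I1 = np.imag(np.vdot(H1 @ psi, phi))
    I2 = np.imag(np.vdot(H2 @ psi, phi))
    u = -k * I1 / (1.0 + k * I2)               # assumed Lyapunov feedback
    omega = -lam - c * np.imag(np.vdot(psi, phi))
    H = H0 + u * H1 + u**2 * H2 + omega * np.eye(3)
    w, U = np.linalg.eigh(H)                   # exact propagator via eigh
    return U @ (np.exp(-1j * dt * w) * (U.conj().T @ psi))

psi = np.array([0.0, 1.0, 0.0], complex)       # initial state
dt, M = 0.02, 5000                             # Delta_t = T/M with T = 100
V0 = np.real(np.vdot(psi - phi, psi - phi))
for _ in range(M):
    psi = step(psi, dt)
V = np.real(np.vdot(psi - phi, psi - phi))
assert abs(np.linalg.norm(psi) - 1.0) < 1e-9   # each step is unitary
assert V < V0                                  # the Lyapunov distance decreased
```

Propagating with the exact per-step unitary (rather than, say, explicit Euler) preserves the norm of Ψ to machine precision, which keeps the state on S^{2n-1} as the theory requires.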

Remark 2.3
We note that the conditions of Theorem 2.1 are fulfilled. In Figure 1 we plot the evolution of V = V(Ψ) = ⟨Ψ − φ|Ψ − φ⟩ and of u for the system defined by (10) with feedback (8). We remark a fast convergence of the Lyapunov function V towards zero, which implies the convergence of Ψ towards φ = (1, 0, 0). The target is reached with high accuracy at T = 100. We consider another three-dimensional test system, with H_0, H_1 and H_2 given by (11). In this case the wave function is Ψ = (Ψ_1, Ψ_2, Ψ_3)^T. We use the previous Lyapunov control in order to reach the first eigenstate φ = (1, 0, 0), of energy λ = 0, at the final time T.
Simulations of Figure 2 start from the initial condition Ψ(0) = (0, 1/√2, 1/√2). The feedback reduces the distance to the first eigenstate but does not ensure convergence to φ = (1, 0, 0): the Lyapunov function V(Ψ) does not reach zero but stalls at V ≈ 10^{-0.1}. This is not due to a lack of controllability: the system is controllable, since the Lie algebra spanned by H_0/i, H_1/i and H_2/i coincides with u(3) (see [24]). As explained in Remark 2.2, this convergence deficiency comes from the fact that the operator H_1 couples φ only with the state φ_2. We plot the evolution of V(Ψ), u, I_1 and I_2 for the system defined by (11) with feedback (8) in Figures 2 and 3.

Figure 3. Time evolution of I_1 and I_2; system defined by (11) with feedback (8). We note that I_1 converges to zero; in contrast, I_2 does not.

Discontinuous and periodic feedback
In order to stabilize the system when the formulas (8) are ineffective, we propose two methods. The first is to use a special discontinuous feedback ([2,6,12,26]; see also [10, Section 11.4] and the references therein). The second approach uses periodic, time-dependent feedback ([8,9]; see also [10, Sections 11.2 and 12.4] and the references therein).

Discontinuous feedback
For the discontinuous feedback we consider the regions A, B and C. Note that A, B, C are open sets; the regions A and C, respectively B and C, overlap. For k_1, k_2, c, δ > 0 we define the control as follows: on A ∩ C, u(I_1, I_2) is either k_1I_2 (i.e. the formula for the set A) or −k_2I_1/(1 + k_2I_2) (i.e. the formula for the set C), chosen in such a way as to ensure continuity (with respect to time) of the feedback u(I_1, I_2). Remark that a discontinuity may appear when reaching the part of the boundary ∂C of C situated in the interior of A (there, the formula used is always u(I_1, I_2) = k_1I_2). The same considerations (continuity) apply to A ∩ B.
We define the propagator S_1Ψ_0 by solving the feedback equation with the convention that if the initial state Ψ_0 ∈ A ∩ C or Ψ_0 ∈ B ∩ C, we start with the feedback corresponding to C. We also define the propagator S_2Ψ_0 by solving the feedback equation with the convention that if Ψ_0 ∈ A ∩ C we start with the feedback corresponding to A, and if Ψ_0 ∈ B ∩ C we start with the feedback corresponding to B. In both cases we keep the feedback of the current region until its boundary is reached, and then switch to the feedback corresponding to the next overlapping region. Observe that neither S_1 nor S_2 defines a classical dynamical system, that is, the semigroup property is lost; only a weaker composition rule holds. The propagators S_1(t)Ψ_0 and S_2(t)Ψ_0 are solutions in the sense of Carathéodory of the feedback-controlled system and depend continuously on the initial data. These solutions are well defined locally, and they are globally defined on [0, ∞): this follows from the fact that the intervals of time between switching moments are bounded from below by a strictly positive constant.
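The switching rule above can be sketched as a small hysteresis automaton. The exact definitions of A, B, C are not reproduced here (they are elided in the text above); the sets below are illustrative stand-ins that keep the structure: three overlapping open regions, one control formula per region, and the current formula is kept until its region's boundary is reached, which gives continuity in time on the overlaps.

```python
import numpy as np

# Hysteresis sketch of the switched feedback (12).  The region definitions
# below are illustrative stand-ins, NOT the paper's sets A, B, C; only the
# switching structure is reproduced.
K1, K2, DELTA = 2.0, 0.4, 1e-2   # need K1 > 1 and K2 < 1/||H2||

def u_A(I1, I2):  # formula on A
    return K1 * I2

def u_B(I1, I2):  # formula on B: control switched off
    return 0.0

def u_C(I1, I2):  # formula on C (Lyapunov form)
    return -K2 * I1 / (1.0 + K2 * I2)

REGIONS = {
    "A": lambda I1, I2: I2 < -np.sqrt(DELTA) and I1 > -DELTA,  # stand-in
    "B": lambda I1, I2: abs(I1) < 2 * DELTA,                   # stand-in
    "C": lambda I1, I2: abs(I1) > DELTA,                       # stand-in
}
FORMULAS = {"A": u_A, "B": u_B, "C": u_C}

class SwitchedFeedback:
    """Keep the current region's formula until its boundary is crossed."""

    def __init__(self, mode):
        self.mode = mode  # initial region label ("C" for the propagator S1)

    def __call__(self, I1, I2):
        if not REGIONS[self.mode](I1, I2):        # left the current region:
            for label, inside in REGIONS.items():  # switch to one containing
                if inside(I1, I2):                 # the current point
                    self.mode = label
                    break
        return FORMULAS[self.mode](I1, I2)
```

Because the mode is part of the state, the map Ψ_0 ↦ Ψ(t) is not a semigroup: restarting from an overlap point may pick a different formula than the running trajectory uses, which is exactly why S_1 and S_2 are distinguished in the text.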
We now turn to the question of the negativity of dV /dt.
(i) When u(I_1, I_2) = −k_2I_1/(1 + k_2I_2), i.e. in the region C \ (A ∪ B) and possibly on A ∩ C and B ∩ C, we obtain the constraint k_2 < 1/‖H_2‖ (cf. also Remark 2.1); with this provision, dV/dt < 0 in this region. (ii) When u = 0, i.e. in the region B \ C and possibly on B ∩ C: dV/dt = 0. (iii) When u(I_1, I_2) = k_1I_2, i.e. in the region A \ C and possibly on A ∩ C: dV/dt = uI_1 + u^2I_2 = u(I_1 + uI_2) = k_1I_2(I_1 + k_1I_2^2). On the set A the term I_2 is less than −√δ, so k_1I_2 is strictly negative. From the definition of A, the term I_1 + k_1I_2^2 is bounded from below by −δ + k_1δ; thus for k_1 > 1 the term I_1 + k_1I_2^2 is strictly positive, and therefore dV/dt < 0.
Theorem 3.1 Consider the system (3) with Ψ ∈ S^{2n-1} and an eigenstate φ ∈ S^{2n-1} of H_0 associated with the eigenvalue λ. Take the feedback (12) with k_1 > 1, k_2 < 1/‖H_2‖ and c, δ > 0. If H_0 is not degenerate and if, for every k with φ_k ≠ φ, either ⟨φ_k|H_1φ⟩ ≠ 0 or ⟨φ_k|H_2φ⟩ ≠ 0, then the limit set of Ψ(t) reduces to a solution of the uncontrolled system, with |I_1| < δ and |I_2| ≤ C√δ, where the constant C depends only on H_0.
Proof of Theorem 3.1. Up to a shift on ω and H_0, we can assume that λ = 0. The trajectories corresponding to the propagator S_1 are relatively compact, so the limit points at infinity form a limit set Ω_δ which is compact and connected.
The limit set Ω_δ is a union of trajectories of equation (3), corresponding either to the propagator S_1 or to the propagator S_2; along these trajectories V is constant, so dV/dt = 0, where V is defined by (4). The equation dV/dt = 0 means that Im⟨Ψ|φ⟩ = 0.
Since u is defined by (12), it follows that the limit set Ω_δ consists in fact of trajectories of the uncontrolled system (16), with solutions of the form (17). For the same reasons as above we obtain that the limit set Ω_δ is characterized by (18); from relation (18) we have that on the limit set |I_1| < δ. Substituting (17) into (15), we obtain Im(b_1) = 0. We denote J_1 = {j | j ≠ 1, ⟨H_1φ_j|φ⟩ ≠ 0} and J_2 = {j | j ≠ 1, ⟨H_2φ_j|φ⟩ ≠ 0}. By the hypothesis, J_1 ∪ J_2 = {2, 3, ..., n}.
We substitute (17) into (7). Since Im(b_1) = 0, we obtain an expression for I_2 in which the coefficients B_j ≠ 0 if and only if b_j ≠ 0, j ∈ J_2. We define M = sup(I_2) and m = inf(I_2). There exists C > 0, independent of the B_j and of the θ_j, such that M ≤ −Cm. Since on the limit set Ω_δ we have I_2 ≥ −2√δ, it is easy to verify that |I_2| ≤ C√δ.
Remark 3.1 In order to make the conclusion of the theorem more precise, note that if Ψ_δ is a trajectory of (16) belonging to Ω_δ, then, as δ converges to zero, Ψ_δ → φ, provided the initial state is different from −φ. Accordingly, when I_1 and I_2 are small, V(Ψ) will also be small and the system is close to the target state. The practical question is then how small δ should be chosen. A way to circumvent this question is to consider not a constant value of δ but one that decreases over time; this way the procedure itself takes care of finding the right value of δ for a given precision. Numerical results (not shown here) confirm the interest of this approach.
Remark 3.2 An important ingredient of the proof is the identification of the limit sets of the evolution, which depends very much on the choice of the sets A, B, C and of the controls u. The general rationale behind these choices is to modify formula (8) minimally while obtaining good properties of Ω_δ.
It appears that this feedback is quite efficient for system (11). We present the evolution of I_1 and I_2 for the system defined by (11) with feedback (12) in Figure 5.

Figure 5. Time evolution of I_1 and I_2; system defined by (11) with feedback (12); |I_1| < δ and |I_2| ≤ C√δ.
We present the evolution of I_1 and I_2 for the system defined by (23) with feedback (12) in Figure 7.

Figure 7. Time evolution of I_1 and I_2; system defined by (23) with feedback (12); |I_1| < δ and |I_2| ≤ C√δ.

Examples for degenerate cases.
There are various situations where the non-degeneracy condition on the Hamiltonian H_0, present in Theorem 2.1 and Theorem 3.1, is not fulfilled. One such example is given below (see [18,25]): the internal Hamiltonian H_0 is degenerate since λ_3 = λ_4 = 0.095683, but the system can still be stabilized using the discontinuous feedback defined by (12). Here ‖H_2‖ = 1.
Simulations of Figure 8 describe the evolution of the Lyapunov function V(Ψ) and of the control u for the system defined by (24), starting from the initial state Ψ(t = 0). This positive result for a degenerate system shows that the hypotheses of the theoretical results are sufficient but not necessary; however, the approach may fail in some particular degenerate cases. This is consistent with the literature on quantum control, which shows that degenerate cases have a special structure (starting even with the controllability criteria).

Periodic feedback
Although the discontinuous feedback (12) gives satisfactory results in terms of control quality, its discontinuity motivates the search for additional procedures. To this end, we introduce in this section a periodic, time-dependent feedback u = u(t, Ψ) stabilizing (3) to the ground state φ. The idea is to use a highly oscillatory field component whose linear contribution averages to zero while its quadratic part averages to a constant, and then to compare the asymptotic behavior of the system with that of the averaged system. We recall that we are in the case where the reference trajectory corresponds to an equilibrium. We consider the time-dependent feedback (25); substituting (25) into (3), we obtain the system (26). The averaged system is given by (27) (see [21], pages 402-410). We identify the coefficients α and β such that the averaged system is asymptotically stable, using a Lyapunov technique to stabilize the averaged system (27) around the ground state φ. We take again the function V defined by (4), which is nonnegative for all Ψ ∈ S^{2n-1} and vanishes when Ψ = φ.
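The averaging mechanism can be checked numerically. The oscillatory form u(t) = α + β√2 sin(t/ε) below is an assumed stand-in for the paper's feedback (25): over one fast period, its linear contribution averages to α while its quadratic contribution averages to α^2 + β^2, so the H_2 term retains a constant effective contribution even though the oscillation itself averages out.

```python
import numpy as np

# Averaging check (illustrative; the oscillatory form below is an
# assumption, not the paper's exact formula (25)).
eps = 1e-3                                   # fast time scale
alpha, beta = 0.3, 0.5
t = np.linspace(0.0, 2 * np.pi * eps, 200_000, endpoint=False)  # one period
u = alpha + beta * np.sqrt(2.0) * np.sin(t / eps)

mean_u = u.mean()            # average of the linear contribution u
mean_u2 = (u**2).mean()      # average of the quadratic contribution u^2

assert abs(mean_u - alpha) < 1e-9                  # u averages to alpha
assert abs(mean_u2 - (alpha**2 + beta**2)) < 1e-9  # u^2 averages to a constant
```

This is the property the text exploits: in the averaged system the coefficient of H_1 is set by α and the coefficient of H_2 by the constant average of u^2, so both can be tuned independently through α and β.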
Proof of Theorem 3.2. Up to a shift on ω and H_0, we may assume that λ = 0. LaSalle's invariance principle (see, e.g., [21, Theorem 3.4, page 115]) says that the trajectories of the system (27) converge to the largest invariant set contained in {dV_av/dt = 0}, and the equation dV_av/dt = 0 implies α = β = 0. On the Ω-limit set of a trajectory, V is constant. Since the Ω-limit set is invariant under the flow generated by (27), it consists in fact of trajectories of the uncontrolled system (33), whose solutions have the form (34). We substitute (34) into (32) and obtain Im(b_1) = 0. Since along the trajectories in Ω we have I_1^av ≡ 0, we obtain b_j = 0 for j ∈ J_1. Using Im(b_1) = 0, we obtain an expression for I_2^av in which the coefficients B_j ≠ 0 if and only if b_j ≠ 0, j ∈ J_2. Since I_2^av ≥ 0 for all t, it follows that I_2^av ≡ 0. We thus have b_j = 0 for all j ≠ 1, and we obtain Ω ⊂ {φ, −φ}. This concludes the proof of Theorem 3.2.
Our next theorem shows that our time-varying feedback laws lead to a kind of "practical global asymptotic stability on S^{2n-1} \ {−φ}" if ε > 0 is small enough and if the assumptions of Theorem 3.2 hold (see also [11]). Using (47) and applying (46) with τ + T as the new value of τ, one gets the corresponding estimate; an easy induction on the integer m then shows that, more generally, the estimate holds for every nonnegative integer m, which implies (39).

Examples for degenerate cases.
We take the system defined by (24) and apply the periodic feedback (25) with α and β defined by (29). Simulations of Figure 12 describe the evolution of the Lyapunov function V(Ψ) and of the control u for the system defined by (24), starting from the initial state Ψ(t = 0) = (0, 1/√3, 1/√3, 1/√3). We present the evolution of I_1 and I_2 for the system defined by (24) with feedback (25) in Figure 13.

Conclusions
We focus in this paper on designing trajectory-tracking (feedback) procedures for a control system in which a polarizability term u^2(t)H_2 is present. We find that a straightforward application of previous results only works for systems that are controllable without the polarizability term. To find a control field that exploits the polarizability coupling we propose two different solutions: the first is a discontinuous feedback with memory terms, the other a time-dependent (periodic) forcing. In both cases we present related theoretical results and implement the techniques numerically on prototypical examples. The time-dependent feedback is seen to generally produce smoother controls.