Tailoring Spin Chain Dynamics for Fractional Revivals

The production of quantum states required for use in quantum protocols and technologies is studied by developing the tools to re-engineer a perfect state transfer spin chain so that a separable input excitation is output over multiple sites. We concentrate in particular on cases where the excitation is superposed over a small subset of the qubits on the spin chain, known as fractional revivals, demonstrating that spin chains are capable of producing a far greater range of fractional revivals than previously known, at high speed. We also provide a numerical technique for generating chains that produce arbitrary single-excitation states, such as the W-state.


Introduction
The task of quantum state synthesis lies at the heart of quantum technologies: before any quantum protocol can be run, be it a Bell test [1], quantum key distribution [2], quantum cloning [3][4][5], random number generation [6] or quantum computation [7], a non-trivial quantum resource, such as a Bell state, W-state or GHZ state, must be prepared. Since the availability of this resource gives the protocol its power, it is crucial to understand how these states may best be prepared, taking into account the locality constraints, control constraints and other restrictions imposed upon a particular experiment.
To that end, we embrace the perspective of perfect state transfer [8][9][10][11][12], wherein one engineers a simple, one-dimensional system so that it accomplishes a particular task without any further user interaction. The control required of the system is restricted to the manufacturing stage, which can be verified before use. These schemes had the unexpected benefit of being up to twice as fast as the equivalent consecutive sequences of swap gates specified by the gate model [13]. Once this limiting case of state transfer was established [9,11,12], a multitude of different schemes, specialised to different experimental constraints, have been derived [10,[14][15][16]. We aim to enable this diversification for the state synthesis task. The solutions for perfect state transfer already provide examples of state synthesis by generating entanglement, both bipartite [11] and that required for cluster states [17], while a beautiful transformation [12] of these coupling schemes permits superposition of the input state over the two extremal sites of the chain [12,[18][19][20].
Here, we take the existing constructions for perfect state transfer and re-engineer them to produce arbitrary (one-excitation) quantum states, concentrating on the particular case of so-called fractional revivals, wherein the amplitude of the final state is spread over a small number of sites on the chain. These admit the possibility of analysis (Sections 2 and 3), while we also provide a widely applicable numerical scheme (Section 5), permitting the creation of W-states and similar, along with a starting point that appears to work well for systems of up to about 50 qubits. This complements our recent results [21], which showed that almost any one-excitation quantum state can be created by these spin chains, with the fractional revivals being the particularly challenging cases. Moreover, in Section 6 we will show that our constructions are near-optimal, achieving the desired evolution in approximately half the time required by the solutions in [21], and are quite robust against imperfections (Section 7).

Setting
Consider a system of size N, with states |1⟩, . . . , |N⟩, and a system Hamiltonian
H = Σ_{n=1}^{N} B_n |n⟩⟨n| + Σ_{n=1}^{N−1} J_n (|n⟩⟨n+1| + |n+1⟩⟨n|).
This corresponds, for example, to N qubits in a line, coupled by a nearest-neighbour XX or Heisenberg Hamiltonian, restricted to the one-excitation subspace, although there are various other mappings [22], including free-fermion models such as the transverse Ising model. We denote the spectrum of H by {λ_n}, and the corresponding eigenvectors |λ_n⟩ have elements λ_{n,1} = ⟨1|λ_n⟩. Our aim is to specify the magnetic fields {B_n} and coupling strengths {J_n} such that the transformation
|1⟩ → |ψ_T⟩ = Σ_{n=1}^{N} α_n |n⟩   (1)
is realised in a time t_0, where the α_n are all assumed to be real. More precisely, we require that there exists some global phase φ such that
e^{−iHt_0} |1⟩ = e^{iφ} |ψ_T⟩.
Following [12], we take the inner product with an eigenvector |λ_n⟩, giving ⟨λ_n| e^{−iHt_0} |1⟩ = e^{iφ} ⟨λ_n|ψ_T⟩. In other words, λ_{n,1} = e^{iφ+iλ_n t_0} ⟨λ_n|ψ_T⟩ for all n. By imposing that the α_n are real, this can only be true if e^{iφ+iλ_n t_0} = ±1 and λ_{n,1} = ±⟨λ_n|ψ_T⟩, where the two equations choose the same ±1 factor for each n. These are necessary conditions for the state synthesis task.
As perfect state transfer is a special case of state synthesis, with |ψ_T⟩ = |N⟩, it is clear that these conditions are not always sufficient: in that case, it is required that λ_{n,1} = (−1)^{n+1} ⟨λ_n|ψ_T⟩ when the eigenvectors are ordered by decreasing eigenvalue.
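These conditions are easy to check numerically in the perfect state transfer special case. The following sketch (not taken from the paper; the chain is the standard analytic solution of [9], with J_n = √(n(N−n)), B_n = 0 and t_0 = π/2, and the helper names are ours) verifies that an excitation input at site 1 arrives entirely at site N:

```python
import numpy as np

def pst_couplings(N):
    """Couplings J_n = sqrt(n(N-n)) of the analytic perfect state
    transfer chain [9]; all magnetic fields B_n are zero."""
    n = np.arange(1, N)
    return np.sqrt(n * (N - n))

def evolve(J, t0, B=None):
    """Apply e^{-iHt0} to |1> for the tridiagonal one-excitation H."""
    N = len(J) + 1
    H = np.diag(np.zeros(N) if B is None else B)
    H += np.diag(J, 1) + np.diag(J, -1)
    evals, P = np.linalg.eigh(H)
    # Columns of P are eigenvectors; P[0] holds the elements <1|lambda_n>.
    return P @ (np.exp(-1j * evals * t0) * P[0])

N = 8
out = evolve(pst_couplings(N), np.pi / 2)
print(abs(out[-1]))  # ~1: all amplitude arrives on site N
```

One can also confirm the eigenvalue condition in this case: the spectrum is {N−1, N−3, . . . , −(N−1)}, so the phases e^{−iλ_n t_0} alternate between ±1 up to a global phase.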
As an aside, we mention that, in a similar fashion to perfect state transfer [23][24][25], arbitrarily accurate solutions to the state synthesis problem are far more common. If we can find a chain for which ⟨λ_n|ψ_T⟩ = ±λ_{n,1} for all n, and the ratios of differences of eigenvalues are all irrational, then we can always wait long enough for the different phases to approximate the pattern e^{−iλ_n t} = ⟨λ_n|ψ_T⟩/λ_{n,1}, and the analysis of the typical transfer time in [25] is similarly applicable here. However, unlike perfect state transfer (where a symmetry condition arises naturally), it is not a priori clear how to fix the conditions ⟨λ_n|ψ_T⟩ = ±λ_{n,1}. That is the main challenge that this work addresses. Our philosophy here, therefore, is to start from chains where we know this is true for a different target state (|N⟩), namely the perfect state transfer chains, and to learn how to modify them appropriately for the true target state, while focussing on perfect solutions at a well-defined time rather than arbitrarily accurate solutions at an ill-defined time. Moreover, since the satisfying spectra for perfect state synthesis are discrete, we will select a fixed spectrum, and work constantly with that. We will rely extensively on the Lanczos algorithm, outlined briefly in the next subsection, to propagate any alterations that we make to the entire chain, ensuring that the spectrum of the system is kept fixed at this discrete choice.

Lanczos Algorithm
We will make use of the standard Lanczos algorithm in our constructions [26]. This is an iterative algorithm which, at each step, takes as input the eigenvalues {λ_n}, the eigenvector elements at a particular site m, λ_{n,m}, and the coupling strength J_{m−1} (J_0 = 0 to get the algorithm started). First, it calculates the magnetic field
B_m = Σ_n λ_n λ_{n,m}²,
then uses that to give the next coupling strength, J_m:
J_m² = Σ_n (λ_n − B_m)² λ_{n,m}² − J_{m−1}².
Finally, we use the eigenvector relations
J_m λ_{n,m+1} = (λ_n − B_m) λ_{n,m} − J_{m−1} λ_{n,m−1}
to derive the next eigenvector elements, so that we have the required inputs for the next step of the algorithm. In this way, we can derive all the parameters of the Hamiltonian, and the eigenvectors, starting from a desired spectrum and the eigenvector amplitudes on the first site of the chain. This construction has been used extensively in the study of perfect state transfer, with the connection first being realised in [14]. Indeed, having established the necessary and sufficient conditions for perfect state transfer [12], all solutions are either found as analytic solutions, such as [9,27], or by fixing the spectrum and solving the Lanczos algorithm. The iteration is simply started by recognising that a perfect state transfer chain must have symmetric couplings, and so once a spectrum is fixed, that fixes the λ_{n,1}.
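The iteration described above fits in a few lines of code. The following is a minimal implementation of our own, for illustration; as a sanity check, it recovers the perfect state transfer chain of [9] from the linear spectrum and binomial end amplitudes:

```python
import numpy as np
from math import comb

def lanczos_chain(evals, v1):
    """Reconstruct the fields {B_m} and couplings {J_m} of the tridiagonal
    H with spectrum `evals` and first-site eigenvector elements `v1`, via
        B_m   = sum_n lambda_n lambda_{n,m}^2,
        J_m^2 = sum_n (lambda_n - B_m)^2 lambda_{n,m}^2 - J_{m-1}^2,
        J_m lambda_{n,m+1} = (lambda_n - B_m) lambda_{n,m} - J_{m-1} lambda_{n,m-1}.
    """
    lam = np.asarray(evals, float)
    N = len(lam)
    prev, cur = np.zeros(N), np.asarray(v1, float)
    Jprev = 0.0
    B, J = [], []
    for m in range(N):
        Bm = np.sum(lam * cur**2)
        B.append(Bm)
        if m == N - 1:
            break
        Jm = np.sqrt(np.sum((lam - Bm)**2 * cur**2) - Jprev**2)
        J.append(Jm)
        prev, cur = cur, ((lam - Bm) * cur - Jprev * prev) / Jm
        Jprev = Jm
    return np.array(B), np.array(J)

# Sanity check: the linear spectrum {N-1, N-3, ..., -(N-1)} with binomial
# weights lambda_{n,1}^2 = C(N-1, n-1)/2^(N-1) should recover the perfect
# state transfer chain J_n = sqrt(n(N-n)), B_n = 0 [9, 14].
N = 7
lam = [N - 1 - 2 * k for k in range(N)]
v1 = np.sqrt([comb(N - 1, k) / 2.0**(N - 1) for k in range(N)])
B, J = lanczos_chain(lam, v1)
print(np.round(J, 6))  # sqrt(n(N-n))
print(np.round(B, 6))  # zeros
```

Note that each step divides by J_m, so the reconstruction requires every J_m² to be strictly positive; supplying data that are inconsistent with any chain manifests itself as a vanishing or negative J_m².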

Modifying Perfect State Transfer
For the task specified by Eq. (1), we have established that the eigenvalues of H are tightly constrained: it must be that ⟨λ_n|ψ_T⟩ = ±λ_{n,1} and e^{−iλ_n t_0} = ±e^{iφ}. We are going to select a particular spectrum that satisfies these conditions. Since they are reminiscent of the necessary and sufficient conditions for perfect state transfer [12] (the ordered eigenvalues λ_n > λ_{n+1} fulfil e^{−iλ_n π/2} = (−1)^{n+1} with t_0 = π/2), we proceed by assuming that e^{−iλ_n π/2} = (−1)^{n+1}. Under this assumption, every satisfying choice of {λ_n} corresponds uniquely to a perfect state transfer Hamiltonian H̃, with fields B̃_n and coupling strengths J̃_n.
There is no reason that one has to start by assuming the connection to a perfect state transfer system. Any existing solution that satisfies the eigenvalue conditions e^{iφ+iλ_n t_0} = ±1 will do, at the cost of making the calculations slightly more complex. However, such starting points will naturally lend themselves to different state synthesis tasks, specifically being able to produce outcomes that are in some sense close to the state produced by H̃. Since such states will typically be superpositions of the single excitation across many sites, which we already know how to address via different insights [21], it makes most sense to concentrate on H̃ being a perfect state transfer Hamiltonian, and to attempt to modify it in order to create superpositions of states on just a small number of sites.
Example: For the case N = 5, we can select the spectrum to be {4, 2, 0, −2, −4}. There is a corresponding perfect state transfer Hamiltonian H̃, with fields B̃_n = 0 and couplings J̃_n = √(n(5−n)), i.e. (J̃_1, J̃_2, J̃_3, J̃_4) = (2, √6, √6, 2).

We will use these H̃ as the starting point for our solutions. They can be used to define a basis¹
|ṽ_m⟩ = Σ_{n=1}^{N} λ̃_{n,1} λ̃_{n,m} |n⟩,
and, correspondingly for the target system, |v_m⟩ = Σ_{n=1}^{N} λ_{n,1} λ_{n,m} |n⟩. The choice of these bases is one of mathematical convenience, and does not exactly correspond to anything physical. That said, they clearly encapsulate the information about the two systems in a very useful way, facilitating the calculation of functions such as ⟨1| f(H) |n⟩ simply by evaluating Σ_{k=1}^{N} f(λ_k) ⟨k|v_n⟩.

¹ To prove that the |ṽ_m⟩ form a basis, write the elements out as columns of a matrix. If the matrix has non-zero determinant, the vectors span the space. Taking out a common non-zero factor λ̃_{n,1} from each row n returns a matrix that is just the eigenvectors of H̃, which are all mutually orthogonal, and therefore has non-zero determinant.
This includes normalisation (f(H) = 1) and time evolution (f(H) = e^{−iHt}). Furthermore, one naive method for implementing the conditions that we want is to simultaneously solve the equations ⟨n| e^{−iHt_0} |1⟩ = e^{iφ} α_n for every site n, which is closely connected. Indeed, our method essentially reduces to this calculation, except that our formalism will lend itself to finding instances in which the calculations are vastly easier to perform.
By definition, one basis can be written in terms of the other. We use the coefficients β^{(n)}_m defined by
|v_n⟩ = Σ_{m=1}^{N} β^{(n)}_m |ṽ_m⟩,
which we often write as a table, m specifying the rows, and n the columns. Our aim is to find the vector |v_1⟩. This contains the elements λ_{n,1}² which, together with the target spectrum, are the inputs for the Lanczos algorithm, and will thus specify H. In practice, this will be expressed by the β^{(1)}_m and the (known) eigenvectors of H̃.
Many of the coefficients β^{(n)}_m are already fixed. Normalisation demands that ⟨1|n⟩ = δ_{n,1}, i.e. Σ_k ⟨k|v_n⟩ = δ_{n,1}, while the corresponding sums for the perfect state transfer chain satisfy Σ_k ⟨k|ṽ_m⟩ = δ_{m,1}. Thus, the top row of the β-table is
β^{(n)}_1 = δ_{n,1}.   (2)
Meanwhile, the evolution condition ⟨n| e^{−iHt_0} |1⟩ = e^{iφ} α_n can similarly be expressed as
Σ_k e^{−iλ_k t_0} ⟨k|v_n⟩ = e^{iφ} α_n.
Again, we expand the two bases in terms of each other, recalling that the evolution phase is alternately ±1: since Σ_k (−1)^{k+1} ⟨k|ṽ_m⟩ = Σ_k λ̃_{k,N} λ̃_{k,m} = δ_{m,N}, the bottom row of the β-table is simply the target amplitudes, β^{(n)}_N = α_n.
The entries of the β-table are related via
J_{n−1} β^{(n−1)}_m + B_n β^{(n)}_m + J_n β^{(n+1)}_m = J̃_{m−1} β^{(n)}_{m−1} + B̃_m β^{(n)}_m + J̃_m β^{(n)}_{m+1},   (4)
where terms with out-of-range indices vanish. A full derivation is given in the Appendix. This imposes a consistency condition for each element of the β-table. Applying it to the top-row condition of Eq. (2) reveals that β^{(n)}_m = 0 whenever m < n. Consequently, the right-hand column now reads |v_N⟩ = α_N |ṽ_N⟩. Contiguous sets of 0s on the bottom row can also be propagated upwards using these relations, as demonstrated in the following example. Resolving all these consistency conditions yields all the system parameters of the solution.
A further necessary condition on the state synthesis task is α_N ≠ 0. This is a result of applying Eq. (4) for the element (n, n−1) (n > 1), which implies
β^{(n)}_n = (J_{n−1}/J̃_{n−1}) β^{(n−1)}_{n−1}.
For a chain of length N, we require J_n ≠ 0 for all n = 1, . . . , N−1, discounting the possibility of producing two distinct chains. Thus, α_N = β^{(N)}_N ≠ 0; the synthesised state must have overlap with the end qubit.
Example: For N = 5, we aim to create an evolution |1⟩ → (|4⟩ + |5⟩)/√2 in a time t_0 = π/2 using the spectrum {4, 2, 0, −2, −4}. The β-table has the structure (rows m = 1, . . . , 5; columns n = 1, . . . , 5):

  1          0          0          0       0
  β^{(1)}_2  β^{(2)}_2  0          0       0
  0          β^{(2)}_3  β^{(3)}_3  0       0
  0          0          β^{(3)}_4  β^{(4)}_4  0
  0          0          0          1/√2    1/√2

This is complete except for evaluation of the consistency conditions (Eq. (4)) on the four diagonals (n, n+k) for all n and k = −1, 0, 1, 2. As we will see in Sec. 3, it is not necessary to evaluate all these values but, for the sake of exposition, we evaluate the consistency conditions on the diagonals k = −1, 2, which fix the remaining entries in terms of the couplings {J_n}. The remaining consistency conditions, on the diagonals k = 0, 1, then yield a set of simultaneous equations whose solution (eventually) fixes the relevant couplings.

Accepted in Quantum 2017-08-09.

Fractional Revivals
Generically, the values {β^{(1)}_m} are hard to derive in terms of the α_n. However, the purpose of selecting the basis |ṽ_m⟩ for decomposing |v_1⟩ is that certain special cases of particular interest are not as hard as the generic case. We now specialise to the evolution
|1⟩ → α_1 |1⟩ + α_N |N⟩,
i.e. a revival on the sites r = 1, N. In this case, since λ_{n,1} (−1)^{n+1} = ⟨ψ_T|λ_n⟩, we can multiply by λ_{n,1} and use that |v_N⟩ = α_N |ṽ_N⟩:
(−1)^{n+1} λ_{n,1}² = α_1 λ_{n,1}² + α_N² λ̃_{n,1} λ̃_{n,N}.
To evaluate this, note that
λ̃_{n,N} = (−1)^{n+1} λ̃_{n,1},
using the symmetry property of eigenvectors in perfect state transfer chains. Thus,
λ_{n,1}² (1 − α_1 (−1)^{n+1}) = α_N² λ̃_{n,1}².
The labels on the |ṽ_n⟩ can safely be rearranged to make them easier to work with, via the mirror operator S = Σ_{n=1}^{N} |n⟩⟨N+1−n|. Since α_N² = 1 − α_1², this relationship permits us to derive the desired β^{(1)}_m, allowing us to identify that
λ_{n,1}² = λ̃_{n,1} (λ̃_{n,1} + α_1 λ̃_{n,N}).
For the standard solution of spin chains [9], we have the analytic expression for the λ̃_{n,1} of
λ̃_{n,1} = √( (N−1 choose n−1) / 2^{N−1} ).
From here, the Lanczos algorithm proceeds as normal.
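Putting the pieces together, the r = 1, N construction can be checked numerically. The script below (an illustrative sketch, not the paper's code; the helper names are ours) builds λ_{n,1}² = λ̃_{n,1}(λ̃_{n,1} + α_1 λ̃_{n,N}), runs the Lanczos reconstruction, and verifies the revival at t_0 = π/2:

```python
import numpy as np
from math import comb

def lanczos_chain(lam, v1):
    """Tridiagonal (B, J) from a spectrum and first-site eigenvector elements."""
    N = len(lam)
    prev, cur = np.zeros(N), np.array(v1, float)
    Jprev, B, J = 0.0, [], []
    for m in range(N):
        Bm = np.sum(lam * cur**2)
        B.append(Bm)
        if m == N - 1:
            break
        Jm = np.sqrt(np.sum((lam - Bm)**2 * cur**2) - Jprev**2)
        J.append(Jm)
        prev, cur = cur, ((lam - Bm) * cur - Jprev * prev) / Jm
        Jprev = Jm
    return np.array(B), np.array(J)

# Fractional revival |1> -> a1|1> + aN|N> at t0 = pi/2.  The recipe:
# lambda_{n,1}^2 = lt1_n (lt1_n + a1 * ltN_n), where lt1_n are the
# binomial amplitudes of the perfect state transfer chain [9] and
# ltN_n = (-1)^(n+1) lt1_n by its mirror symmetry.
N, a1 = 5, 0.6
aN = np.sqrt(1 - a1**2)
lam = np.array([N - 1 - 2 * k for k in range(N)], float)   # {4, 2, 0, -2, -4}
lt1 = np.sqrt([comb(N - 1, k) / 2.0**(N - 1) for k in range(N)])
sign = (-1.0)**np.arange(N)                                # (-1)^(n+1), n = k+1
v1 = np.sqrt(lt1**2 * (1 + a1 * sign))
B, J = lanczos_chain(lam, v1)
H = np.diag(B) + np.diag(J, 1) + np.diag(J, -1)
evals, P = np.linalg.eigh(H)
out = P @ (np.exp(-1j * evals * np.pi / 2) * P[0])
target = np.zeros(N)
target[0], target[-1] = a1, aN
fid = abs(target @ out)
print(fid)  # ~1, up to a global phase
```

Varying α_1 trades amplitude between the two ends; α_1 = 0 recovers the original perfect state transfer chain.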
Eq. (8) is particularly compelling when r is large. For r = N − 1, we recall that most of the first column of the β-table has already been completed: there is only one undetermined value. Furthermore, this parameter can be straightforwardly evaluated using normalisation: since |v_N⟩ = α_N |ṽ_N⟩, the remaining coefficient is fixed by requiring that |v_1⟩ be correctly normalised. Substituting the definitions, and invoking the symmetry property for perfect state transfer, λ̃_{n,m} = (−1)^{n+1} λ̃_{n,N+1−m}, together with the relation J̃_1 λ̃_{n,2} + B̃_1 λ̃_{n,1} = λ_n λ̃_{n,1}, reduces this to an equation for the single unknown which always has a real solution.
As r decreases, the number of parameters increases correspondingly, rendering the solution more difficult to derive. However, if α_{N+1−2m} = 0 for all m, then the complexity can be reduced by assuming that β^{(n)}_m = 0 for all n + m even (which also imposes that B_n = B̃_n on all sites). Fig. 1 depicts the evolution of a 15-qubit system designed to implement such a revival. More generally, if the last k amplitudes (and α_1) are to be non-zero, then the first k coefficients β^{(1)}_m (k < N/2) determine the remainder via Eq. (4).

Transfer from Middle
Our constructions so far are good at creating perfect revivals that are localised at the ends of the chain, but not in the middle. However, we can make use of an observation that originates in [28,29]: the dynamics of a mirror-symmetric chain of 2M − 1 sites, starting from the central site, can be mapped onto the dynamics of an M-site chain starting from one end. For example, we designed a chain of 11 sites implementing the evolution |1⟩ → (|1⟩ + √2 |3⟩ + √2 |11⟩)/√5, and produced a corresponding H of 21 sites that achieves |11⟩ → (|1⟩ + |9⟩ + |11⟩ + |13⟩ + |21⟩)/√5. The further advantage is in speed; it will typically take about half the time to generate a particular state starting from the middle rather than one end, because the excitation only has half as far to go.

Numerical Approach
With a limited range of analytic solutions, we seek numerical techniques for generating a wider range of evolutions. A perturbative scheme for the {β^{(1)}_m}, as opposed to examining the Hamiltonian perturbation, has the advantage of being isospectral by construction, with correspondingly fewer parameters to determine. A first order perturbative expansion is easily applied to Eq. (4) provided one knows how the J_n and B_n are perturbed. These shifts may be derived from identities for the moments ⟨1| H^{n−2} |β^{(n−1)}⟩, where we write |β^{(n)}⟩ = Σ_m β^{(n)}_m |m⟩. Practically, this involves ensuring that δβ^{(n)}_{n−1} = δβ^{(n)}_{n−2} = 0 for all n. To tolerate the high degree of non-linearity in the system, a good initial guess is essential. The choice of the uniform spectrum λ_n = (N+1) − 2n and t_0 = π/2, while fixing β^{(1)}_N = α_1, yields an output state with a roughly uniform spread of amplitudes for chains of length up to N ≈ 50. When N = 21, this choice produces an output |ψ_out⟩ with ⟨ψ_out|ψ_T⟩ = 0.985, where |ψ_T⟩ is the W-state. This is close enough that a perturbative approach stands a good chance of converging. Fig. 4 depicts the evolution of one such system, for which ⟨ψ_out|ψ_T⟩ > 1 − 10^{−24}.
As before, if the target state has α_{N+1−2n} = 0 for all n, one can assume that β^{(n)}_m = 0 for all n + m even, reducing the number of parameters to be determined.
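The perturbative β-scheme itself relies on expressions not reproduced here, but its isospectral philosophy — fix the spectrum, vary only the first-site eigenvector amplitudes λ_{n,1}, and rebuild the chain by the Lanczos algorithm at every step — can be illustrated with a crude stochastic search in its place. Everything in this sketch (the parameterisation, the hill-climbing loop, the penalty for invalid chains) is our own illustrative choice, not the paper's method:

```python
import numpy as np
from math import comb

def chain_from_v1(lam, v1):
    """Lanczos reconstruction; returns None if no valid chain exists."""
    N = len(lam)
    prev, cur = np.zeros(N), np.array(v1, float)
    Jprev, B, J = 0.0, [], []
    for m in range(N):
        Bm = np.sum(lam * cur**2)
        B.append(Bm)
        if m == N - 1:
            break
        J2 = np.sum((lam - Bm)**2 * cur**2) - Jprev**2
        if J2 <= 1e-12:
            return None
        Jm = np.sqrt(J2)
        J.append(Jm)
        prev, cur = cur, ((lam - Bm) * cur - Jprev * prev) / Jm
        Jprev = Jm
    return np.diag(B) + np.diag(J, 1) + np.diag(J, -1)

def infidelity(x, lam, target, t0=np.pi / 2):
    """1 - |<target| e^{-iHt0} |1>|^2 for the chain built from amplitudes x."""
    H = chain_from_v1(lam, x / np.linalg.norm(x))
    if H is None:
        return 1.0  # penalise parameter sets with no corresponding chain
    evals, P = np.linalg.eigh(H)
    out = P @ (np.exp(-1j * evals * t0) * P[0])
    return 1 - abs(target @ out)**2

N = 7
lam = np.array([N - 1 - 2 * k for k in range(N)], float)  # fixed spectrum
target = np.ones(N) / np.sqrt(N)                          # W-state amplitudes
x = np.sqrt([comb(N - 1, k) for k in range(N)])           # PST initial guess
f = infidelity(x, lam, target)
rng = np.random.default_rng(0)
for _ in range(4000):  # isospectral hill climb over the lambda_{n,1}
    y = np.abs(x + 0.02 * rng.standard_normal(N))  # keep amplitudes positive
    fy = infidelity(y, lam, target)
    if fy < f:
        x, f = y, fy
print(f)  # never worse than the initial guess
```

This simple search will not generally match the fidelities quoted in the text; it merely demonstrates the key design point that the spectrum never changes while the couplings do.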

Speed of State Synthesis
As is the case for perfect state transfer [13], state synthesis is usually substantially quicker than via a gate decomposition that has the same locality constraints. For example, if J_max = max{J_n}, then the W-state synthesis example of Fig. 4 has J_max t_0 = 14.6, while a sequence of consecutive swaps of strength J_max creating the same transformation has J_max t_0 = 23.0. For other system sizes, the values are given for comparison in Fig. 6. We would now like to justify that our choice of spectrum leads to a near-optimal state synthesis time. Consider any spectrum that is compatible with state synthesis, i.e. λ_n = πm_n/t_0 where the m_n are distinct integers. Without loss of generality, one value is m_k = 0 (simply shifting all eigenvalues by the same amount only changes the Hamiltonian by an irrelevant identity matrix). We can use the Lanczos algorithm to construct a symmetric H̃ (meaning that B̃_{N+1−n} = B̃_n and J̃_n = J̃_{N−n}) with that spectrum and positive J̃_n. The k-th eigenvector, corresponding to the zero eigenvalue, has weight w = ⟨1|λ_k⟩² on the first site of the chain. From [26], the coupling strengths are related to the eigenvalues via
Π_{n=1}^{N−1} J_n = |λ_{k,1} λ_{k,N}| Π_{j≠k} |λ_j − λ_k|.
Since |v_N⟩ = α_N |ṽ_N⟩ implies λ_{k,1} λ_{k,N} = ±α_N w, this yields a simple inequality
(J_max t_0/π)^{N−1} ≥ w α_N Π_{j≠k} |m_j|.
The smallest possible product of integers is ((N−1)/2)!², and α_N is no larger than 1 (both corresponding to our chosen perfect state transfer chain), meaning
(J_max t_0/π)^{N−1} ≥ w α_N ((N−1)/2)!².
In the large N limit, Stirling's formula reveals that
J_max t_0 ≥ π ((N−1)/(2e)) (w α_N)^{1/(N−1)} ≈ π(N−1)/(2e),
independent of the target state, provided wα_N is not exponentially small.³ This is essentially a Lieb-Robinson bound for the system [30], but is tighter than the general bounds, which numerically appear to give J_max t_0 ≥ (N−1)/2 [31], by virtue of specialising to the time-invariant case and the specific form of the Hamiltonian. Nevertheless, the difference is astoundingly slim: if solutions can be tight to the bound, there is little speed to be gained in moving from a fixed local Hamiltonian to one with arbitrary local controls! Without a useful bound on the value of w, the bounds only apply to the large-size limit and cannot be adapted to the finite-size case. Instead, we compare these Lieb-Robinson style bounds to the maximum coupling strength involved in two systems: one that generates W-states, and one that performs perfect state transfer [9]. These are depicted in Fig. 6.

³ Of course, we select α_N to be a particular value, say 1/√N for the W-state. For our chosen spectrum, w = 2^{1−N} (N−1 choose (N−1)/2), and is therefore not exponentially small.

Figure 7: When the solution depicted in Fig. 4 for creating a W-state is perturbed, the output remains at high fidelity.
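For concreteness, the coupling cost J_max t_0 of the perfect state transfer chain of [9] (J_n = √(n(N−n)), t_0 = π/2 — an assumption of this sketch, rather than the specific chains of Fig. 6) can be compared directly against the numerically observed generic bound J_max t_0 ≥ (N−1)/2 of [31]:

```python
import numpy as np

def jmax_t0_pst(N):
    """J_max * t0 for the analytic perfect state transfer chain [9],
    with couplings J_n = sqrt(n(N-n)) and transfer time t0 = pi/2."""
    n = np.arange(1, N)
    return float(np.max(np.sqrt(n * (N - n))) * np.pi / 2)

for N in (11, 21, 51):
    print(N, round(jmax_t0_pst(N), 1), "vs generic bound", (N - 1) / 2)
```

The chain sits a modest constant factor above the generic bound, consistent with the near-optimality discussion in the text.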

Robustness
Inevitably, any real experiment is imperfect, from inaccuracies in the intended coupling strengths and magnetic fields through to dynamic errors. In this section, we do not address the full spectrum of possibilities; we merely aim to justify that the solutions presented so far have a basic level of robustness. To that end, we concentrate on manufacturing imperfections, shifting each coupling and magnetic field by a random small fraction. We compare the average arrival fidelity of the target state to the best out of 10000 realisations selected uniformly at random. While the average is what we might expect from the performance of any single instance, the advantage of prior manufacture of a fixed device is the facility to make several, test them, and choose the best. We examine two different, representative cases. The first is the W-state production of Fig. 4, depicted in Fig. 7. The second is an analytic revival on two sites, chosen because, from the evolution depicted in Fig. 1, one might anticipate a particular dependence upon intricate interferences, and therefore notable susceptibility to imperfections. Such concerns appear to be unfounded; see Fig. 8.
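The same Monte Carlo test is easy to reproduce. Since the couplings of the W-state chain of Fig. 4 are not listed here, the sketch below applies the procedure to the analytic perfect state transfer chain of [9] instead, with a random ~1% multiplicative error on every coupling (the error model, chain length and trial count are our assumptions, not the paper's):

```python
import numpy as np

def transfer_fidelity(J, t0=np.pi / 2):
    """|<N| e^{-iHt0} |1>| for the tridiagonal H with couplings J."""
    H = np.diag(J, 1) + np.diag(J, -1)
    evals, P = np.linalg.eigh(H)
    out = P @ (np.exp(-1j * evals * t0) * P[0])
    return abs(out[-1])

N = 10
n = np.arange(1, N)
J0 = np.sqrt(n * (N - n))  # perfect state transfer couplings [9]
rng = np.random.default_rng(7)
# Shift every coupling by a random ~1% manufacturing error, 1000 trials.
fids = [transfer_fidelity(J0 * (1 + 0.01 * rng.standard_normal(N - 1)))
        for _ in range(1000)]
print(f"average {np.mean(fids):.4f}, best of batch {np.max(fids):.6f}")
```

As in the text, the best of the batch is at least as good as the average, reflecting the value of manufacturing several devices, testing them, and selecting the best.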

Conclusions
Many different cases of fractional revivals can be reengineered from a perfect state transfer chain, meaning that a single excitation can be input at one end of a chain, and the natural dynamics evolve it into the desired superposition of that single excitation across a small number of sites, usually localised at either end of the chain. We have also described a perturbative technique that admits the possibility of moving beyond the analytically tractable cases and yet still produces useful coupling schemes for a variety of quantum state synthesis tasks. The solutions are robust against imperfections, and are near-optimal in speed for small system sizes. An important assumption is that all the amplitudes in the target state are real. Supporting calculations are provided via a Mathematica workbook [32]. Experimental prospects for this work are good. The basic technology of evanescently-coupled waveguides has already been applied to perfect state transfer [33]. Moreover, the tasks considered here only involve a single excitation, not a superposition of states, so one does not require the additional lengths of more recent experiments [34,35]. However, the efficacy of such a scheme would have to be compared to other methods such as [36].
We anticipate that a wide variety of other systems, with varying degrees of control, should also be capable of state synthesis, and exploring these is likely to be most beneficial to experiments. Another extremal case is a network of uniformly coupled spins. What network topologies permit the creation of states such as the W state (aside from the trivial star network)? The basic properties, such as the necessary conditions, derived here will also be relevant [37].