Generating Quantum States through Spin Chain Dynamics

Spin chains can realise perfect quantum state transfer between the two ends via judicious choice of coupling strengths. In this paper, we study what other states can be created by engineering a spin chain. We conclude that, up to local phases, all single excitation quantum states with support on every site of the chain can be created. We pay particular attention to the generation of W-states that are superposed over every site of the chain.


INTRODUCTION
Spin chains are good models for a large variety of one-dimensional systems that exhibit quantum effects. For the past decade, these systems have been intensively studied from the perspective of quantum information, with the aim of understanding how these chains can be used to implement the tasks that we specify. The typical case examined is perfect state transfer (see, for example, [1-5]): particular choices of the coupling strengths and magnetic fields ensure that a single qubit state |ψ⟩ on the first spin at time t = 0 arrives perfectly at the last spin at the state transfer time, t_0. The same solutions generate entanglement, both bipartite [6] and that required for cluster states [7]. Simple modification of these coupling schemes permits fractional revivals [5, 8-10], superposing the input state over the two extremal sites of the chain. Meanwhile, modification of the form of the Hamiltonian has demonstrated that other tasks can be achieved, such as the generation of a GHZ state [11].
In this paper, we address the question of what other functions a spin chain can realise. In answer, we demonstrate that a wide range of one-excitation states can be generated by evolving an excitation initially located on a single site, including the important case of the W state of N qubits. The solution is related to the study of inverse eigenvalue and inverse eigenmode problems [12]. However, the variant that we require is, to our knowledge, unstudied. Although we prove that a solution to this variant cannot be guaranteed, we provide a protocol that yields strategies that are sufficient for our needs.

Setting
The Hamiltonian of a spin chain of length N is

H = (1/2) Σ_{n=1}^{N} B_n Z_n + (1/2) Σ_{n=1}^{N-1} J_n (X_n X_{n+1} + Y_n Y_{n+1}),

where X_n denotes the Pauli X matrix applied to site n (and 1 elsewhere). It is excitation preserving, [H, Σ_{n=1}^{N} Z_n] = 0, meaning that, for example, any one-excitation state (a state of N − 1 |0⟩s and one |1⟩) can only be evolved into another one-excitation state. Indeed, the Hamiltonian when restricted to the first excitation subspace is described as

H_1 = Σ_{n=1}^{N} B_n |n⟩⟨n| + Σ_{n=1}^{N-1} J_n (|n⟩⟨n+1| + |n+1⟩⟨n|),

where |n⟩ := |0⟩^⊗(n−1) |1⟩ |0⟩^⊗(N−n). This is a real, symmetric, tridiagonal matrix in which each of the elements can be independently specified, making it ideal for the engineering tasks that we intend to study. Moreover, via the Jordan-Wigner transformation, one can readily describe the evolution of higher excitation states in terms of the evolution of single excitation states.
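As a concrete illustration, the following minimal numpy sketch builds H_1 for the perfect-state-transfer couplings J_n = √(n(N−n)) with zero fields (the standard choice from the literature cited above; with this normalisation the transfer time is t_0 = π/2) and checks the transfer numerically. The helper names are our own.

```python
import numpy as np

def h1(J, B):
    """Tridiagonal first-excitation-subspace Hamiltonian:
    fields B on the diagonal, couplings J on the off-diagonals."""
    return np.diag(B) + np.diag(J, 1) + np.diag(J, -1)

def evolve(H, psi0, t):
    """Evolve psi0 under exp(-iHt) via the eigendecomposition of H."""
    vals, vecs = np.linalg.eigh(H)
    return vecs @ (np.exp(-1j * vals * t) * (vecs.T @ psi0))

# Perfect-state-transfer couplings J_n = sqrt(n(N-n)), zero fields.
N = 8
n = np.arange(1, N)
H = h1(np.sqrt(n * (N - n)), np.zeros(N))

psi0 = np.zeros(N); psi0[0] = 1.0      # excitation on site 1
psi = evolve(H, psi0, np.pi / 2)       # evolve to the transfer time
print(abs(psi[-1]))                    # magnitude on site N: 1, up to rounding
```

The evolution stays inside the one-excitation subspace throughout, so the N-dimensional restriction H_1 suffices.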
We will study the following problem:

Problem 1. Given a target state |ψ_T⟩ = Σ_{n=1}^{N} α_n |n⟩, find coupling strengths J_n, magnetic fields B_n and a time t_0 such that e^{−iH_1 t_0} |k⟩ = |ψ_T⟩ for an excitation initially localised on a single site k.

We do this by showing it is sufficient to ensure that the Hamiltonian H_1 has eigenvalues which satisfy a particular property, and by fixing one of the eigenvectors. We show that, under this particular mapping of the problem, there are instances where there is no solution, and instances where the solution is not unique. However, in the practical sense of answering Problem 1, we provide a technique that gives arbitrarily high quality results for all but a very small category of possible cases (specified by a property on the α_n).

THE HYBRID INVERSE EIGENVALUE/MODE PROBLEM
Problem 2. Given a spectrum Λ of N distinct values and a normalised target vector |η⟩ = Σ_n η_n |n⟩, find a real, symmetric, tridiagonal matrix H_1 with spectrum Λ for which |η⟩ is the eigenvector of a specified eigenvalue η ∈ Λ.

Lemma 1. Problem 1 can be solved for a target state |α⟩ = Σ_n α_n |n⟩ and initial site k provided the vector |η⟩ = (|α⟩ + |k⟩)/(2η_k), with η_k = √((1 + α_k)/2), has a solution to Problem 2 with a spectrum in which e^{−it_0 η} = 1 and e^{−it_0 λ} = −1 for all λ ∈ Λ \ {η}. Under these conditions, e^{−iH_1 t_0} = 2|η⟩⟨η| − 1, so that |k⟩ evolves to 2η_k|η⟩ − |k⟩ = |α⟩.
Note that this result is only sufficient. It is certainly not necessary, as it does not include the case of perfect state transfer or fractional revivals (for N > 3), because these cases have k = 1 and η_2 = … = η_{N−1} = 0.
To our knowledge, the construction of tridiagonal matrices with a specific spectrum and a specific eigenvector has not been studied, although the independent questions of inverse eigenvalue [14] and inverse eigenmode [13] problems have been examined. As such, we are interested in categorising when solutions to Problem 2 exist, and how to find them.
We start by making an observation about the necessary pattern of signs of the coupling strengths such that a specified eigenvector can correspond to a particular eigenvalue in the ordered sequence. Recall [13] that if all the J_n are negative, the eigenvector with the n-th largest eigenvalue has N − n sign changes in its amplitudes. Thus, to ensure that a particular eigenvector |η⟩ has the n-th largest eigenvalue, find a diagonal matrix D, with D² = 1, such that D|η⟩ has N − n sign changes. If a matrix H_1 has coupling strengths J_n which are all negative, and has D|η⟩ as an eigenvector (which, having N − n sign changes, corresponds to the n-th largest eigenvalue), then the matrix DH_1D has the same magnetic fields, coupling strengths that are the same up to the sign changes sign(J_m) = −D_m D_{m+1}, and |η⟩ as an eigenvector. Moreover, since D is unitary, the transformation is isospectral, and |η⟩ must correspond to the n-th largest eigenvalue.
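The sign-change property recalled from [13] is easily checked numerically; the sketch below builds a tridiagonal matrix with all-negative couplings (the sizes and coupling ranges are illustrative choices) and counts the sign changes of each eigenvector.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 9
J = -rng.uniform(0.5, 1.5, N - 1)       # all coupling strengths negative
B = rng.uniform(-1.0, 1.0, N)           # arbitrary magnetic fields
H = np.diag(B) + np.diag(J, 1) + np.diag(J, -1)

vals, vecs = np.linalg.eigh(H)          # eigenvalues in ascending order

def sign_changes(v):
    """Number of sign changes in the amplitudes of v."""
    return int(np.sum(v[:-1] * v[1:] < 0))

# The eigenvector with the n-th largest eigenvalue has N - n sign changes.
changes = [sign_changes(vecs[:, N - n]) for n in range(1, N + 1)]
print(changes)  # [8, 7, 6, 5, 4, 3, 2, 1, 0]
```

Conjugating by a diagonal D with D² = 1 permutes these sign patterns without changing the spectrum, which is exactly the freedom exploited in the argument above.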

Lemma 2. Specifying a spectrum and a target eigenvector is insufficient to yield a unique solution.
Proof. By uniqueness, we mean the choice of the values {J_n²}; changing the signs of the J_n is a triviality which we want to discount. The Hamiltonian with J_2 = −√45/J_1 has spectrum {0, ±3, ±5}, and the 0-eigenvector is the W state, for two distinct values of J_1².
Lemma 3. There are choices of spectrum and target eigenvector for which Problem 2 has no solution.

Proof. Requiring H_1|η⟩ = 0 immediately restricts the structure of the matrix. We take the two cases B_2 = 0 and J_1² = J_4² separately. If B_2 = 0, then we must solve simultaneously for J_1² and J_4²; there are no non-negative solutions. Similarly, for J_1² = J_4², one has to solve a simultaneous pair of equations which, again, has no solutions.

ARBITRARILY ACCURATE SOLUTIONS
It is not possible to realise any arbitrary assignment of eigenvalues and a single eigenvector. However, Problem 1 does not require a specific spectrum, only that certain general properties are obeyed. So, is it still possible to select a target spectrum such that the conditions of Lemma 1 are satisfied and a solution to Problem 1 exists?

Lemma 4. For any target state |α⟩ satisfying the conditions of Lemma 1 and Problem 2, and any sufficiently small prescribed accuracy ε, there exists a time t_0 ∼ 1/ε such that |k⟩ → |α⟩ to accuracy 1 − O(ε²).
Proof. We start by solving the inverse eigenmode problem [13] without any regard for the eigenvalues, fixing |η⟩ to have 0 eigenvalue. Since H_η|η⟩ = 0 imposes η_{n−1}J_{n−1} + B_n η_n + η_{n+1}J_n = 0, then provided η_n ≠ 0, any choice of the J_n fixes the B_n. So, we just make a choice, say J_n = 1. We call this Hamiltonian H_η. To correct the spectrum, we follow [15]:

• Pick an accuracy parameter ε (smaller than half the smallest gap between eigenvalues of H_η).
• Truncate the eigenvalues of H η to the nearest multiple of ε.
• Shift all the eigenvalues except the 0 value by ±ε/2. The choice of ± does not matter, and can be made in order to minimise the change in the eigenvalues, which need never be larger than ε/4. This ensures that the ordering of the eigenvalues is maintained.
• Take the values {⟨1|λ_n⟩}, where |λ_n⟩ are the eigenvectors of H_η, and use these along with the target spectrum to calculate, via the Lanczos method [12], a new Hamiltonian H̃.
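The steps above can be sketched numerically. In the following, the W state over N = 7 sites, the value ε = 0.02, and the floor-plus-half rounding are illustrative assumptions; the Lanczos reconstruction tridiagonalises diag(Λ) starting from the vector of weights ⟨1|λ_n⟩.

```python
import numpy as np

def lanczos_from_spectrum(lams, w):
    """Lanczos reconstruction: a symmetric tridiagonal matrix with eigenvalues
    `lams` whose eigenvectors have first components `w` (up to signs),
    obtained by tridiagonalising diag(lams) from starting vector w."""
    n = len(lams)
    D = np.diag(lams)
    V = np.zeros((n, n))
    V[:, 0] = w / np.linalg.norm(w)
    alpha, beta = np.zeros(n), np.zeros(n - 1)
    for j in range(n):
        r = D @ V[:, j]
        alpha[j] = V[:, j] @ r
        r -= V[:, :j + 1] @ (V[:, :j + 1].T @ r)   # full reorthogonalisation
        if j < n - 1:
            beta[j] = np.linalg.norm(r)
            V[:, j + 1] = r / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

# Target 0-eigenvector: the W state over N sites (real, nonzero amplitudes).
N = 7
eta = np.ones(N) / np.sqrt(N)

# Inverse eigenmode step: choose J_n = 1; H_eta |eta> = 0 then fixes
# B_n = -(eta_{n-1} + eta_{n+1}) / eta_n  (with eta_0 = eta_{N+1} = 0).
eta_pad = np.concatenate(([0.0], eta, [0.0]))
B = -(eta_pad[:-2] + eta_pad[2:]) / eta
H_eta = np.diag(B) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)

vals, vecs = np.linalg.eigh(H_eta)
zero = np.argmin(np.abs(vals))

# Truncate to a multiple of eps and shift by eps/2 (keeping the 0 value), so
# that exp(-i t0 lam) = -1 at t0 = 2 pi / eps for every nonzero eigenvalue.
eps = 0.02
target = eps * (np.floor(vals / eps) + 0.5)
target[zero] = 0.0

H_new = lanczos_from_spectrum(target, vecs[0, :])
new_vals, new_vecs = np.linalg.eigh(H_new)
F = abs(eta @ new_vecs[:, np.argmin(np.abs(new_vals))])
print(F)   # overlap of the actual 0-eigenvector with the target; close to 1
```

The reconstructed H̃ has the adjusted spectrum exactly, while the 0-eigenvector is only perturbed at O(ε), as the error analysis below quantifies.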
The output, H̃, is guaranteed to have a spectrum that achieves the desired phases (up to a global phase of −1) in a time t_0 = 2π/ε. A solution to this always exists [12]. While the 0-eigenvector is no longer |η⟩ but |η_actual⟩, since H̃ is only a perturbation of H_η, it should not be significantly different. How different is it? We estimate F = ⟨η|η_actual⟩ as an accuracy parameter (the overlap between the state produced and the desired state is 1 − 2(1 − α_1)(1 − F) if the excitation is initially placed on site 1). By construction, F is real since both |η⟩ and |η_actual⟩ are real. If U and Ũ diagonalise H_η and H̃ respectively, then the calculation of F is equivalent to ⟨m|U†Ũ|m⟩, where m is the index of the relevant eigenvector: U|m⟩ = |η⟩. However, U and Ũ must be very similar, so we choose an expansion Ũ = U e^{iεK}, where K is Hermitian [16], which maintains unitarity and the limit Ũ → U as ε → 0. Expanding for small ε,

F = ⟨m|e^{iεK}|m⟩ = 1 + iε⟨m|K|m⟩ − (ε²/2)⟨m|K²|m⟩ + O(ε³).

Since F is real, and the diagonal of K is real, the diagonal of K must be 0, such that we are left with the second order term, as required.
By continuity of the spectral properties of the Hamiltonian, we infer that a perfect realisation must exist. Thus, as a special case, we can create any state with real, non-zero amplitudes on every site of the chain, including states such as the W state. In previous studies of perfect state transfer, it has been deemed acceptable for the arriving state to be exact only up to the application of a phase gate since, for the created state to be of any use, one must have some local control at each of the output sites, and that control would be capable of compensating for the phase. Were one to make the same relaxation here, then any complex state (with non-zero amplitudes) can be realised simply by redefining |α⟩ → Σ_n |α_n| |n⟩ first, and then applying local phases R_Z(Arg(α_n)) on each of the sites at the time t_0.

Error Scaling
In Lemma 4, we have proven that the error term scales as ε²⟨m|K²|m⟩, which immediately conveys the ε dependence, but disguises the N dependence. Following [16], one can express ⟨m|K²|m⟩ in terms of the errors e_m, where e_m is the difference between the m-th largest intended and actual eigenvalues as a fraction of ε. We consider a quantity G_n², which is N times larger than the average error, and no smaller than the worst-case error. To demonstrate that the scaling is not pathological, we study the special case in which H_η has J_n = 1 and B_n = 0 for all n. This is particularly pertinent to the creation of a W state. To find the eigenvalues, observe that for N > 5, the relevant states span a 3-dimensional subspace. On the remaining subspace, the matrix squares to 1/(4(N + 1)) · 1. Thus, the smallest absolute eigenvalue is 1/(2√(N + 1)). Hence, G_n² ∼ N, and the error dependence is O(ε²N) in the worst case, but one anticipates that in typical cases, the dependence on N is much weaker.

ANALYTIC SOLUTIONS
The disadvantage of the previous numerical technique is that the gap between eigenvalues scales as 1/N², which means that t_0 ∼ N². This is much slower than we would like (after all, the longer we wait, the more noise is likely to build up), so it would be advantageous to find the fastest possible solutions. To that end, we provide some analytic solutions for spin chains with the best possible spectrum, which will yield t_0 ∼ N. These solutions do not permit any control over the target eigenvector (except that different solutions have different eigenvectors), but by finding a solution that is as close as possible to the one that we want, we will be able to select from a number of perturbative techniques to drive the solution towards the one that we want.
In fact, [17] restricted the values of α more strongly, but this was because other specific properties of the spectrum were required. [17] also gives the eigenvectors of these matrices in terms of the Hahn polynomials.

Lemma 5.
If we construct the (2N + 1) × (2N + 1) symmetric tridiagonal matrix with 0 on the main diagonal, and off-diagonal couplings that use the values from the N × N Hahn matrices of parameter α, then this matrix has spectrum 0 and ±(k + (2α − 1)/2) for k = 1, …, N.
In particular, we will be interested in integer values of α to create the spectrum that we desire (t_0 = 2π). This definition permits us to create a symmetric matrix by looking at the central coupling term J_N, since J_N = J_{N+1}.

Proof. Let H_1 be the matrix constructed in this way. Clearly, it anti-commutes with the operator Σ_{n=1}^{2N+1} (−1)^n |n⟩⟨n|, meaning that the eigenvalues arise in ±λ pairs, centred on a single 0 value (since the eigenvalues of such a matrix must be non-degenerate). H_1 must have eigenvalues with modulus k + (2α + 1)/2, and given that they arise in ±λ pairs, it must have both ±(k + (2α + 1)/2) for k = 0, …, N − 1. For example, with N = 3 and α = 1, we create a 7 × 7 matrix whose spectrum can be verified to be {0, ±3/2, ±5/2, ±7/2}, and whose 0-eigenvector, up to normalisation, is approximately |1⟩ + |7⟩ + 1.072(|3⟩ + |5⟩).
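The ±λ pairing used in this proof holds for any tridiagonal matrix with zero main diagonal, irrespective of the coupling values; a quick numerical check (with arbitrary illustrative couplings, not the Hahn values):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 7                                    # size 2N + 1, here with N = 3
J = rng.uniform(0.5, 2.0, M - 1)         # arbitrary nonzero couplings
H = np.diag(J, 1) + np.diag(J, -1)       # zero main diagonal

# P = sum_n (-1)^n |n><n| anti-commutes with H, so the spectrum is symmetric.
P = np.diag([(-1) ** n for n in range(1, M + 1)])
anticomm = P @ H + H @ P

vals = np.sort(np.linalg.eigvalsh(H))
print(np.max(np.abs(anticomm)))          # 0.0: P and H anti-commute exactly
print(vals + vals[::-1])                 # ~0 entries: eigenvalues in +/- pairs
```

For odd size, the pairing forces the middle eigenvalue to be exactly 0, which is the 0-eigenvector exploited throughout this section.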
It is interesting to observe that for α = 1, the 0-eigenvector is very close to Σ_{n=1}^{N+1} |n⟩/√(N + 1) (the vector that we used for the impossibility proof in Lemma 3): numerically, we have created matrices of (odd) size up to 10003, and the overlap, F, with that target eigenvector is always at least 0.999 (up to some signs, which can be corrected by changing the signs of the couplings). Equally, this means that the overlap with the W state is approximately 1/√2. Consequently, it can serve as a crude starting point for numerical schemes: by judiciously changing the signs of the coupling strengths, we can guarantee an overlap with any target state of approximately (Σ_n |α_{2n−1}|)√2/√(N + 1), which is never too small.

SPEED LIMITS
For a given target state |ψ T in Problem 1, how small can the synthesis time, t 0 , be made? The choice of spectrum in the above analytic construction was motivated by the insight from perfect state transfer [18] that by compressing the spectrum as much as possible, one achieves the minimum state transfer time for a given maximum coupling strength. Here we prove that those insights carry forward to the different spectral conditions that we impose for the state generation task. The following proof technique represents an improvement over [18] for the case of odd length chains.

Lemma 6. A state generation task satisfying the construction presented in Lemma 1 for a chain of length 2N + 1 requiring time t 0 has a maximum coupling strength
if the Hamiltonian is symmetric (i.e. B n = B 2N +2−n and J 2 n = J 2 2N +1−n ).
Note that our previous construction satisfies this for α = 0 and odd N .
Proof. We remove the freedom of 1-shifts on the Hamiltonian by fixing B_{N+1} = 0. Having done this, we observe that the imposed symmetry of the Hamiltonian splits the matrix into anti-symmetric and symmetric subspaces with mutually interlacing eigenvalues {µ_k}_{k=1}^{N} and {ν_k}_{k=1}^{N+1} respectively (ν_k < µ_k < ν_{k+1}). All eigenvalues must have an integer spacing, except for a spacing of 1/2 either side of one special eigenvalue. Let us assume this special eigenvalue is µ_k̃. If we use the bounds ν_k ≥ ν_1 + 2(k − 1) − δ_{k>k̃} and µ_k ≤ ν_{k+1} − 1 + (1/2)δ_{k=k̃}, then one readily derives a bound on 4J²_max, which is smallest possible (N² − 1/2) for the choice ν_1 = 1/2 − N. One can follow a similar calculation under the assumption that the special eigenvalue is ν_k̃. In that case, one would derive 4J²_max ≥ N².
Of course, even for a symmetric target eigenvector, it is not necessary that the Hamiltonian be symmetric, and the method of Lemma 1 is far from unique, so this proof has limited applicability. Variants of this proof can address different assumptions. For example, we can exchange the symmetry assumption for the assumption that all the magnetic fields are equal (i.e. 0, up to identity shifts in the Hamiltonian). In this case, Tr(H²) = Σ_n λ_n² = 2Σ_n J_n², and we relate J_n = −η_{n−1}J_{n−1}/η_{n+1}. Given that all the λ_n are separated by at least 2π/t_0, a similar inequality can be derived, which proves that any solution with B_n = 0 and a spectrum 0, ±1, ±3, ±5, … is optimal for a solution of this type. When we try to relax the B_n = 0 assumption, we run out of sufficient information to make the bound as tight as possible. Nevertheless, by fixing |J_n| ≤ J_max for all n, one gets a bound thanks to the relation η_{n−1}J_{n−1} + η_{n+1}J_n = −η_n B_n, assuming η_n ≠ 0. More generally, [19] conveys that generating a non-trivial correlation function between two regions separated by a distance L requires a time at least ∼ L for a fixed maximum coupling strength. For instance, if we consider the two operators O_A = Z_1 and O_B = Z_N, and evaluate the connected correlation σ = ⟨O_A O_B⟩ − ⟨O_A⟩⟨O_B⟩, then at the start of the evolution, wherever the excitation is initially localised, we have σ(|n⟩) = 0, while the final state has σ(|α⟩) = −4α_1²α_N². Provided α_1α_N is not exponentially small, [19] conveys that J_max t_0 = Ω(N), so the scaling relation is certainly optimal, even without the additional assumptions in Lemma 6.

PERTURBATIVE MANIPULATIONS
The matrices that we have introduced in Lemma 5 might have the ideal spectrum, but each has a fixed 0-eigenvector. If we want a different vector, we must apply an isospectral transformation. The following method has proven successful for systems of a few tens of qubits. We start with a Hamiltonian H_1 (couplings J_n and fields B_n) and aim to make a new Hamiltonian which has the same spectrum, and whose 0-eigenvector is a better approximation to Σ_n η_n |n⟩. The first step is to change the signs of the couplings (which makes no difference to the spectrum) so as to minimise the norm of the required change, making it as close to a perturbation as possible. We then follow an iterative procedure whereby we take H_1 + δV, with δ = min(1, ε/‖V‖) for some ε ≪ 1, calculate the eigenvectors |λ_n⟩, and then find a new Hamiltonian (by following the Lanczos algorithm) using the target spectrum and the elements {⟨1|λ_n⟩}. The overall step is isospectral by construction, and should provide a small (O(ε)) improvement in the accuracy of the target eigenvector. Thus, repetition is anticipated to drive us towards a good solution, should one exist. For example, Fig. 1 depicts the evolution of a 21 qubit system which performs the evolution |1⟩ → |ψ⟩ where (1/√21) Σ_{n=1}^{21} ⟨n|ψ⟩ ≈ 1 − 2 × 10⁻¹⁵. With regards to the optimal speed, this example gives J_max t_0 = 4.66, while Eq. (3) specifies that J_max t_0 ≥ 4.45; there is little margin for finding a faster solution.
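A single step of this iteration can be sketched as follows. The perturbation direction V is not fully specified in the text above, so the rank-2 choice V = |η_t⟩⟨η_t| − |η_c⟩⟨η_c| (rotating the current 0-eigenvector |η_c⟩ towards the target |η_t⟩) is a hypothetical illustration; the test checks only the property guaranteed by construction, that the step is isospectral.

```python
import numpy as np

def lanczos_from_spectrum(lams, w):
    """Symmetric tridiagonal matrix with eigenvalues `lams` and eigenvector
    first components proportional to `w`: tridiagonalise diag(lams) from w."""
    n = len(lams)
    D = np.diag(lams)
    V = np.zeros((n, n))
    V[:, 0] = w / np.linalg.norm(w)
    alpha, beta = np.zeros(n), np.zeros(n - 1)
    for j in range(n):
        r = D @ V[:, j]
        alpha[j] = V[:, j] @ r
        r -= V[:, :j + 1] @ (V[:, :j + 1].T @ r)   # full reorthogonalisation
        if j < n - 1:
            beta[j] = np.linalg.norm(r)
            V[:, j + 1] = r / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

# Starting Hamiltonian: zero fields, uniform couplings (its spectrum is the target).
M = 7
H = np.diag(np.ones(M - 1), 1) + np.diag(np.ones(M - 1), -1)
spectrum, vecs = np.linalg.eigh(H)
eta_c = vecs[:, np.argmin(np.abs(spectrum))]       # current 0-eigenvector
eta_t = np.ones(M) / np.sqrt(M)                    # target vector (illustrative)

# One iteration: perturb, rediagonalise, rebuild with the ORIGINAL spectrum.
Vdir = np.outer(eta_t, eta_t) - np.outer(eta_c, eta_c)   # hypothetical choice of V
eps = 0.1
delta = min(1.0, eps / np.linalg.norm(Vdir, 2))
_, pert_vecs = np.linalg.eigh(H + delta * Vdir)
H_new = lanczos_from_spectrum(spectrum, pert_vecs[0, :])

print(np.allclose(np.sort(np.linalg.eigvalsh(H_new)), spectrum))   # True
```

Because the rebuild always uses the target spectrum, any drift in the eigenvector accuracy across iterations never costs spectral exactness.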

CONCLUSIONS
We have shown that a spin chain can be engineered to create almost any single excitation state from its time evolution (up to local phases), vastly extending its utility. Our results can readily be applied to local free-fermion models (such as the transverse Ising model), or to any one-dimensional nearest-neighbour Hamiltonian that is excitation preserving (such as the Heisenberg model).
Any target state with no consecutive zero amplitudes can be realised. To get consecutive zeros, one could examine the technique that [13] specifies for fixing two eigenvectors of a matrix. While this gives no control over the spectrum, the procedure of Lemma 4 can be applied to get a high accuracy solution. However, this can give no more than two consecutive zeros [22]. The challenge is to design systems that produce states with many 0 amplitudes, which is likely to require inordinate control over most of the eigenvectors. This is addressed in [20].

[22] Here, s_n = Σ_{m=1}^{n} η¹_m η²_m and t_n = η¹_n η²_{n+1} − η¹_{n+1} η²_n. However, the condition of two consecutive zeros is η¹_n + η²_n = 0 and η¹_{n+1} + η²_{n+1} = 0, which in turn means t_n = 0, requiring η¹_n = η²_n = 0 such that s_n = 0. This allows two zeros together, but to add a third consecutive zero would require two consecutive zeros in both eigenvectors.