Dual Separated Variables and Scalar Products

Separation of variables (SoV) is an extremely efficient and elegant technique for analysing physical systems but its application to integrable spin chains was limited until recently to the simplest su(2) cases. In this paper we continue developing the SoV program for higher-rank spin chains and demonstrate how to derive the measure for the su(3) case. Our results are a natural consequence of factorisability of the wave function and functional orthogonality relations following from the interplay between Baxter equations for Q-functions and their dual.


I. INTRODUCTION
The key physical information contained in a quantum system is encoded in matrix elements of operators between Hamiltonian eigenstates, but computing them is not a simple task. To begin with, one should carefully choose a coordinate system. Famously, in the case of the hydrogen atom the problem greatly simplifies in spherical coordinates: the wave function splits into three independent one-dimensional factors, which allows one to perform many computations analytically.
A possible price to pay for such a simple factorised form of the wave function could come from a complicated integration measure in the scalar product. In the case of the hydrogen atom it is simply $r^2\sin\theta$, but the problem can become rather challenging in general.
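For orientation, the textbook hydrogen-atom formulas behind these statements read (standard material, added here for illustration):

```latex
\psi(r,\theta,\phi) = R(r)\,\Theta(\theta)\,\Phi(\phi), \qquad
\langle \psi_1 | \psi_2 \rangle
  = \int_0^{\infty}\!dr \int_0^{\pi}\!d\theta \int_0^{2\pi}\!d\phi\;
    r^2 \sin\theta\; \overline{\psi_1}\,\psi_2 ,
```

i.e. a fully factorised wave function paired with the non-trivial measure $r^2\sin\theta$.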
Like the hydrogen atom, many integrable models are believed to admit a separation of variables (SoV) basis, where the wave function becomes a product of simple factors. A particularly important model is the su(2) Heisenberg spin chain, a model of interacting particles on a one-dimensional chain of sites. In its simplest realisation, its Hamiltonian is given by $H = -J\sum_\alpha \vec\sigma_\alpha\cdot\vec\sigma_{\alpha+1}$, where $\vec\sigma_\alpha$ are the Pauli matrices acting on the site $\alpha$. This model is known to be integrable, and the separation of variables was worked out by Sklyanin in [1,2]. The integrable structures depend greatly on the underlying symmetry of the system. In recent years, there has been great interest in studying integrable systems with more general su(N) symmetries and supersymmetries, motivated by the AdS/CFT correspondence and integrability in string theory. In particular, the Fishnet model [3,4] is essentially an su(4) rational spin chain, and N = 4 SYM is tightly related to the psu(2,2|4) integrable spin chain.

* nikolay.gromov@kcl.ac.uk; † fedor.levkovich@gmail.com; ‡ pryan@maths.tcd.ie; § dmytro.volin@physics.uu.se
The general su(N) Heisenberg spin chain of length L is defined by means of the R-operator $R_{aq}(u) = u + i P_{aq}$, where $P_{aq}$ is the permutation operator acting on the auxiliary space $a$ and a physical site $q$. Then one builds the monodromy matrix
$$\hat T^a_b(u) = z_b\,\big[R_{aL}(u-\theta_L)\cdots R_{a1}(u-\theta_1)\big]^a{}_b\,,$$
where we assume summation over all repeated indices except $b$, and the $\theta_\alpha$ are inhomogeneities. We also have $z_1 z_2 \dots z_N = 1$. The monodromy matrix is a collection of $N^2$ operators $\hat T^a_b(u)$, each acting on the physical Hilbert space $(\mathbb{C}^N)^{\otimes L}$. The trace of the monodromy matrix, $\hat t(u) = \mathrm{tr}\,\hat T(u)$, known as the transfer matrix, forms a family of mutually commuting operators, $[\hat t(u),\hat t(v)] = 0$. To get a maximal commuting set one should also take the trace in all antisymmetric representations of su(N), so in general there are $N-1$ non-trivial $\hat t_a(u)$. We restrict ourselves to su(3) in this paper, which is general enough to illustrate our construction while allowing for relative clarity. Explicitly, $\hat t_2(u) = \hat U^a{}_a(u)$, where $\hat U^a_b(u) = \frac{1}{2}\epsilon^{a a_1 a_2}\epsilon_{b b_1 b_2}\hat T^{a_1}_{b_1}(u)\,\hat T^{a_2}_{b_2}(u+i)$. We see that $\hat t_2(u)$ is a polynomial in $u$ of degree $2L$. However, $\hat U$ contains a trivial factor $Q_\theta(u-\frac{3i}{2})$, where $Q_\theta(u) \equiv \prod_{\alpha=1}^L (u-\theta_\alpha)$, and so $\hat t_2$ generates only $L$ new commuting operators. In the following we use the correspondingly rescaled commuting operators $\hat\tau_a(u)$.
The same quantities without hats will denote the eigenvalues.
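The commutativity $[\hat t(u),\hat t(v)]=0$ can be checked directly on a small chain. Below is a minimal numerical sketch (our illustration, not code from the paper): the sizes, inhomogeneities and twists are arbitrary test values, and we take the rational R-matrix $R(u)=u\cdot 1+iP$ as above.

```python
import numpy as np

# Brute-force check that twisted transfer matrices commute, [t(u), t(v)] = 0.
# Sizes, inhomogeneities and twists below are arbitrary test values.
N, L = 3, 2                              # su(3) chain of length 2
theta = [0.3, -0.7]                      # inhomogeneities
z = np.array([1.0, 2.0, 0.5])            # twist eigenvalues, z1*z2*z3 = 1
g = np.diag(z).astype(complex)

P = np.zeros((N * N, N * N))             # permutation operator on C^N (x) C^N
for i in range(N):
    for j in range(N):
        P[i * N + j, j * N + i] = 1.0

def R(u):
    """Rational R-matrix R_{aq}(u) = u*1 + i*P on auxiliary (x) one site."""
    return u * np.eye(N * N) + 1j * P

def embed(op2, site):
    """Embed an operator acting on (aux, site) into aux (x) (C^N)^{(x)L}."""
    nfac = L + 1
    k = site + 1                         # tensor-factor position of the site
    big = np.kron(op2, np.eye(N ** (L - 1), dtype=complex))
    big = big.reshape([N] * (2 * nfac))  # factor order: (aux, site k, rest)
    order = [0, k] + [j for j in range(1, nfac) if j != k]
    inv = list(np.argsort(order))        # permutation back to standard order
    big = big.transpose(inv + [i + nfac for i in inv])
    return big.reshape(N ** nfac, N ** nfac)

def transfer(u):
    """Twisted transfer matrix t(u) = tr_aux[(g (x) 1) R_{aL} ... R_{a1}]."""
    T = np.eye(N ** (L + 1), dtype=complex)
    for alpha in range(L):
        T = embed(R(u - theta[alpha]), alpha) @ T
    Tg = np.kron(g, np.eye(N ** L, dtype=complex)) @ T
    d = N ** L
    return sum(Tg[i * d:(i + 1) * d, i * d:(i + 1) * d] for i in range(N))

t1, t2 = transfer(0.17), transfer(-0.42)
print(np.linalg.norm(t1 @ t2 - t2 @ t1))  # numerically zero (rounding level)
```

Only memory limits this brute-force construction, so the same setup can probe longer chains or be diagonalised to compare with Bethe-ansatz data.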

II. BAXTER Q-FUNCTIONS
The integrability of the model promises a number of simplifications. In particular, its spectrum can be computed relatively easily. The modern way of approaching the spectral problem is via Q-functions [5-7] (also known as Baxter polynomials), which we introduce in this section. We will also argue that the Q-functions are very convenient building blocks for the wave functions; the set of commuting charges $\hat\tau_a$ is simply expressed in terms of them as well.
The basic Q-functions are the twisted polynomials $q_j(u)$, $j = 1, 2, 3$, i.e. polynomial functions up to an exponential prefactor, of the form $q_j(u) = z_j^{iu}(u^{M_j} + \dots)$. In the widely used nested Bethe ansatz approach, the roots of $q_j(u)$ are the auxiliary Bethe roots. An alternative to the nested Bethe equations, and in many ways a better method of finding the spectrum of the system, is to impose the quantization condition (3). The latter gives $L$ equations on the total of $L$ roots of the $q_j(u)$, selecting the physical solutions. One advantage w.r.t. the conventional Bethe ansatz is that it allows one to count solutions more easily. For example, when all $|\theta_i - \theta_j|$ are large, (3) reduces to $q_1 q_2 q_3 = Q_\theta$, which has $3^L$ solutions, i.e. equal to the dimension of the Hilbert space.
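As a simple illustration of this counting (our example, not from the text), take $L = 1$, where the large-separation condition reads

```latex
q_1\, q_2\, q_3 = u - \theta_1 , \qquad
q_j(u) = z_j^{iu}\big(u^{M_j} + \dots\big), \qquad
M_1 + M_2 + M_3 = 1 ,
```

and the exponential prefactors cancel thanks to $z_1 z_2 z_3 = 1$. The single root $\theta_1$ must be assigned to exactly one of the three $q_j$, giving $3 = 3^1$ solutions, in agreement with $\dim \mathbb{C}^3 = 3$.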
As the dependence on the parameters $\theta_\alpha$ should be continuous, except at some special points, this leads to the completeness of the equation (3). For a more detailed and mathematically rigorous discussion see [8,9]. As we mentioned above, the zeros of the $q_j$ are what are called the auxiliary Bethe roots in the Bethe equations. The momentum-carrying Bethe roots are zeros of the dual functions $q^j$, introduced as
$$q^j(u) \propto \epsilon^{jkl}\, q_k(u+\tfrac{i}{2})\, q_l(u-\tfrac{i}{2})\,. \qquad (4)$$
The normalization coefficient in (4) is such that $q^j(u) = z_j^{-iu}(u^{L-M_j} + \dots)$. Transfer matrices are reconstructed from the Q-functions using the simple contractions (5). The latter formula suggests that the $\tau_a$ are Hermitian conjugates of one another, which is indeed the case if the twists $z_j$ are pure phases and the inhomogeneities $\theta_\alpha$ are real. Finally, we shall later need the special values (6) and (7) of $\tau_a(u)$ following from (5).

III. SEPARATION OF VARIABLES

a. SoV basis. Motivated by the SoV construction in the classical limit [10], Sklyanin realised in [11] that the operator $\hat B(u)$, defined in (8), should play an important role in quantum separation of variables for the model. However, the precise understanding of how the SoV procedure should work was only obtained recently in [12], where several important observations were made. First, Sklyanin's construction remains intact under the replacement $\hat T \to \hat T^g \equiv g^{-1}\hat T g$, where $g$ is some constant SL(3) matrix. This replacement makes $\hat B_g(u)$ diagonalisable for generic enough $g$, and so its spectrum and eigenvalues become interesting quantities to consider. Secondly, the spectrum of $\hat B_g(u)$ is non-degenerate and has a remarkably regular structure. Namely, writing $\hat B_g = \Lambda\, B_g$, where $\Lambda = \Lambda_0\, Q_\theta(u - 3i/2)$ is a trivial scalar factor that does not depend on the state, the eigenvalues of $B_g$ are given by (9), where the integers $m_{\alpha,a}$ satisfy $0 \le m_{\alpha,1} \le m_{\alpha,2} \le 1$. The operators $\hat B(u)$ commute with each other for different values of $u$ [11]. The same holds true for $\hat B_g(u)$, and thus the eigenstates of $\hat B_g(u)$ do not depend on $u$.
We denote its left eigenstates by $\langle x|$, labelling them by the values of the $m_{\alpha,a}$. One can then unambiguously define $2L$ commuting operators $\hat X_{\alpha,a}$ such that $B_g(u) = \prod_{\alpha,a}\big(u - \hat X_{\alpha,a}\big)$, with eigenvalues $x_{\alpha,a}$ on $\langle x|$. Finally, it was observed in [12] that the eigenstates of the transfer matrices can be constructed using the operator $\hat B_g(u)$ as
$$|\Psi\rangle \propto \prod_k \hat B_g(u_k)\,|\Omega\rangle\,, \qquad (10)$$
where the $u_k$ are the roots of the twisted polynomial $q_1(u)$ and $|\Omega\rangle = \delta^{p_1}_1 \delta^{p_2}_1 \dots \delta^{p_L}_1$ is a "ferromagnetic vacuum" of the model.
By combining (10) with the definition of $\hat X_{\alpha,a}$, we get the factorized representation (11) of the wave function [12], and so the $\langle x|$ form an SoV basis. In (11) we impose the normalization of $\langle x|$ such that $\langle x|\Omega\rangle = \prod_{\alpha,a} z_1^{-i x_{\alpha,a}}$. While some of the observations of [12] were conjectured based on numerical evidence, or for short spin chains or a small number of magnons, they received a complete analytical proof in [14,15]. In particular, it became clear that the spectrum (9) of $\hat B_g$ originates from the structure of the Gelfand-Tsetlin algebra [15]. It would be interesting to examine whether such a structure is also present in the separated variables considered in [16].
An important observation can be made about the action of the transfer matrices $\hat\tau_a$ at special values of $u$: due to the relation (6), it is clear that acting on the state $\langle x|$ they replace one factor of $q_1(\theta_\alpha - \frac{i}{2})$ by $q_1(\theta_\alpha + \frac{i}{2})$ in the r.h.s. of (11), and thus they play the role of creation operators for the basis $\langle x|$ [17]. More precisely, one finds (12), where $\langle 0|$ is the eigenstate of $\hat B_g$ with all $m_{\alpha,a} = 0$. This observation demonstrates the equivalence with the more recent approach of [18], where an analog of (12) was taken as the starting point, and it generalises beyond the fundamental representation [15]. In the approach of [18] one can avoid discussing completeness of the quantization conditions, such as the Bethe equations. While for the original Bethe equations completeness is a notorious obstacle, using the elegant condition (3) instead removes this difficulty.

b. Dual SoV basis. Now we would like to build an SoV representation for the bra-eigenstates $\langle\Psi_n|$ of the transfer matrices $\hat t_a$. The first natural guess would be to apply Hermitian conjugation; however, it proves more fruitful to dualise the monodromy matrices instead. This is done by the so-called antipode map, which sends the monodromy matrix, considered as a $3\times 3$ matrix with non-commutative entries $\hat T^a_b$, to its inverse. To explicitly compute the inverse we notice that $\hat U^T$ looks like the adjugate matrix of $\hat T$ and, indeed, it satisfies a quantum analog of Cramer's formula. Employing it, we compute how $\hat B(u)$ transforms under the antipode map and obtain, after a convenient adjustment of normalisation and a shift of $u$, a new operator $\hat C(u)$, defined in (13), which is one of the main results of this paper². Remarkably, the only difference between $\hat C(u)$ and $\hat B(u)$ is in the shifts of the spectral parameter, meaning that there is no difference in the classical limit.
We found that essentially the same facts hold for $\hat C(u)$ as for $\hat B(u)$. One again performs the replacement trick $\hat C(u) \to \hat C_g(u)$ and introduces $C_g$ by removing the trivial non-dynamical factor, $C_g(u) \propto \hat C_g(u)/Q_\theta(u-i)$. Due to the commutativity $[\hat C_g(u), \hat C_g(v)] = 0$, $\hat C_g(u)$ has $u$-independent eigenvectors, dubbed $|y\rangle$. Furthermore, this right basis $|y\rangle$ does indeed factorise the left eigenfunctions of the transfer matrices.
The eigenstates $|y\rangle$ can also be built in the spirit of (12), but in a slightly modified form, similar to the construction of [15] for a spin chain in the anti-fundamental instead of the fundamental representation. Indeed, we found that the results of [15] apply, but for the right eigenstates, as in (14), where $|0\rangle$ is the eigenvector of $\hat C_g$ with all $n_{\alpha,a} = 0$. We then introduce another set of separated variables $\hat Y_{\alpha,a}$ by specifying their eigenvalues on the above states, namely through the action of $\hat C_g$ as in (15) and (16). With these variables at hand, we factorise the transfer-matrix eigenstates $\langle\Psi|$ exactly as it was done for $|\Psi\rangle$ in [15] for the anti-fundamental representation. By computing the overlap $\langle\Psi|y\rangle$ and using (15), we obtain (17).

² Curiously, a similar operator, also denoted C(u), appears at an intermediate step of a technical calculation in [11]. However, none of its crucial properties that we describe here were discussed there.
Next, normalizing the states $\langle\Psi|$ so that (18) holds, and using (6) and (7), we conclude (19).

c. SoV-charge operator. Since the operators $B_g(u)$ and $C_g(u)$ differ only by shifts in their definitions (8) and (13), they become related at large $u$. In particular, the first two terms of their large-$u$ expansions are exactly the same. While the leading term is proportional to the identity matrix, the subleading coefficient defines the SoV-charge operator $\hat S$, eq. (20). $\hat S$ commutes with both $B_g(u)$ and $C_g(u)$ by construction, and it counts the number of "excitations" in the SoV states, cf. (21).

d. Scalar product in the SoV basis. Our goal is to express the scalar product in SoV variables in a closed form. For any two bases $|y\rangle$ and $\langle x|$ one can write the resolution of identity (22), where the measure $M_{x,y}$ is the inverse transposed matrix of the overlaps $\langle x|y\rangle$. Without making any calculation, we can already make an important observation about the matrix $\langle x|y\rangle$: the existence of the SoV-charge operator $\hat S$ implies that only the matrix elements with equal excitation numbers, $n_{\alpha,a} = m_{\alpha,a}$, can be non-zero. In particular, the ground state $\langle 0|$ should also be an eigenstate of $C(u)$ and, as the spectrum of $C(u)$ is non-degenerate, this means that $\langle 0|y\rangle \propto \delta_{0,y}$ and similarly $\langle x|0\rangle \propto \delta_{x,0}$, which also implies that $M_{x,0} \propto \delta_{x,0}$ and $M_{0,y} \propto \delta_{0,y}$.
As the spectra of $\hat\tau_1$ and $\hat\tau_2$ are in general non-degenerate, their left and right eigenstates are orthogonal, $\langle\Psi_A|\Psi_B\rangle = N_A^2\,\delta_{AB}$. Using the SoV basis, we then arrive at (23), where $q_j^A$ and $q^j_B$ are the Q-functions corresponding to the eigenstates $\Psi_A$ and $\Psi_B$.
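Since the display equations are referenced here only by number, it may help to record the schematic shape of this relation (our paraphrase): expanding both eigenstates over the two SoV bases with the measure $M_{x,y}$ gives

```latex
N_A^2\,\delta_{AB} \;=\; \langle \Psi_A | \Psi_B \rangle
\;=\; \sum_{x,y} \langle \Psi_A | y \rangle\, M_{x,y}\, \langle x | \Psi_B \rangle ,
```

where $\langle \Psi_A | y \rangle$ and $\langle x | \Psi_B \rangle$ are the factorised SoV wave functions of (19) and (11), built from the Q-functions of the respective states.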

IV. FUNCTIONAL ORTHOGONALITY RELATION
Now we shall consider the orthogonality question and reproduce (23) following the method of [19,20]. The starting point is the two Baxter TQ-relations. With the help of the finite-difference operator (24), where $D \equiv e^{i\partial_u}$, both Baxter relations are written in the unified form (25), where the arrows indicate the direction in which the shift operator acts. The orthogonality conditions are now built using the simple fact (26), where the measure $\mu(u)$ is an $i$-periodic analytic function, $f$ and $g$ are analytic, and the contour is a large enough circle; this is easily demonstrated by shifting the contour of integration. In particular, we have (27), where $\beta \in \mathbb{Z}$ and the indices $A$ and $B$ indicate the eigenstates of the transfer matrix. Note that the finite-difference operator $O_B$ itself depends on these states through the eigenvalues $\tau_a$ of the transfer matrices. The integrand has $2L$ poles, at $\theta_\alpha \pm \frac{i}{2}$. These poles are cancelled by the trigonometric polynomial $\prod_{\alpha=1}^L \big(e^{2\pi u \beta} + e^{2\pi\theta_\alpha \beta}\big)$, meaning that there are only $L$ linearly independent exponents one can insert, and thus one can restrict to $\beta = 1, \dots, L$. From (25) we obtain a bilinear identity for the $q_j$, where $\Delta\tau_a = (-1)^a(\tau^A_a - \tau^B_a) = \sum_\alpha \Delta I_{a,\alpha}\, u^{\alpha-1}$. We take $i = 1$ and $j = 2, 3$, which gives (28). The equation $\det M = 0$ (for $A \neq B$) is the functional orthogonality relation. To relate it to our operatorial SoV construction, we compute the integral by residues. If one first performs the simple linear transformation $e^{2\pi u\beta} \to \prod_{\gamma\neq\beta}\big(e^{2\pi u} - e^{2\pi\theta_\gamma}\big)$, which changes $M \to \tilde M$ but does not affect the value of the determinant, the new $i$-periodic factor cancels all the poles except the ones at $u = \theta_\beta \pm \frac{i}{2}$, and the result of the integration is an expression for $\tilde M_{(a,\alpha),(b,\beta)}$ built from $q_1^A(\theta_\beta \pm i/2)$ and the Q-functions of the state $B$. Let us see that $\det\tilde M(A,B)$ has exactly the form of the r.h.s. of (23)! Indeed, we are guaranteed to get a sum of terms, each containing a product of $2L$ factors $q_1^A(\theta_\beta \pm i/2)$.
Now, if we fix some combination of the $2L$ $\pm$ signs, we are left with a determinant containing $q^{1+b}_B(\theta_\beta \pm i)$ and $q^{1+b}_B(\theta_\beta)$, with the dependence on $b$ contained only in the index of the Q-function, meaning that the final expression will be antisymmetrized in $b$ for each given $\beta$; but the only combinations of the $q_B$'s antisymmetric in $b$ are factors of the type $q_2$. The remaining coefficients are some combinations of the $\theta$'s. Now we show that this reproduces the scalar product, up to an overall rescaling of $\tilde M$.
To this end, consider the equation (23) for $A \neq B$ as a set of $3^L \times (3^L - 1)$ linear equations on the $3^L \times 3^L$ quantities $M_{x,y}$. Furthermore, we can fix $3^L$ variables, $M_{x,0} = c\,\delta_{x,0}$, making the numbers of unknowns and of equations equal. This means that there should be a unique solution for $M_{x,y}$, up to an overall rescaling parametrized by the constant $c$. The coefficients of the expansion of $\det\tilde M$ over the SoV wave functions (11) and (19) construct this solution for us. Finally, to fix the overall constant $c$ we can take (23) for $\Psi_A = \Psi_B = \Omega$: since the l.h.s. for this state is 1 and all Q-functions are trivial, one can find the constant $c$ as well.
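The counting argument above can be illustrated in a toy linear-algebra setting (our illustration: random matrices stand in for the SoV wave functions, and $n$ plays the role of $3^L$): the off-diagonal orthogonality conditions leave an $n$-dimensional space of candidate measures, and fixing the column $M_{x,0} \propto \delta_{x,0}$ cuts it down to a single overall constant.

```python
import numpy as np

# Toy version of the counting argument: generic "wave-function" matrices W1
# (left eigenstates sampled on y) and W2 (right eigenstates sampled on x).
rng = np.random.default_rng(0)
n = 4                                     # stands in for 3^L
W1 = rng.standard_normal((n, n))
W2 = rng.standard_normal((n, n))

# Orthogonality: sum_{y,x} W1[A,y] M[y,x] W2[x,B] = 0 for all A != B,
# written as a linear system acting on vec(M).
rows = [np.outer(W1[A], W2[:, B]).reshape(-1)
        for A in range(n) for B in range(n) if A != B]
S = np.array(rows)                        # n(n-1) x n^2 system matrix
null_dim = n * n - np.linalg.matrix_rank(S)
print(null_dim)                           # = n: one diagonal freedom per state

# Additionally impose M[y,0] = 0 for y != 0, i.e. M[:,0] ~ c*delta:
extra = np.zeros((n - 1, n * n))
for y in range(1, n):
    extra[y - 1, y * n] = 1.0
S2 = np.vstack([S, extra])
print(n * n - np.linalg.matrix_rank(S2))  # 1 expected: unique up to scale c
```

The solution space of the first system is exactly $\{M = W_1^{-1} D\, W_2^{-1}\}$ with $D$ diagonal, so its dimension is $n$; the extra constraints generically reduce it to the one-parameter family labelled by $c$.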
We conclude that, by using the orthogonality relations following from the Baxter TQ-equations, we can completely fix the measure and thus obtain the scalar product in separated variables, eq. (29). Note that (29) holds even when both states are "off-shell", which we define as states of the form (11) and (19) but with $q_a$ and $q^a$ not satisfying the quantisation condition (3).
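The contour-shift fact underlying the orthogonality construction can also be verified numerically. Below is a minimal sketch (our choices: the $i$-periodic measure $\mu(u)=e^{2\pi u}$ and simple rational test functions whose poles all lie inside the contour):

```python
import numpy as np

# Check  oint mu f (Dg) du = oint mu (D^{-1}f) g du,  D = e^{i d/du},
# for an i-periodic mu and a circle enclosing all poles of the integrand.
mu = lambda u: np.exp(2 * np.pi * u)           # mu(u + 1j) = mu(u)
f = lambda u: 1.0 / (u ** 2 + 0.25)            # poles at +- i/2
h = lambda u: 1.0 / ((u - 0.3) ** 2 + 0.25)    # poles at 0.3 +- i/2

npts, rad = 4000, 2.0                          # trapezoid rule on |u| = 2
t = np.linspace(0.0, 2 * np.pi, npts, endpoint=False)
u = rad * np.exp(1j * t)
du = 1j * u * (2 * np.pi / npts)

I1 = np.sum(mu(u) * f(u) * h(u + 1j) * du)     # mu * f * (D h)
I2 = np.sum(mu(u) * f(u - 1j) * h(u) * du)     # mu * (D^{-1} f) * h
print(abs(I1 - I2))                            # ~ 0: shift moved across
```

Shifting $u \to u - i$ in the first integral and using $\mu(u-i)=\mu(u)$ maps it to the second; since the circle is large enough to enclose all poles before and after the shift, the two trapezoid sums agree to rounding accuracy.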

V. CONCLUSIONS
In this paper we constructed SoV bases for both bra and ket states, with a relatively simple overlap, providing a measure for the scalar product. We also showed how to find the SoV measure based on the method of [19], which bypasses an explicit operatorial computation and allows us to extract the result from a simple determinant. In a similar way one can compute matrix elements of a large class of operators, such as B(u), C(u) and $\hat t_a(u)$, which are expected to generate the full algebra of observables. Further generalisations of our results will be reported in [21]. Finally, it would be interesting to understand the relation between (29) and Gaudin norms [22], as well as recent results involving Gaudin matrices [23].