Jordan blocks and the Bethe ansatz I: The eclectic spin chain as a limit

We present a procedure to extract the generalised eigenvectors of a non-diagonalisable matrix by considering a diagonalisable perturbation of it and computing the non-diagonalisable limit of its eigenvectors. As an example of this process, we compute a subset of the spectrum of the eclectic spin chain by means of the Nested Coordinate Bethe Ansatz. This allows us to show that the Bethe Ansatz of the finitely twisted spin chain contains enough information to reconstruct the generalised eigenvectors of the eclectic spin chain.


Introduction
Non-Hermitian systems have a plethora of applications in physics, ranging from optics to critical phenomena, even appearing in transport phenomena in biological systems; see [1,2] and references therein for particular instances. In spite of that, they are not as widely studied as Hermitian systems due to their complexity. Non-Hermitian systems are even less studied in the context of quantum mechanics, where the requirements that the Hamiltonian operator have real eigenvalues and be bounded from below are usually taken care of by demanding its hermiticity. However, hermiticity is a sufficient condition for a matrix to have real eigenvalues, but it is not a necessary one. In fact, it has been shown that the Dirac-von Neumann axiom regarding the hermiticity of the Hamiltonian can be relaxed to the existence of an anti-linear operator that commutes with the Hamiltonian (for example, PT symmetry) [3,4,5,6].
Logarithmic Conformal Field Theories deserve a special mention, as it has been proven that the existence of correlation functions with logarithmic singularities is fundamentally tied to the Virasoro operator L_0 developing Jordan cells (which cannot happen if L_0 is Hermitian) [7].
Non-Hermitian systems have also been studied in the context of integrability. Some examples are Baxter's Z_N Hamiltonian [8], Toda field theories with imaginary coupling [9], non-Hermitian extensions of Calogero-Moser-Sutherland models [10], the Ising model in the presence of an imaginary magnetic field [11], and quantum many-body models in contact with a Markovian environment [12]. More extreme cases, where non-hermiticity leads to non-diagonalisability, are the loop models studied in [13] and the U_q(sl(2)) open spin chain when q becomes a root of unity [14]. In addition, non-diagonalisable integrable structures seem to be relatively prevalent in the classification of R-matrices performed in [15].
A context where non-Hermitian Hamiltonians are especially common is in perturbation theory. Given two Hermitian operators A and B, we can study the diagonalisability of the operator H(λ) = A + λB and the consequences of analytically continuing the parameter λ to complex values. We assume that A and B do not commute, as the problem would be trivial otherwise. Notice that considering complex values of λ makes the operator H(λ) non-Hermitian, which opens the door to some additional structures.
It is well-known that there may exist points λ_0 such that H(λ_0) has degenerate eigenvalues. In contrast with Hermitian matrices, non-Hermitian matrices can develop an additional kind of degeneracy. In particular, non-Hermitian matrices that depend on continuous parameters may have one or more points where two or more eigenvectors coincide. We call these points exceptional points, and we say that the eigenvectors coalesce to a single eigenvector, following the terminology established in [16]. Neither the eigenvalues nor the eigenvectors can be analytic functions of λ around an exceptional point. Rather, they can be described in terms of a fractional power series (with exponents related to the number of eigenvectors that coalesce) called a Puiseux series; see appendix B of [17] for the explicit derivation in the case of Jordan cells of size 2. The properties of these series are discussed in [18].
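As an illustration outside the paper's own examples, the fractional-power behaviour near an exceptional point can be seen numerically in a minimal 2×2 toy model: for M(ǫ) = [[0, 1], [ǫ, 0]] the eigenvalues are ±√ǫ, a Puiseux series in ǫ rather than an analytic one, so the eigenvalue splitting closes like √ǫ rather than like ǫ.

```python
import numpy as np

# Toy 2x2 model (our own, not from the paper) with an exceptional point at eps = 0:
# M(eps) = [[0, 1], [eps, 0]] has eigenvalues +/- sqrt(eps), a Puiseux series in eps.
def eigenvalues(eps):
    return np.linalg.eigvals(np.array([[0.0, 1.0], [eps, 0.0]]))

for eps in (1e-2, 1e-4, 1e-6):
    lams = np.sort(eigenvalues(eps).real)
    # The splitting lams[1] - lams[0] shrinks like 2*sqrt(eps), not like eps.
    print(eps, lams[1] - lams[0], 2 * np.sqrt(eps))
```

At ǫ = 0 the matrix becomes a single Jordan block of size 2, matching the coalescence of eigenvectors described above.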
In recent years, the interest in non-Hermitian physics has grown in the integrability community. This interest is driven mainly by a particular deformation of AdS_5 × S^5 called fishnet theory, proposed in [19], which is described in terms of a non-Hermitian Lagrangian. In order to construct the fishnet theory action, we first deform N = 4 Super-Yang-Mills (SYM) theory by twisting the product of the fields in the Lagrangian. This theory is called the γ_i-deformation, and it has proven to be integrable [20,21] but not conformal [22]. Strongly twisted models are obtained when one considers the limit of infinite twist, γ_i → ∞, and vanishing coupling, g → 0, while keeping their product constant, γ_i g = ξ_i. This limit is very interesting, as the theory recovers conformal symmetry. The most general of these strongly twisted theories has the interaction Lagrangian (1.1) (up to a relabelling of the fields), where φ_j are complex bosonic fields and ψ_j are fermionic fields. The gauge fields and the fourth fermion decouple in this limit. The fishnet theory is the particular case of this strongly twisted Lagrangian in which all but one of the deformation parameters ξ_i are set to zero.
Direct inspection of the Lagrangian (1.1) is enough to realise that strongly twisted theories are non-Hermitian. This implies that the dilatation operator associated with the conformal symmetry of the theory is no longer Hermitian for general values of the deformation parameters. It has been observed that not only do the eigenvalues of the dilatation operator take complex values, but it also becomes non-diagonalisable. Similarly to N = 4 SYM theory, we can express the action of the dilatation operator on single-trace operators at one loop in terms of the nearest-neighbour Hamiltonian of an effective spin chain [23]. The effective spin chain that describes the action of the one-loop dilatation operator on single-trace operators made of scalars was studied in [23,24]. To that end, the authors solved the Bethe equations for finite values of the twist parameters q_i and computed the large twist limit. Although they were able to find the correct number of Bethe roots, they were not able to find all the Bethe vectors. The reason is that several Bethe vectors have the same limit at strong twist.
They denote those particular configurations as locked states, as in all of them the relative positions of the excitations are fixed. They find that these states are indeed eigenstates of the dilatation operator, but they do not exhaust the number of true eigenvectors of the dilatation operator. This disappearance of eigenvectors might seem surprising at first sight, but it is actually the same effect as the coalescence of eigenvectors we described before for perturbation theory. It is a consequence of the non-hermiticity of the one-loop dilatation operator for general values of the deformation parameters γ_i, and of the fact that the limit we are considering is an exceptional point of it.
While we were finishing this article, [25] was published, which extends the counting of the eigenstates done in [24] to any number of excitations by means of combinatorial arguments. While they perform the counting only for the hyper-eclectic spin chain (that is, the effective spin chain associated with ξ_1 = ξ_2 = 0), they argue that the counting has to be the same for the eclectic spin chain. Although this creates a substantial overlap between their results and the ones we present here, the method we used to obtain those results is entirely different. Thus, the two articles complement and reaffirm each other.
Although there is a wide range of theorems on the analytic properties and behaviour of eigenvalues when approaching an exceptional point, we were not able to find as many for the properties of the eigenvectors. The most relevant one we found was proposition 4.3 in [14], where the authors provide a recipe for computing generalised eigenvectors in a particular model with Jordan cells of size two. However, this proposition was crafted specifically for the model they were studying and for a situation that only involves Jordan cells of size two.
In this article, we plan to examine in detail what happens to the eigenvectors of a diagonalisable matrix when we approach an exceptional point, and to provide a method to extract the generalised eigenvectors at the exceptional point from their analytical properties near this point. Although our results will be general, here we will only apply them to the so-called eclectic spin chain from [23,24], which describes the one-loop dilatation operator of strongly twisted models. We will construct the explicit form of the eigenvectors of this dilatation operator for generic values of the twist using the Nested Coordinate Bethe Ansatz and check that we can find all the eigenvectors and generalised eigenvectors of a specific subsector using our method. This result also gives us a first-principle explanation for the success of proposition 4.3 in [14].
The outline of this article is as follows. In section 2 we review the connection between the strong twist limit of N = 4 SYM and the eclectic spin chains introduced in [23]. In section 3 we recall some results on non-diagonalisable matrices, generalised eigenvectors and the Jordan normal form. In the following sections we describe how to extract the generalised eigenvectors at an exceptional point from the eigenvectors of nearby diagonalisable matrices. In section 6 we apply our results to Bethe vectors of the eclectic spin chain Hamiltonian. In section 7 we present some conclusions and future directions.

Strongly twisted N = 4 Super-Yang-Mills and eclectic spin chains
A very well-known procedure to compute the conformal dimension of operators in N = 4 SYM is to construct an effective spin chain whose Hamiltonian acts in the same fashion as the dilatation operator acts on single-trace operators. A detailed review of this method can be found in [27].
This procedure can be extended to the γ_i-deformation, and thus to the strong twist deformation of N = 4 SYM. The details can be found in [22,23,24,26]. Here we will follow the expressions from [24]. In this article, we will care only about single-trace operators made of the three scalar fields with no derivatives. If we identify the scalar field φ_i with the spin state |i⟩, the dilatation operator acting on a single-trace operator involving a total of L scalar fields takes the form of a nearest-neighbour spin-chain Hamiltonian, where D_0 is the bare dimension of the operator (which is equal to L for scalar operators), and P̂_{a,b} is an operator that acts non-trivially on sites a and b, with all the remaining matrix elements equal to zero. In addition, as we are interested in single-trace operators, we will be working with closed spin chains with the periodic identification L + 1 ≡ 1.
If we consider the single trace operator consisting solely of φ 1 as the vacuum state of the effective spin chain, scalars φ 2 behave as right-moving excitations while scalars φ 3 behave as left-moving excitations.
We will denote the total number of excitations, i.e. the number of φ_2 and φ_3 fields in the operator, by M, and the total number of φ_3 fields in the operator by K. Without any loss of generality, we can write the twisted Hamiltonian H̃ in terms of the operator P̂_{a,b}, which acts non-trivially on sites a and b. Although this Hamiltonian is Hermitian only if the twist parameters are complex phases, it is diagonalisable for generic values of said parameters (see footnote 2 in [24]).
As we have commented in the introduction, we can recover the eclectic spin chain Hamiltonian by considering the strongly twisted limit of H̃, but we are not able to span the complete Hilbert space with the vectors generated through the limit ǫ → 0 of the eigenvectors of H̃. If one computes the Bethe vectors for finite values of the deformation parameters and computes their ǫ → 0 limit, these vectors coalesce to locked states, where L is the total number of sites and U is the operator that shifts the configuration by one site to the left, i.e., U |n_1, . . . , n_{L−1}, n_L⟩ = |n_2, . . . , n_L, n_1⟩. Due to their form, with the excitations unable to move from their relative places, these states have been called locked states. Although we can check that they are eigenvectors of the Hamiltonian Ĥ_{ξ_1,ξ_2,ξ_3}, we can also check that they cannot be all the eigenvectors of this Hamiltonian. Thus, naïvely computing the limit at the level of the eigenvectors not only gives us zero information about the generalised eigenvectors of the Hamiltonian, but it also does not provide us with the full set of eigenvectors.
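As a small sketch of the shift operator just defined (our own encoding of basis states as tuples of flavours, not the paper's notation), U acts cyclically on a configuration and satisfies U^L = 1 on a chain of L sites:

```python
# Minimal sketch (hypothetical encoding): a basis state of the closed chain is a
# tuple of "flavours" (1 = vacuum phi_1; 2 and 3 = excitations), and U shifts the
# configuration one site to the left with the periodic identification L + 1 = 1,
# i.e. U|n1, ..., nL> = |n2, ..., nL, n1>.
def U(state):
    return state[1:] + state[:1]

state = (1, 2, 3)            # L = 3, one phi_2 and one phi_3 excitation
orbit = [state]
while (s := U(orbit[-1])) != state:
    orbit.append(s)

print(orbit)                 # the three cyclic shifts of the configuration
assert len(orbit) == 3       # applying U L times returns the state: U^L = 1
```

Locked states are then (twisted) superpositions over such a cyclic orbit, which is why the relative positions of the excitations stay fixed.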
In the following sections we will show that all we have to do to retrieve this information is to compute limits of linear combinations of eigenvectors rather than just computing the limit of eigenvectors.

Non-diagonalisable matrices, generalised eigenvectors and Jordan normal form
In this section, we review some key concepts about non-diagonalisable matrices. The aim is to introduce the notation we will use throughout this article and highlight some results that will be relevant.
Given a matrix M, we say that λ_i is an eigenvalue of M with algebraic multiplicity n_i if it is a zero of degree n_i of the characteristic polynomial of M, that is, if

det(M − λ I) = ∏_j (λ_j − λ)^{n_j} ,

where I is the identity matrix. We say that v_i is an eigenvector of M associated with the eigenvalue λ_i if

M v_i = λ_i v_i .

The total number of linearly independent vectors that fulfil this equation, i.e. the dimension of Ker(M − λ_i I), is called the geometric multiplicity of λ_i. It is easy to check that the geometric multiplicity is always smaller than or equal to the algebraic multiplicity.
A matrix is non-diagonalisable or defective if the geometric multiplicity of one or more of its eigenvalues is smaller than its algebraic multiplicity. This means that there exist fewer eigenvectors than we expected from the characteristic polynomial. As we cannot diagonalise this kind of matrix, the next best thing we can do in this situation is to "make up for the missing vectors" in some fashion. This is done by means of Jordan chains. Given a defective eigenvalue λ_i and an eigenvector v^{(1)}_{i,α} associated with it, we define the generalised eigenvector of rank n as the vector fulfilling

(M − λ_i I) v^{(n)}_{i,α} = v^{(n−1)}_{i,α} ,

where the index α labels the possible geometric multiplicity of the eigenvalue λ_i. For simplicity, we will usually drop the (1) superindex for eigenvectors. It is trivial to prove that the generalised eigenvectors of rank n satisfy (M − λ_i I)^n v^{(n)}_{i,α} = 0. In addition, the vectors that form a Jordan chain are linearly independent but not necessarily orthogonal.
Thanks to these additional linearly independent vectors, we can write a similarity transformation that changes a given square matrix into a block-diagonal matrix in which each block has a single eigenvalue λ_i on the diagonal and ones on the superdiagonal. This matrix is called the Jordan normal form of M, and each block is called a Jordan block or Jordan cell.¹ The size of each Jordan cell is equal to the number of vectors that form the Jordan chain associated with it.
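These definitions can be checked symbolically. A minimal sketch (our own 3×3 example, not from the paper) computes the Jordan normal form of a defective matrix and verifies the Jordan chain relation (M − λ_i I) v^{(n)} = v^{(n−1)}:

```python
import sympy as sp

# A defective 3x3 example: the eigenvalue 0 has algebraic multiplicity 3 but
# geometric multiplicity 1, so there is a single Jordan cell of size 3.
M = sp.Matrix([[0, 1, 0],
               [0, 0, 1],
               [0, 0, 0]])

P, J = M.jordan_form()          # similarity transformation: M = P * J * P**-1
print(J)                        # Jordan normal form: one 3x3 cell for lambda = 0
assert M == P * J * P.inv()

# Jordan chain for lambda = 0: (M - 0*I) v^(n) = v^(n-1), ending in zero.
v1 = sp.Matrix([1, 0, 0])       # true eigenvector (rank 1)
v2 = sp.Matrix([0, 1, 0])       # generalised eigenvector of rank 2
v3 = sp.Matrix([0, 0, 1])       # generalised eigenvector of rank 3
assert M * v1 == sp.zeros(3, 1) and M * v2 == v1 and M * v3 == v2
```

Here the chain v1 ← v2 ← v3 is exactly the set of columns of P, illustrating that the size of the cell equals the length of the chain.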
Notice that, similarly to what happens in diagonalisable matrices with degenerate eigenvalues, a linear combination of two generalised eigenvectors of rank p associated with the same eigenvalue is also a generalised eigenvector of rank p, but the generalised eigenvector of rank p − 1 it is associated with changes. On the other hand, the linear combination of a generalised eigenvector of rank p and a generalised eigenvector of rank 1 associated with the same eigenvalue is also a generalised eigenvector of rank p, and it is associated with the same generalised eigenvector of rank p − 1 as the original.
Among the properties of Jordan chains, in this article we will make extensive use of the following theorem.

Theorem 3.1. The number of generalised eigenvectors of rank l is never larger than the number of generalised eigenvectors of rank n, for any n < l; in particular, it is never larger than the number of true eigenvectors.
Proof. Every generalised eigenvector of rank l belongs to Ker (M − λI)^l. However, some of the vectors w in Ker (M − λI)^l only fulfil this property because (M − λI)^{l−1} w = 0 already, and should be disregarded. This means that the vector space Ker (M − λI)^l / Ker (M − λI)^{l−1} is mapped injectively by (M − λI) into Ker (M − λI)^{l−1} / Ker (M − λI)^{l−2}, which gives us the previous result by comparing their dimensions. The second part of the theorem can be proven similarly.

¹ Jordan normal forms can also be constructed for matrices of generic dimension N × M; however, we will only consider square matrices in this article.
We should stress again that, despite being linearly independent, the generalised eigenvectors of a non-diagonalisable matrix do not in general form an orthogonal basis. It is useful at this point to introduce left eigenvectors, i.e. row vectors v̄_i fulfilling v̄_i M = λ_i v̄_i. Left and right eigenvectors coincide for Hermitian matrices, but that is not the case for non-Hermitian ones. In fact, the left and right eigenvectors associated with the same Jordan block are orthogonal. This property is usually called self-orthogonality, and it can be used to find exceptional points.
By introducing the concept of left eigenvectors, we can generalise the orthogonality relations that hold for Hermitian matrices to the biorthogonality relations

v̄^{(p)}_{i,α} · v^{(q)}_{j,β} ∝ δ_{ij} δ_{αβ} δ_{p+q, n_{j,α}+1} ,

where n_{j,α} is the size of the Jordan block associated with the α-th geometric multiplicity of λ_j.
Proof. On the one hand, we have that a left and a right eigenvector associated with two different eigenvalues are orthogonal. The proof is similar to the proof of the orthogonality of eigenvectors of Hermitian matrices: as we are considering λ_i ≠ λ_j, the only possibility is that v̄_{i,α} and v_{j,β} are orthogonal. Now, we need to generalise this result to encompass generalised eigenvectors. To that end, we need two relations, obtained by acting with (M − λ I) on left and right generalised eigenvectors respectively. With these relations, we can show by induction that a set of left generalised eigenvectors is orthogonal to a set of right generalised eigenvectors if they are associated with different eigenvalues λ_i ≠ λ_j.
The next step is to consider what happens with generalised eigenvectors associated with the same eigenvalue. For that, we have to consider the overlaps between left generalised eigenvectors of rank k and right generalised eigenvectors of rank l. If we consider the case of k = p − 1 and l > 0, we find that an eigenvector is orthogonal to all the generalised eigenvectors except for the ones of maximal rank. If the geometric multiplicity of the eigenvalue is one, we are done: we have proven that the left eigenvector v̄ is orthogonal to every generalised eigenvector of non-maximal rank, with all the remaining overlaps giving us zero. If the geometric multiplicity is different from one, the proof is similar but requires a process akin to Gram-Schmidt orthogonalisation, which takes advantage of the fact that the linear combination of a generalised eigenvector of rank p and a generalised eigenvector of rank 1 associated with the same eigenvalue is also a generalised eigenvector of rank p.
We should emphasise that biorthogonality does not fix the left generalised eigenvectors v̄^{(p)}_{i,α} in terms of the right ones, due to the generalised eigenvectors not being an orthogonal basis. However, there are two important exceptions.

Corollary 3.2.1. The following equalities between left and right generalised eigenvectors hold.

Proof. Because the generalised eigenvectors form a basis, we can expand a left generalised eigenvector in terms of right generalised eigenvectors. If we apply biorthogonality, we find that the expansion coefficients m have to satisfy a set of relations. As the generalised eigenvectors are not orthogonal, we cannot set m to a Kronecker delta.
The only exception to the above statement are the generalised eigenvectors of rank 1, that is, the eigenvectors. As the linear combination of a generalised eigenvector of rank q and an eigenvector is a generalised eigenvector of rank q (notice that this will require linearly combining the generalised eigenvector of rank q + 1, if it exists, with a generalised eigenvector of rank 2, so that equation (3.3) keeps holding for this generalised eigenvector), we can linearly combine generalised eigenvectors in such a way that they are orthogonal to all the generalised eigenvectors of rank 1. Using this result, we can set m to a Kronecker delta when p = 1, fixing the corresponding left generalised eigenvectors. Expanding a right generalised eigenvector in terms of left generalised eigenvectors and repeating the same steps gives us the other relation.
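The diagonalisable case of the biorthogonality relations can be illustrated numerically. In the sketch below (our own random example, not the paper's spin chain), the right eigenvectors of a non-Hermitian matrix M are obtained from M and the left eigenvectors from its transpose; the pairing v̄_i · v_j, with no complex conjugation, is then proportional to δ_ij:

```python
import numpy as np

# Random diagonalisable non-Hermitian matrix (generic: all eigenvalues distinct).
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

lam_r, V = np.linalg.eig(M)        # columns of V: right eigenvectors
lam_l, W = np.linalg.eig(M.T)      # columns of W: (transposed) left eigenvectors

# Reorder the left eigenvectors so column i matches the right eigenvalue lam_r[i].
order = [int(np.argmin(np.abs(lam_l - l))) for l in lam_r]
W = W[:, order]

G = W.T @ V                        # Gram matrix of overlaps vbar_i . v_j
off = G - np.diag(np.diag(G))
print(np.max(np.abs(off)))         # ~ 0: left-right biorthogonality
```

Note that the diagonal of G is not 1: biorthogonality only fixes the overlaps up to normalisation, consistent with the discussion above.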
The generalised eigenvectors also satisfy a completeness relation (a resolution of the identity), where J is the total number of different eigenvalues, m^g_j is the geometric multiplicity of the eigenvalue λ_j, and n_{j,α} is the size of the Jordan cell associated with the α-th eigenvector of λ_j.
The proof of this corollary is immediate using the definition of generalised eigenvectors and the biorthogonality property.
In the context of non-Hermitian matrices, there exist two different definitions for the norm of an eigenvector. On the one hand, we can define it using the usual inner product in a complex vector space, that is, |v_{i,α}|² = v†_{i,α} · v_{i,α}. This is the definition we will be using in this article. On the other hand, we can define it using left and right eigenvectors, v̄_{i,α} · v_{i,α}. Although we will not use this definition in this article, it is extremely useful for finding exceptional points of non-Hermitian matrices, as this norm is non-vanishing for diagonalisable matrices but vanishes for non-diagonalisable ones due to self-orthogonality.
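The contrast between the two norms can be made concrete on the same 2×2 toy model used earlier (our illustration, not the paper's): for M(ǫ) = [[0, 1], [ǫ, 0]] the right eigenvector is v ∝ (1, √ǫ) and the left eigenvector is v̄ ∝ (√ǫ, 1), so the biorthogonal norm v̄ · v ∝ 2√ǫ vanishes at the exceptional point ǫ = 0, while the usual norm v† · v stays finite:

```python
import numpy as np

# Toy model M(eps) = [[0, 1], [eps, 0]]: analytic left and right eigenvectors
# for the eigenvalue +sqrt(eps), both normalised in the usual inner product.
def norms(eps):
    s = np.sqrt(eps)
    v = np.array([1.0, s]) / np.sqrt(1 + eps)     # right eigenvector
    vbar = np.array([s, 1.0]) / np.sqrt(1 + eps)  # left eigenvector (row)
    return vbar @ v, v @ v                        # (biorthogonal norm, usual norm)

for eps in (1e-2, 1e-4, 1e-6):
    bi, usual = norms(eps)
    print(eps, bi, usual)   # bi -> 0 (self-orthogonality), usual stays 1
```

Tracking the vanishing of the biorthogonal norm as a parameter is varied is precisely the practical test for exceptional points mentioned above.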

Exceptional point with distinguishable Jordan blocks
In this section, we will consider the case where several eigenvectors coalesce in such a way that the Hamiltonian at the exceptional point is composed of Jordan blocks with pairwise different eigenvalues.
We will show first the particular case of the Hamiltonian associated with an eclectic spin chain of length 3 and two excitations with different flavour. All the Jordan blocks in this case are of size two, so a slight modification of proposition 4.3 from [14] is enough to compute everything. After this example, we will present a method that works for any defective matrix with Jordan cells of any size and pairwise different eigenvalues.

A specific example: a three-site chain with three different excitations
As a warm-up example, we will consider the simplest case in which the strongly twisted limit of the Hamiltonian becomes defective: a chain of three sites with one field of each flavour. The general wavefunction in this sector can be written as a sum over the permutations of the three fields, where S_3 is the symmetric group over a set of three elements. The twisted Hamiltonian (2.3) transforms even elements of S_3 into odd elements of S_3. Thus, it can be represented as an anti-block-diagonal matrix H̃. This matrix can be diagonalised for generic values of the twist parameters, with eigenvalues λ^{±}_i. Notice that the sums include the case i = j. Due to the rescaling needed to keep the Hamiltonian finite in the strongly twisted limit, these three eigenvalues vanish in said limit, E^{±}_i = lim_{ǫ→0} ǫ λ^{±}_i = 0. Let us consider the eigenvectors associated with the eigenvalues λ^{±}_1, as they are the simplest ones; when properly normalised, they take a form with all the ± signs correlated. Substituting q_i → ξ_i/ǫ, it is easy to check that both vectors become the same up to a sign in the limit of large twist. A similar situation occurs for the other four eigenvectors: they become pairwise proportional in the limit of vanishing ǫ. This happens because the Hamiltonian becomes strictly upper triangular in this limit and, thus, defective. In fact, the Hamiltonian becomes similar to a matrix composed of three Jordan blocks with eigenvalues equal to zero, and the three vectors we obtain through this procedure coincide with the true eigenvectors of this matrix.
After taking the large twist limit, we lose access to half of our Hilbert space, as this example is a particular case of the coalescence of eigenvectors shown in [24]. We will argue below that this is not strange, because the three remaining vectors of the Hilbert space are not eigenvectors of the strongly twisted Hamiltonian, but generalised eigenvectors of rank 2. This raises the question of whether we can extract these generalised eigenvectors at large twist from the eigenvectors at finite twist.
A similar situation was observed in the context of open spin chains with U_q(sl(2)) symmetry, which become defective when q becomes a root of unity. In that case, it was shown that the generalised eigenvectors can be recovered [14]. To do so, the authors did not only have to consider the limit of the eigenvectors of the Hamiltonian, but also the limit of the eigenvectors after subtracting their leading contribution and multiplying by the appropriate factor to make the limit non-vanishing. Although the method proposed in that article works for the case of Jordan blocks of size two, it has some issues if we try to extend it to larger Jordan blocks. We will show that the correct method involves the difference of two eigenvectors instead. For the example we are considering here, let us take again the two simplest eigenvectors and compute the limit of their normalised difference. Although in this particular case all dependence on the twist parameters disappears after normalising, this will not happen in general. We can check that the vector we have obtained through this procedure is linearly independent of the eigenvector (1, 1, 1, 0, 0, 0) we obtained previously. Not only that, but we can also check that it is the generalised eigenvector of rank 2 of the strongly twisted Hamiltonian associated with it, up to a scalar factor. If similar computations are done for the remaining coalescing pairs of eigenvectors, we will find the two remaining generalised eigenvectors.
As we said above, in this section we will focus on the case of distinguishable Jordan blocks. By that, we mean that all the eigenvalues have geometric multiplicity one. We should point out that the example we have studied might not seem of this type, as the three Jordan blocks have vanishing energies, but it is. Although we cannot distinguish the three Jordan blocks through the energy, they are associated with different values of the total (twisted) momentum operator. The exponential of the momentum operator is the shift operator U we introduced previously. We can check that it takes different values in the three different Jordan blocks. In addition, U commutes with the Hamiltonian Ĥ, so vectors with different momenta cannot mix and the three Jordan blocks are effectively distinguishable.

The defective limit of a diagonalisable matrix: distinguishable Jordan blocks
Although the observation that generalised eigenvectors could be obtained as the next-to-leading order correction of coalescing eigenvectors when we compute the defective limit of a matrix is not completely new, there does not seem to be a first-principle justification for it in the literature. In the following lines we plan to show that the reason why the (apparently ad hoc) method from [14] works is not accidental, but it is based on linear algebra and analyticity results.
Let us consider an N × N complex matrix M(ǫ) that depends on a parameter ǫ. We will assume that such a matrix is diagonalisable for all values of ǫ except for a finite set of values. In particular, we will assume that the matrix is not diagonalisable for ǫ = 0, where it becomes a defective matrix similar to a single Jordan block of size N. Outside its exceptional points, we will denote the eigenvalues and eigenvectors of M(ǫ) by λ_i and v_i respectively. Although they depend explicitly on ǫ, we will not write that dependence most of the time, as it would clutter our expressions.
Instead of computing the eigenvalue, eigenvector and generalised eigenvectors of M(0) directly, we want to see if we can obtain the same information by computing the ǫ → 0 limit of the eigenvalues and eigenvectors of the diagonalisable matrix M(ǫ). As we are assuming that M(0) is similar to a single Jordan block, all its eigenvalues have to coincide, and therefore

lim_{ǫ→0} (λ_i − λ_j) = 0

for any two eigenvalues λ_i and λ_j of M(ǫ).
The fact that all eigenvalues approach the same value as the matrix becomes defective has important implications for the eigenvectors. If we consider the limit of the eigenvector equation, we find that the limit of every eigenvector lies in the kernel of M(0) − I lim_{ǫ→0} λ_i. As M(0) is similar to a single Jordan block, this kernel is one-dimensional, so all the eigenvectors of the diagonalisable matrix M(ǫ) have to approach the same vector (up to a possible normalisation factor) when ǫ approaches zero. Therefore, the "collapse of Bethe vectors" described in [24] is just a natural consequence of the matrix becoming of Jordan form. Nevertheless, we should stress that belonging to this kernel is only a necessary condition on the limit. Although this has no consequence now, because the kernel of [M(0) − I lim_{ǫ→0} λ_i] is one-dimensional, it will pose a problem when two or more Jordan blocks have the same eigenvalue.
After obtaining information about eigenvalues and eigenvectors, we may wonder if it is also possible to find the generalised eigenvectors by computing limits of eigenvectors. To answer this question, we first have to realise that a linear combination of eigenvectors fulfils

[M(ǫ) − λ_1 I][M(ǫ) − λ_2 I] (α_1 v_1 + α_2 v_2) = 0

for any constants α_1 and α_2, which may depend on ǫ. This equation can be immediately generalised to more general linear combinations. Although it may seem weird to consider linear combinations of eigenvectors associated with different eigenvalues, we need to keep in mind that said eigenvalues become identical in the limit we are interested in. In fact, when we compute the limit of the above equation, we find

[M(0) − λ_i I]² lim_{ǫ→0} (α_1 v_1 + α_2 v_2) = 0 ,

where λ_i is any of the eigenvalues of M(ǫ). This equation implies that the limit of the linear combination contains information regarding the generalised eigenvector of rank 1 (the true eigenvector) and the generalised eigenvector of rank 2. That is, we need to consider the limit of linear combinations of eigenvectors whose associated eigenvalues approach the same eigenvalue if we want to find information about generalised eigenvectors.
Despite equation (4.12) assuring us that the linear combination of eigenvectors contains information about the generalised eigenvector of rank 2, this information is not immediately accessible. For generic values of the coefficients α_i, the limit of the linear combination is just a generalised eigenvector of rank 1.
This happens because

[M(0) − λ I] lim_{ǫ→0} (α_1 v_1 + α_2 v_2) = lim_{ǫ→0} α_2 (λ_2 − λ_1) v_2 = 0 ,

where the last equality comes from the two eigenvalues coinciding at the exceptional point. Thus, the limit of the linear combination is proportional to the true eigenvector of M(0). If we want to find the generalised eigenvector of rank 2, we need to find vectors that belong to Ker [M(0) − λ I]² but not to Ker [M(0) − λ I]. This means that we need to make the right-hand side of (4.13) non-vanishing, which can be accomplished by considering a coefficient α_2 that diverges as (λ_2 − λ_1)^{−1}.
Repeating the argument with [M(ǫ) − λ_2 I], we find that α_1 has to diverge in the same fashion. The symmetric choice α_1 = −α_2 = (λ_2 − λ_1)^{−1} is equally effective, as both quantities vanish at the same rate,² and will prove more useful for later generalisation. For this vector to be well-defined, we need that lim_{ǫ→0} (λ_2 − λ_1)^{−1} (v_2 − v_1) exists and is non-vanishing. Inverting the argument, if we find two coefficients α_1 and α_2, divergent in the limit but such that the linear combination α_1 v_1 + α_2 v_2 has a finite and non-vanishing limit, this limit gives us the generalised eigenvector of rank 2.
Furthermore, we have to take into account that, even if two vectors that coalesce are normalised to the same value, they can still differ by a phase factor, i.e., lim_{ǫ→0} v_i = e^{iφ} lim_{ǫ→0} v_j. Consider then the two linear combinations w^{±}_{ij}, built from the sum and the difference of the phase-aligned normalised eigenvectors, where |v|² = v† · v is the usual vector norm. We can check that lim_{ǫ→0} w^{+}_{ij} is equal (up to a phase factor) to lim_{ǫ→0} v_i, while lim_{ǫ→0} w^{−}_{ij} becomes a 0/0 indeterminate form. Thus, the latter provides the generalised eigenvector of rank 2 when we apply L'Hôpital's rule. This result provides a first-principle explanation for proposition 4.3 in [14].
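The mechanism can be checked numerically on a toy model of our own (not the spin chain): M(ǫ) = [[0, 1], [ǫ, 0]] has eigenvalues ±√ǫ and eigenvectors v_± = (1, ±√ǫ). Both eigenvectors collapse to the eigenvector (1, 0) of M(0) = [[0, 1], [0, 0]], while their difference divided by the eigenvalue gap tends to the rank-2 generalised eigenvector (0, 1):

```python
import numpy as np

# Analytic eigenvectors of M(eps) = [[0, 1], [eps, 0]] (phases already aligned),
# and the "w minus" combination: the difference over the eigenvalue gap.
def jordan_chain(eps):
    s = np.sqrt(eps)
    v_plus, v_minus = np.array([1.0, s]), np.array([1.0, -s])
    w_minus = (v_plus - v_minus) / (2 * s)   # divided difference of eigenvectors
    return v_plus, w_minus

v, w = jordan_chain(1e-12)
M0 = np.array([[0.0, 1.0], [0.0, 0.0]])
print(np.round(v, 5), np.round(w, 5))              # -> [1. 0.] and [0. 1.]
assert np.allclose(M0 @ w, np.array([1.0, 0.0]))   # (M0 - 0*I) w = eigenvector
```

The last assertion verifies the Jordan chain relation: the 0/0 combination lands exactly on the rank-2 generalised eigenvector of the defective limit.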
We should stress that, by applying the same reasoning we used for the collapse of the eigenvectors, every lim_{ǫ→0} w^{−}_{ij} is equal to the generalised eigenvector of rank two regardless of the vectors v_i and v_j we used. This is consistent with theorem 3.1, which implies that having exactly one true eigenvector means that we can have one generalised eigenvector of rank 2 at most.
From this result it is evident how to continue forward: if we want to find generalised eigenvectors of higher rank, we need to consider linear combinations involving a higher number of eigenvectors of M(ǫ). This is because the limit of a linear combination of n eigenvectors fulfils

[M(0) − λ I]^n lim_{ǫ→0} (α_1 v_1 + · · · + α_n v_n) = 0

for any constants α_i, meaning that the limit of these kinds of linear combinations has information on generalised eigenvectors up to rank n. In addition, the previous method for obtaining the generalised eigenvector of rank 2 can be immediately generalised by substituting the vectors v_i by the appropriate vectors w^{(n−1)}_{ij}. We should be careful, because the ǫ → 0 limit of w^{(n)} does not give us the generalised eigenvector of rank n + 1, but a linear combination of generalised eigenvectors up to rank n + 1 (with a non-vanishing coefficient for the generalised eigenvector of rank n + 1). The reason behind it is that we are actually solving the weaker condition [M(0) − λ I]^{n+1} w = 0. In addition, notice that we can only repeat this process a finite number of times, because M(ǫ) only has N independent eigenvectors. This is expected, as M(0) can have only N generalised eigenvectors.
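The higher-rank generalisation can be illustrated on our own 3×3 toy model: the companion matrix M(ǫ) with M(ǫ)³ = ǫ I has eigenvalues ǫ^{1/3} ω^k and eigenvectors v_k = (1, λ_k, λ_k²), and iterated divided differences of the eigenvectors reproduce the full Jordan chain of the ǫ = 0 limit, a single 3×3 Jordan cell:

```python
import numpy as np

# Companion matrix [[0,1,0],[0,0,1],[eps,0,0]]: eigenvalues are the cube roots
# of eps, and the analytic eigenvectors are v_k = (1, lam_k, lam_k**2).
# First divided differences approximate the rank-2 vector, and the second
# divided difference approximates the rank-3 vector of M(0).
def chain(eps):
    lam = eps ** (1.0 / 3.0) * np.exp(2j * np.pi * np.arange(3) / 3)
    v = np.array([[1.0, l, l ** 2] for l in lam])   # rows: eigenvectors
    w1 = (v[0] - v[1]) / (lam[0] - lam[1])          # rank-2 candidate
    w1b = (v[1] - v[2]) / (lam[1] - lam[2])
    w2 = (w1 - w1b) / (lam[0] - lam[2])             # rank-3 candidate
    return v[0], w1, w2

v, w1, w2 = chain(1e-9)
M0 = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=complex)
assert np.allclose(M0 @ w1, v, atol=1e-2)    # (M0 - 0*I) w1 ~ the eigenvector
assert np.allclose(M0 @ w2, w1, atol=1e-2)   # (M0 - 0*I) w2 ~ the rank-2 vector
```

In exact arithmetic w1 = (0, 1, λ_0 + λ_1) and w2 = (0, 0, 1), so as ǫ → 0 the divided differences converge to e_2 and e_3, the generalised eigenvectors of rank 2 and 3 of the Jordan cell.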
We still have to answer the following three questions regarding this recipe for reconstructing the Jordan block: Are all the vectors we obtain non-zero? Are they all independent? Do we obtain all the generalised eigenvectors?
The limit lim ǫ→0 w (n) ij is neither zero nor divergent, and it is orthogonal to any lim ǫ→0 w (m) kl with m ≠ n. Proof. The proof of the first statement goes as follows: the vectors w (n) ij are normalised for any value of ǫ. Because addition and squaring commute with computing limits, the limit of the norm is equal to the norm of the limit. Thus, lim ǫ→0 w (n) ij is also normalised to 1.
Let us now prove the orthogonality statement, starting with the generalised eigenvectors of rank 1 and 2. For that, we need to compute the scalar product between the limits of w (1) ij and v j , (lim ǫ→0 v j ) † (lim ǫ→0 w (1) ij ). Instead of this limit, it is simpler to compute lim ǫ→0 lim ǫ ′ →ǫ (v j (ǫ)) † (w (1) ij (ǫ ′ )), which coincides with the former whenever the limit exists. As the second limit is smooth, the limits of w (1) ij and v j are orthogonal provided the vectors w (1) ij (ǫ) and v j (ǫ) are orthogonal for every ǫ. Using (4.16), we get the scalar product explicitly. If we now substitute the definition of β ji and use the fact that the eigenvectors are normalised, the numerator vanishes identically. Thus, the scalar product vanishes for every ǫ, and so does its ǫ → 0 limit.
By similar means, we can prove the orthogonality between the remaining pairs of vectors w, as the corresponding scalar product vanishes identically.
The remaining scalar products are proven to vanish by induction: using the definition of w (n) ij , it is easy to show this in a similar fashion. Proof. As the limits lim ǫ→0 w (n−1) ij are independent of the vectors used to construct them, we can take one representative for each w (n) ij . By the previous proposition, the N vectors that generate this space are all non-zero and orthogonal. This proves the first part of the corollary.
By construction, the vector lim ǫ→0 w (n−1) ij is a linear combination of generalised eigenvectors up to rank n. As we have only one true eigenvector, we can establish an isomorphism between Ker{(M (0) − λI) n } and Span{lim ǫ→0 w (m) ij }, proving the second part of the corollary.
Thus, we can claim that our method for constructing generalised eigenvectors is complete, meaning that it gives us all the generalised eigenvectors of M (0) and indicates their rank.
Despite considering here the case of a single Jordan block, the results of this section hold without any modification for any matrix with several Jordan blocks, as long as all the eigenvalues have geometric multiplicity one, i.e., the eigenvalues associated with the different Jordan blocks are pairwise distinct. In addition, it is easy to check that the dimension of each Jordan block is equal to the number of eigenvectors that collapse to the same vector.
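As a consistency check independent of the limiting procedure, the block structure for a given eigenvalue can also be read off from the kernel dimensions of powers of M(0) − λI. The following sketch is our own illustration (the 4 × 4 matrix is hypothetical, not a matrix from the text):

```python
import numpy as np

def jordan_profile(M0, lam, tol=1e-9):
    """Return counts[k-1] = number of Jordan blocks of size k for eigenvalue
    lam, using d_k = dim Ker (M0 - lam*I)^k: there are d_k - d_{k-1} blocks
    of size >= k, hence (d_k - d_{k-1}) - (d_{k+1} - d_k) blocks of size k."""
    n = M0.shape[0]
    A = M0 - lam * np.eye(n)
    d = [0]
    P = np.eye(n)
    for _ in range(n):
        P = P @ A
        d.append(n - np.linalg.matrix_rank(P, tol=tol))
    geq = [d[k] - d[k - 1] for k in range(1, n + 1)]  # blocks of size >= k
    geq.append(0)
    return [int(geq[k] - geq[k + 1]) for k in range(n)]

# Hypothetical example: one block of size 3 and one of size 1, eigenvalue 2
J = np.diag([2.0] * 4)
J[0, 1] = J[1, 2] = 1.0
print(jordan_profile(J, 2.0))   # -> [1, 0, 1, 0]
```

Comparing such a profile with the number of eigenvectors that collapse to the same vector gives a quick numerical test of the statement above.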

How to compute the limit
At this point, we have laid down an algorithm to find all the generalised eigenvectors of a defective matrix by perturbing it into a diagonalisable one and computing the vanishing-perturbation limit of linear combinations of its eigenvectors. Although the expressions we have provided for these linear combinations were well suited to proving the completeness of the method, they are less convenient for actual computations. In particular, they require us to compute scalar products of eigenvectors and norms of linear combinations of eigenvectors, which are usually far from straightforward in the context of the Bethe ansatz. Here we will provide some alternative expressions for the vectors w (n) ij that are simpler for computational purposes.
The first simplification concerns the norm in the denominator of (4.16). The main purpose of this norm is to provide the correct power of ǫ for the vector w (n) ij to have a non-vanishing ǫ → 0 limit, as we are free to normalise our vectors however suits us best. Thus, instead of computing the full norm, we are allowed to substitute it by the appropriate power of ǫ, which is much easier to compute.
Regarding the numerator of (4.16), we should recall the vanishing condition lim ǫ→0 (λ 2 − λ 1 ) = 0 satisfied by the coalescing eigenvalues, which we used when constructing the w (1) ij vectors. Thus, we can choose the numerator to be a linear combination of eigenvectors with finite coefficients (with the restriction that these coefficients do not vanish when ǫ → 0) that vanishes in the ǫ → 0 limit. Nevertheless, to make our computations even simpler, we may choose this linear combination to be orthogonal to the generalised eigenvector computed in the previous step. Notice that, if this is done at every step, the linear combination will be orthogonal to all the generalised eigenvectors computed in previous steps. We should stress again that what we obtain after these simplifications is not actually a generalised eigenvector, but a generalised eigenvector up to a linear combination of the vectors obtained in previous steps. This is not a bad trade-off, as the algorithm computes solutions to the eigenvector problem associated with the matrix (M − λ i I) n , so it is entirely possible that the results we were obtaining were already generalised eigenvectors up to a linear combination of the vectors obtained in previous steps.
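This ǫ-power shortcut can be illustrated numerically. The sketch below is our own illustration with a hypothetical 3 × 3 regularised Jordan block (not a matrix from the text): we pick coefficients that make the combination of eigenvectors vanish as ǫ → 0, and divide by the appropriate power of ǫ instead of the full norm.

```python
import numpy as np

lam, eps = 0.0, 1e-6

# Hypothetical illustration: a 3x3 Jordan block regularised by eps in the
# lower-left corner; eigenvalues are lam + eps^(1/3) * (cube roots of unity).
M = np.array([[lam, 1, 0],
              [0, lam, 1],
              [eps, 0, lam]], dtype=complex)

vals, vecs = np.linalg.eig(M)
vecs = vecs / vecs[0, :]              # normalise the first component to 1
s = vals - lam                        # eigenvectors are (1, s_k, s_k^2)

# Coefficients c with sum(c) = 0 and sum(c*s) = 0: the combination of the
# three eigenvectors then vanishes as eps -> 0 at order eps^(2/3).
c = np.linalg.svd(np.vstack([np.ones(3), s]))[2][-1].conj()
w = vecs @ c

# Divide by the appropriate power of eps instead of computing the full norm.
w_rank3 = w / eps ** (2 / 3)
print(np.round(np.abs(w_rank3), 6))   # finite, supported on the last component
```

The surviving direction is the highest-rank generalised eigenvector of the unperturbed Jordan block, obtained without ever evaluating a norm of a linear combination.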
Finally, we want to stress that we have to compute the limit of a linear combination of eigenvectors of the perturbed diagonalisable matrix. If we instead compute the limit of the difference between an eigenvector of the diagonalisable matrix and a linear combination of generalised eigenvectors of the defective matrix, the algorithm might give us the wrong generalised eigenvector, because in general the two procedures do not agree. We can see the consequences of this very clearly in the following example. Consider the matrix with the eigenvalues and eigenvectors given below. We can check that lim ǫ→0 v i = (1, 0, 0) for the three vectors. If we now move to w (1) , we find that, by considering the difference between an eigenvector and the limit of another eigenvector, we would wrongly conclude that there are two generalised eigenvectors of rank 2 instead of just one, which contradicts theorem 3.1 because we have only one true eigenvector.
Although the authors of [14] use the difference between a vector and the limit of a second vector to compute generalised eigenvectors, there is no issue with it in their context. The reason is that all their Jordan blocks have size two, so there are not enough generalised eigenvectors to create the "misplacement" we have seen in this example.

Exceptional point with degenerate Jordan blocks
The algorithm from the previous sections works perfectly as long as the matrix we are considering is composed of Jordan blocks associated with different eigenvalues at the exceptional point. This is thanks to there being exactly one generalised eigenvector of a given rank for each eigenvalue. However, the generalisation to the case where two or more Jordan cells are associated with the same eigenvalue can give rise to issues, because we lack this uniqueness.
As in the previous section, we will first examine a particular example that captures most of the peculiarities of the problem, and then discuss in depth how to generalise the results from the previous section.

A specific example: five-sites chain with three different excitations and zero total momentum
In this section, we take a look at the simplest case where the strongly twisted limit of the Hamiltonian (2.3) develops more than one Jordan block for the case where the total momentum of the excitations vanishes (that is, eigenstates of the operator U with eigenvalue 1). This corresponds to a spin chain of length 5 with two excitations of one flavour and one of the other (L = 5, M = 3, K = 1). According to [24], the Jordan normal form of the Hamiltonian has a Jordan cell of size 5 and a Jordan cell of size 1.
Although this case lies outside the range of validity of the procedure we explained before, as the two Jordan blocks are indistinguishable, we will still apply it to check whether we can retrieve any useful information about the Jordan structure of the Hamiltonian.
For the case of vanishing total momentum, the Hamiltonian can be written in the following form. Sadly, this Hamiltonian is very difficult to diagonalise. Consequently, we decided to set some of its strictly lower-diagonal elements to zero to simplify this example, as this makes the expressions for the eigenvalues and eigenvectors much simpler. In principle, we are allowed to do that because setting them to zero by hand modifies neither the position of the exceptional point nor the matrix at the exceptional point. In particular, we will focus on the following two matrices, but similar results hold for other choices.
To simplify our discussion, we will denote by û i the vector (of appropriate dimension) whose i-th component is equal to 1 and whose remaining components are equal to zero.
In addition, if we construct the following five vectors, we find that they all become proportional to û 2 = (0, 1, 0, 0, 0, 0) in the limit of vanishing ǫ. If we consider now the following four vectors, we find that they do not approach the same vector when ǫ approaches 0. Instead, they become linear combinations of the vectors û 3 and û 4 . However, theorem 3.1 tells us that it is impossible to obtain two generalised eigenvectors at this step, because we naïvely have just one true eigenvector. We can check that indeed both vectors are generalised eigenvectors of rank 2 (up to a linear combination with û 2 ) with respect to our true eigenvector. This is only possible if they differ by a true eigenvector.
After some algebra, we find that the vector (0, . The structure of the limits can be summarised by the following diagram: at the leftmost part we have placed the true eigenvector and, each time we move one step to the right, we get the set of vectors obtained by applying once our "limit of linear combinations of eigenvectors that coalesce" recipe. Although we might think that this indicates the existence of a Jordan block of size 5 and a Jordan block of size 1, we should not be so eager to jump to conclusions. In order to check it, we can apply the definition of generalised eigenvector (3.3). After some tedious algebra, one finds this identification to be correct. This result is in agreement with the one presented in [24]. We will later show a better method to compute the generalised eigenvectors once we know the eigenvector at the bottom of the Jordan chain.
Let us now examine the other matrix. The six eigenvectors of H (2) take the form v 1 = (1, 0, 0, 0, 0, 0) , and we can check that all six vectors become û 1 when we compute the limit ǫ → 0. However, if we construct the following five vectors, we find that they become proportional to a linear combination of the vectors û 2 and û 3 − û 4 . We can check that indeed both vectors are generalised eigenvectors of rank 2 with respect to our true eigenvector.
If we continue the process, we find the results summarised in the following diagram: the Hamiltonian H (2) gives us a structure very similar to the one we found for the Hamiltonian H (1) , with the exception that the "fake generalised eigenvector of rank 3" now appears as a "fake generalised eigenvector of rank 2".
In order to see that this is not a special feature of the case we are considering, we can repeat the same construction for a spin chain of length 7 with two excitations of one flavour and one of the other (L = 7, M = 3, K = 1). We will not reproduce the whole process here, but the final result can be summarised in the following diagram. From this diagram we can postulate that this case has three Jordan blocks, of sizes 9, 5 and 1 respectively.
This result again matches exactly the one collected in Table 1 in [24] and the computation from [25].

The defective limit of a diagonalisable matrix: Degenerate Jordan blocks
In the previous examples we have seen that, although we are outside the range of applicability of the method we proposed in section 4.2, it still seems to capture most of the structure of the Jordan blocks. In this section, we will explore which parts of the method remain unaltered and which parts change when several Jordan blocks share the same eigenvalue.
Let us consider again a matrix M (ǫ) that is diagonalisable for generic values of ǫ but becomes defective at ǫ = 0. Here, we will assume that two or more Jordan cells have the same eigenvalue. The core reason behind this chain mixing is that there exist linear combinations of eigenvectors of M (ǫ), with coefficients that diverge at ǫ = 0, that fulfil the eigenvalue equation at ǫ = 0. That is, they are true eigenvectors of M (0), but they cannot be extended to true eigenvectors of M (ǫ).
Thus, at first sight we cannot distinguish whether lim ǫ→0 w (3) = û 5 is a generalised eigenvector of rank 4, a generalised eigenvector of rank 2, or even a true eigenvector.
Although the situation looks somewhat hopeless, these examples contain more information than meets the eye. In fact, we want to make two observations: the method to compute generalised eigenvectors is still complete (meaning that we find all the generalised eigenvectors), and Jordan chains are not broken (meaning that if we find a generalised eigenvector of rank n at a given step, its associated generalised eigenvector of rank n − 1 has to appear in the previous step). The remainder of this section is devoted to analysing these two observations and their consequences.
First of all, in spite of being outside its range of applicability, the method to compute the generalised eigenvectors we proposed in section 4.2 is complete also in this case, meaning that it gives us the full set of generalised eigenvectors of M (0). The proof is exactly the same as in the situation of distinguishable Jordan blocks: the method gives us non-zero, non-divergent orthogonal vectors, and we can apply it as many times as there are eigenvectors of M (ǫ). In addition, it is still true that all the vectors we obtain by naïvely computing the limit ǫ → 0 of the eigenvectors of M (ǫ) are true eigenvectors of M (0).
Let us now analyse the second observation. If a "misplaced" eigenvector appears at the n-th step of our computation, we may find a generalised eigenvector of rank m at the (n + m − 1)-th step. The proof is relatively similar to the one for the case of distinguishable Jordan blocks. Given that there exists a linear combination of vectors fulfilling the condition above, it is easy to see that a linear combination of n + m − 1 vectors will fulfil the analogous condition as long as we choose β i = γ α i / ∏ j=1..m (λ i − λ n+j−1 ) for 1 ≤ i < n, where λ j is the eigenvalue associated with v j and γ is a constant. This means that such linear combinations contain information about generalised eigenvectors of higher rank, which can be extracted by applying the same procedure we used for the case of distinguishable Jordan blocks.
With this information, we can take another look at the issue of vectors that seemingly contradict theorem 3.1, as we obtain fewer vectors at the n-th step than at the first step. Thus, if the dimension of the vector space we obtain at a given step is larger than the dimension of the vector space we obtained at the previous step, that is an unambiguous sign that we have found an eigenvector in disguise. In fact, if we consider the dimension of each of the vector spaces we get at each step of the process, the largest of those dimensions is a lower bound on the number of Jordan blocks associated with the eigenvalue we are considering.
We want to end this section by showing a reliable but cumbersome procedure to separate the mixed Jordan chains. We have discussed above that, despite the chain-mixing issue, our method is able to identify some true eigenvectors of M (0) beyond the shadow of a doubt. Thus, if we are able to completely reconstruct a Jordan chain from its generalised eigenvector of rank 1, we can separate the mixed chains. This reconstruction can be done if we combine the corollary 3. This issue of chain mixing was also pointed out in [25] under the name of unexpected shortening, although the authors do not provide a method to deal with it.

Computing generalised eigenvectors for the eclectic spin chain
In this section, we plan to find the generalised eigenvectors of the eclectic spin chain with a general number of excitations of type 2 and only one excitation of type 3, i.e., K = 1 and M > 1. Recall that the M = K = 1 case is diagonalisable and therefore not particularly interesting to us. Our first goal will be to construct the eigenvectors of the twisted Hamiltonian (2.3) for general values of the twist parameters q i by means of a modified version of the Nested Coordinate Bethe Ansatz (NCBA). Using these results, we will compute their behaviour at large values of the twist parameters and apply the algorithm presented in the previous sections to extract all the eigenvectors and generalised eigenvectors associated with the K = 1 sector of the Hamiltonian.

The Coordinate Bethe Ansatz for Twisted Spin Chains
Our first step will be to construct the eigenvectors of the twisted Hamiltonian (2.3) for the case of arbitrary length L and excitations with two different flavours. The most straightforward method to achieve this is a slightly modified version of the NCBA. The original method is described in section II.O of [28], as well as in section 2 of [29]. 4 We should first clarify that the twisted Hamiltonian (2.3) does not commute with the regular permutation operator. Nevertheless, it commutes with the shift operator U that we defined in previous sections, meaning that we can apply Bloch's theorem despite the twists. Although this allows us to assume a plane-wave ansatz, we still have to include additional factors proportional to the twist parameters q i . In particular, we will consider the following ansatz for the case of two excitations, where we define the pseudo-vacuum as the tensor product of L states of type 1, i.e. |0 = ⊗ L n=1 |1 , and S n,+ k is the operator that transforms the state of type 1 at the k-th factor of the tensor product into a state of type n. To simplify our notation, we will drop the dependence on the momenta and always understand momenta p 1 and p 2 unless otherwise stated.
Substituting this ansatz into the Schrödinger equation H (q1,q2,q3) |ψ 23 = E|ψ 23 gives us the following relations E = L − 4 + 2 cos(p 1 ) + 2 cos(p 2 ) , (6.2) This S-matrix accounts only for the mixed-flavour entries of the full two-body S-matrix. If we want the full S-matrix, we have to repeat the above computations with the wavefunctions that give us the same dispersion relation and S-matrix as a regular su(2) spin chain. Thus, the complete S-matrix takes the form given below. Finally, as we are working with a closed spin chain, we have to impose that our wavefunction be periodic. If we write the wavefunction as |ψ ij = n1<n2 |ψ ij (n 1 , n 2 ) , the periodicity condition can be expressed as |ψ ij (n 1 , n 2 ) = |ψ ij (n 2 , n 1 + L) . Substituting our ansatz for the case of two excitations, this equation takes the form below. These equations are usually referred to as matrix Bethe equations in the literature.
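The plane-wave logic behind this ansatz can be made concrete on a small chain. The sketch below is our own illustration and uses the untwisted su(2) chain built from permutation operators (not the twisted Hamiltonian (2.3), which we do not reproduce); for a single excitation, Bloch's theorem gives an exact eigenvector with a dispersion matching the pattern of (6.2).

```python
import numpy as np

L = 6  # chain length (illustrative)

# Untwisted su(2) chain built from permutation operators (our own
# illustration; the twisted Hamiltonian (2.3) is not reproduced here).
def perm(L, i, j):
    """Permutation operator P_{i,j} acting on (C^2)^{tensor L}."""
    P = np.zeros((2 ** L, 2 ** L))
    for state in range(2 ** L):
        bits = [(state >> k) & 1 for k in range(L)]
        bits[i], bits[j] = bits[j], bits[i]
        P[sum(b << k for k, b in enumerate(bits)), state] = 1.0
    return P

H = sum(perm(L, l, (l + 1) % L) for l in range(L))

# One-excitation plane wave: |p> = sum_n e^{ipn} S^+_n |0>, with
# p = 2*pi*k/L fixed by periodicity (Bloch's theorem).
k = 1
p = 2 * np.pi * k / L
psi = np.zeros(2 ** L, dtype=complex)
for n in range(L):
    psi[1 << n] = np.exp(1j * p * n)   # one flipped spin at site n

E = L - 2 + 2 * np.cos(p)              # one-excitation analogue of (6.2)
print(np.allclose(H @ psi, E * psi))   # True: |p> is an exact eigenvector
```

With two excitations the plane waves scatter, which is precisely what the S-matrix and the matrix Bethe equations above encode.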
Thanks to the integrability of the theory, we can extend the matrix Bethe equations to any number of excitations, where f k is the flavour of the k-th excitation. This way we get a dressing factor q −L 2 if f k = 3 and a dressing factor q L 3 if f k = 2. The presence of these additional factors prevents us from using the regular NCBA construction, but slightly modifying the prescription is enough to make room for them.
As already mentioned, we will be focussing only on the case of K = 1. To do so, we consider the following ansatz for the wavefunction with a general number of excitations. In this notation, the coefficients of (6.1) are ψ 2 (Id.) = A 23 , ψ 1 (Id.) = A 32 , ψ 1 (τ ) = Ã 32 and ψ 2 (τ ) = Ã 23 , where τ is the permutation of 1 and 2. To solve the matrix Bethe equations acting on this wavefunction, we will closely follow the recipe from section II.O of [28], with the appropriate modifications due to the twist factors q i .
As we will be dealing with a wavefunction where nearly all the excitations have the same flavour, it is simpler to solve the matrix Bethe equation for an "undressed" S-matrix whose upper-left entry is equal to 1. In addition, we will change from the momentum variable to a rapidity variable defined by the map e ipi = ui ui+1 . 5 We do so because this variable will prove extremely useful in solving the matrix Bethe equation. In terms of these variables, the undressed S-matrix reads as follows. Let us now consider what happens when we disentangle the product of S-matrices "from inside to outside", focusing on the coefficients of the form ψ k (Id.). As we will be considering these coefficients nearly exclusively, we will drop the argument (Id.) in all the equations that follow unless it is necessary.
Starting with the innermost operator action, i.e.
and noticing that no other factor of the string of S-matrices will affect ψ k−1 again, the eigenvector equation becomes the following algebraic equation for ψ k−1 . In addition, we will define the function ψ accordingly. A similar reasoning holds for the remaining coefficients ψ l , giving us the equations where we have to substitute the factor q L 3 by q −L 2 . This happens because f k = 3 and f l = 2 for l ≠ k in the part of the wavefunction associated with the coefficient ψ k .
Regarding the other coefficients of the wavefunction, ψ k (σ) for σ different from the identity, we can obtain them either by appropriately applying the S-matrix to ψ k (Id.), as shown for the case of M = 2 in equation (6.5), or by using the periodicity condition.
If we use (6.12) to eliminate the coefficients ψ .
To find the eigenvalue λ k , we have to realise that the left-hand side of the equation depends only on the difference k − l, while the right-hand side has some terms that depend only on k. The ratio can be rewritten as 6 (6.14) Thus, the only way the right-hand side can depend only on the difference k − l is if x is a constant. If we solve for the eigenvalue λ k , we get the expression below. At this point, our argument slightly diverges from the one laid down in [28]. The original NCBA argument uses equation (6.14) to write a general ψ l in terms of ψ 1 and substitutes that expression into (6.12). We cannot do the same due to the twist factors q i , which make the equation for ψ k different from the rest. Thus, we are forced to relate a general ψ l to ψ k instead. This gives us (6.16). In addition, the functions ψ .
The final step of the computation is to "close" the system of equations by identifying the excitation M + 1 with the excitation 1. This amounts to equating the corresponding functions ψ. This identification gives us a constraint on the eigenvalues λ k , which translates into a constraint for x. The equation that the constant x has to satisfy is the auxiliary Bethe equation of our model.
Once we have solved the matrix Bethe equations for the undressed S-matrix, solving the matrix Bethe equations for the original S-matrix is immediate, as they reduce to the algebraic equation below, where λ k are the eigenvalues we have computed above. Substituting them and replacing the momenta by rapidities, the Bethe equations take the form below. The fact that both the Bethe equations and the auxiliary Bethe equation we obtained are exactly the same as the ones appearing in [24], once we identify our x with their v + 1 , works as a sanity check. 6 In terms of the momentum variables, the previous equation takes a more cumbersome form, which makes the reasoning in this paragraph far from transparent.

The coalescence of the Bethe states and the structure of generalised eigenvectors
After computing the eigenvectors for finite values of the twists, we can now use our algorithm to find the generalised eigenvectors of the Hamiltonian (2.3) at the exceptional point of large twists. As we have done in previous sections, we will substitute the twist parameters q i by ξ i /ǫ and compute the limit ǫ → 0. We will assume that chain mixing does not affect the naïve identification of Jordan chains, although we will prove it only for some specific cases. Nevertheless, we will compare our results with those from [24] and [25] to show that they agree.
We will study in detail the particular cases of M = 2 and M = 3 before proceeding to analyse the case of arbitrary M .

Two excitations
If we want to find how each of the terms appearing in the wavefunction (6.1) behaves in the strongly twisted limit, we need to find how the momenta p 1 and p 2 and the coefficients A ij and Ã ij behave in such a limit.
In order to find how the momenta scale with ǫ, we have to study the behaviour of the Bethe equations (6.20) and the auxiliary Bethe equation (6.18) in the limit of large twist. This computation was carried out previously in [24], so we will simply borrow their results. In the limit ǫ → 0, the two physical rapidities u 1 , u 2 and the auxiliary rapidity x scale as follows, where α = (L − 3)/(L − 1) and γ = 2L − 6, while the Bethe equations become the equations below, where ξ = ξ 1 ξ 2 ξ 3 . At the level of momenta, the behaviour of the rapidities translates into the scalings below. This means that the plane-wave factor e i(p1n1+p2n2) behaves as shown, depending only on the relative position of the excitations. This is not unexpected, as the state is an eigenvector of the shift operator U .
The next step is to find how the coefficients A ij scale with ǫ. As usual, we are free to normalise the wavefunction however best suits us, so we will choose A 23 = 1 for simplicity. For the case of M = 2, equation (6.14) takes the form below. For the coefficients with a tilde, we can use equation (6.3) written in terms of rapidities instead of momenta. Substituting the behaviour of the rapidities when ǫ approaches zero, we find the scalings below. Putting everything together, each term appearing in the wavefunction (6.1) behaves as shown, where we should keep in mind that n 2 − n 1 ∈ {1, · · · , L − 1}, as we need to take into account the restriction 1 ≤ n 1 < n 2 ≤ L from the wavefunction. With this restriction in mind, we can see that the term associated with the coefficient A 23 always dominates over the term associated with Ã 23 (as we can check that (1 − α)(n 2 − n 1 ) < (1 + α)(n 2 − n 1 ) − α for all the values we are interested in), and the term associated with the coefficient Ã 32 always dominates over the term associated with A 32 (by a similar inequality valid for all the values we are interested in). Thus, from now on, we will work with the wavefunction (6.1) as if the contributions associated with the coefficients Ã 23 and A 32 were not there.
At this point, we have enough information about the wavefunction |ψ 23 to apply our algorithm. We start by looking at how |ψ 23 behaves when ǫ approaches zero |0 .
Using that α fulfils the relation 3 − α − (1 − α)(L − 1) = 1 − α, we see that the terms that dominate take the form Using the Bethe equation (−ξ 2 u + ) L = ξ(u − − u + ), we can combine the two terms into one, giving us where the position of the operators S i,+ n should be understood modulo L. We can see that this is indeed the locked state studied in [24].
Using the results from the previous sections, we can claim that the states |ψ (1) (u − /u + ) are eigenstates of the Hamiltonian (2.3) at its exceptional point, that is, they are eigenstates of the Hamiltonian Ĥ ξ1,ξ2,ξ3 . Notice that these states do not depend on the rapidities u + and u − separately, but only on the combination u − /u + , which is equivalent to saying that they depend only on the total momentum p 1 + p 2 . One might object that this is not true due to the factor 1/(ξ 2 u + ), but that is nothing more than a normalisation factor that we can eliminate.
The next step will be to compute the vectors w (1) ij defined in equation (4.14). As we commented in section 4.3, we take two vectors that coalesce to the same vector at the exceptional point, find a linear combination that becomes zero at the exceptional point, and then compute the limit of this linear combination divided by its norm (or, at least, by ǫ to the correct power such that it gives us a finite non-vanishing limit).
As we have just shown, two vectors |ψ 23 (p 1 , p 2 ) and |ψ 23 (p ′ 1 , p ′ 2 ) coalesce to the same eigenvector |ψ (1) (u − /u + ) if their total momenta coincide, i.e., if p 1 + p 2 = p ′ 1 + p ′ 2 . In addition, by writing them explicitly we can check the relation below. Indeed, if we study how this linear combination behaves when ǫ approaches zero, we find that the terms with n 2 = n 1 + 1 of e ip1 |ψ 23 (p 1 , p 2 ) cancel against those from e ip ′ 1 |ψ 23 (p ′ 1 , p ′ 2 ) . Therefore, the leading contribution to this linear combination takes the form below. Using the explicit value of α, we can check the identity below. If we now use the condition that the total momentum of both states is the same, together with the Bethe equations, we can rewrite this expression in a more compact form. Hence, using the results from the previous sections, we can claim that the states |ψ (2) (u + /u − ) are generalised eigenstates of rank 2 of the Hamiltonian (2.3) at its exceptional point, that is, generalised eigenstates of rank 2 of the Hamiltonian Ĥ ξ1,ξ2,ξ3 . Similarly to |ψ (1) (u − /u + ) , these states do not depend on the two momenta separately, but only through the ratio u − /u + . We can repeat the process again with the combination below, which, after some algebra, we find to behave as shown. If we continue this process, we find that the generalised eigenvector of rank k of Ĥ ξ1,ξ2,ξ3 with M = 2 and K = 1 takes the form below. As we see, this expression is only meaningful for k ∈ {1, · · · , L − 1}. In addition, we find exactly one vector for a given value of the total momentum at each step of the process. Thus, we can claim with some certainty that the sector with M = 2 and K = 1 has Jordan chains of size L − 1 for every allowed value of the total momentum. This result is in agreement with the numerical results from [24] and the combinatorial results from [25].
We want to stress that this is only a claim at this point, as we cannot be sure that we are not being misled by chain mixing. To show that this is not the case, we will follow the procedure we described at the end of section 5.2. Consider the adjoint of the strongly twisted Hamiltonian, built from the operators (P l,l+1 ) † , (6.33). According to corollary 3.2.1, the generalised eigenvector of rank 1 we have computed, |ψ (1) , behaves like a generalised eigenvector of maximal rank of Ĥ † ξ1,ξ2,ξ3 . In fact, we can check that 7 where ρ and ρ ′ are two constants that are not important for our discussion. Notice that the generalised eigenvector of highest rank has the form of a locked state, but with the 2's to the right of the 3 instead of to its left. These states are called anti-locked states, and they were shown to be generalised eigenvectors of highest rank in [24].
As we now have the generalised eigenvector of highest rank of the Jordan chain, we can apply the definition of generalised eigenvector to compute the remaining generalised eigenvectors of the chain. The issue of chain mixing in this setting was also studied in [25]. The authors conjectured that chain mixing does not affect these results for K = 1, but they were not able to provide a proof of this statement.
Nevertheless, they checked their conjecture numerically up to L = 30 and M = 6.
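The chain-reconstruction step just described, descending from the highest-rank vector by repeated application of the shifted Hamiltonian, can be sketched generically. The following is our own illustration on a hypothetical 3 × 3 Jordan block, not on Ĥ ξ1,ξ2,ξ3 itself:

```python
import numpy as np

def jordan_chain_from_top(M0, lam, v_top, tol=1e-10):
    """Given the generalised eigenvector of highest rank, recover the whole
    Jordan chain by repeatedly applying (M0 - lam*I)."""
    A = M0 - lam * np.eye(M0.shape[0])
    chain = [v_top]
    while np.linalg.norm(A @ chain[-1]) > tol:
        chain.append(A @ chain[-1])
    return chain[::-1]  # ordered from the true eigenvector upwards

# Hypothetical 3x3 single Jordan block with eigenvalue 1
J = np.array([[1.0, 1, 0],
              [0, 1, 1],
              [0, 0, 1]])
chain = jordan_chain_from_top(J, 1.0, np.array([0.0, 0, 1]))
print(len(chain))   # 3: true eigenvector, rank-2 and rank-3 vectors
```

Each application of (M0 − λ I) lowers the rank by one, so the chain terminates exactly at the true eigenvector at the bottom.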

Three excitations
Let us move now to the case of M = 3. The analysis follows the same structure as in the M = 2 case, but it presents some new peculiarities that are worth mentioning before tackling the sector with a general number of excitations.

7 Notice that not all the left generalised eigenvectors are orthogonal to the left generalised eigenvector of rank 1. This is because corollary 3.2.1 only requires orthogonality with respect to the eigenvector we are starting with, not with respect to the one associated with the generalised eigenvector of highest rank we are computing.
As in the previous case, we start by studying the behaviour of the Bethe equations (6.20) and the auxiliary Bethe equation (6.18) in the limit of large twist. Although this computation was carried out previously in [24], we need the next-to-leading-order contributions for u 1 and u 2 , which were not computed there. In the limit ǫ → 0, the three physical rapidities u 1 , u 2 , u 3 and the auxiliary rapidity x scale as follows, where ξ = ξ 1 ξ 2 ξ 3 . At the level of momenta, the behaviour of the rapidities translates into the scalings below. Substituting this result into the plane-wave factor exp[i(p 1 n 1 + p 2 n 2 + p 3 n 3 )], we find the behaviour shown below. Let us move now to the coefficients ψ k (σ). First, we will fix ψ 3 (Id.) = 1 and use equation (6.14) to fix ψ 1 (Id.) and ψ 2 (Id.). In addition, we can easily construct the coefficients ψ i ((jk)), with i, j, k different integers. As the permutation (jk) affects two excitations with the label 2, these excitations scatter with the usual su(2) S-matrix, and we have ψ i ((jk)) = S 22 (p j , p k ) ψ i (Id.) = − (1 − 2e ip j + e i(p j +p k ) )/(1 − 2e ip k + e i(p j +p k ) ) ψ i (Id.). This is enough to fix a total of six coefficients of the wavefunction.
To find the remaining coefficients, we can use the periodicity condition to relate a term of one given form to a second and, if we apply it again, to a third. As we had a total of six coefficients, and we can obtain two other coefficients from a given one through these relations, this is enough to completely fix all the coefficients of the wavefunction.
Putting everything together, we can see that the coefficients associated with ψ 3 (Id.) dominate over the other contributions. Consider, for example, the part of the wavefunction associated with the raising operators S 2+ . We should be careful, however, because this analysis overlooks one detail about how the momenta behave in the ǫ → 0 limit. All in all, we find a situation analogous to the M = 2 case: the contributions associated with ψ 3 (Id.) and ψ 3 ((12)) dominate over the other four. Thus, from now on, we will work with the wavefunction as if the contributions associated with the coefficient ψ 3 (Id.) were the only ones present.
At this point, we have enough information about the wavefunction (6.7) to apply our algorithm. We start by looking at how |ψ⟩ behaves when ǫ approaches zero and identifying the terms that dominate in this expansion. We can combine the three dominating terms into a single expression, given in (6.42), where the position of the operators S i,+ n should be understood modulo L. We can see that this is indeed the locked state studied in [24]. Notice that this eigenvector depends on the individual momenta only through the total momentum p 1 + p 2 + p 3 .
The process of computing the generalised eigenvector of rank 2 is similar to the case of M = 2. As two states |ψ(p 1 , p 2 , p 3 )⟩ and |ψ(p ′ 1 , p ′ 2 , p ′ 3 )⟩ whose momenta fulfil the relation p 1 + p 2 + p 3 = p ′ 1 + p ′ 2 + p ′ 3 will coalesce to the same eigenvector of the defective matrix, the information about the generalised eigenvector of rank 2 should be codified in a linear combination of such states weighted by factors of the form e^{i(2p 1 + p 2 )}. After some algebra, we can check that this is indeed the case, cf. (6.43). The differences with the M = 2 case start to appear when we move to the generalised eigenvectors of rank 3. After some algebra, we find that the limit of the appropriate linear combination of eigenvectors of the diagonalisable Hamiltonian can be written as a linear combination of two vectors that depend only on the total momentum, where the coefficients a 1 and a 2 depend on the individual momenta, not only on the total momentum.
As we have seen in section 5.2, this is an indication that we have found another true eigenvector of the defective Hamiltonian (up to linear combinations with the other generalised eigenvectors we have found).
We find a similar or even more complex situation as we explore linear combinations that contain information about generalised eigenvectors of higher rank. Thus, we would like a systematic method to know the number of independent vectors we will find at a given step. The answer to this question is actually not that difficult. If we analyse again the dependence on ǫ of each term in |ψ⟩, we see that what actually controls which vectors appear at a given level is the momentum factor, meaning that the only relevant contribution to how the wavefunction behaves as ǫ goes to zero depends on the separation of the excitations. Consequently, the number of independent vectors that our algorithm provides at a given step has to be equal to the number of solutions to the equation (n 3 − n 1 ) + (n 3 − n 2 ) = n for the appropriate value of n. Notice that, as our positions n i take positive integer values, what we have to solve is a Diophantine equation. However, this is not enough, as we have to take into account that the wavefunction (6.7) is written with the restriction 1 ≤ n 1 < n 2 < n 3 ≤ L. This means that we have to supplement the Diophantine equation with the restrictions 1 ≤ (n 3 − n 2 ) < (n 3 − n 1 ) ≤ L − 1.
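This counting is easy to reproduce by brute force. The following sketch (illustrative, not the paper's code) counts the separation pairs (n 3 − n 1 , n 3 − n 2 ) compatible with the restrictions and with a given separation sum n:

```python
def count_solutions(L, n):
    """Number of separation pairs (d1, d2) = (n3 - n1, n3 - n2) obeying
    1 <= d2 < d1 <= L - 1 and d1 + d2 == n."""
    return sum(1 for d1 in range(2, L) for d2 in range(1, d1) if d1 + d2 == n)
```

For L = 6 the counts for n = 3, ..., 9 come out as 1, 1, 2, 2, 2, 1, 1.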
Both the Diophantine equation and the restrictions simplify significantly if we perform the change of coordinates

(n 3 − n 2 ) = x 2 + 1 ,  (n 3 − n 1 ) = x 1 + x 2 + 2 . (6.46)

Table 1: Solutions to the system of Diophantine equations (6.48) for the cases of L = 6 and L = 7. In both cases we consider M = 3 and K = 1.
We can further simplify these conditions by introducing an additional variable x 0 ≥ 0 that turns the last inequality into an equality; the resulting set of equations we have to solve is the system (6.48). After all these rewritings, our equations take the standard form of a system of linear Diophantine equations.
As examples, we have collected the solutions for the cases of L = 6 and L = 7 in Table 1. For L = 7, five different values of ∆ admit more than one solution, and we get three solutions just once. Thus, we can conjecture that the Jordan chains for this case are of size one, five and nine. We can see that our results match with the ones in [24].
We will now see how to generalise these results to any length L. For that, we should notice that every solution for length L − 1 becomes a solution for length L after shifting x 0 → x 0 + 1. Therefore, we can find the number of solutions recursively by adding the solutions associated with x 0 = 0. In order to know how many solutions we have to add and how we have to add them, we have to solve the system of Diophantine equations (6.48) with x 0 = 0. In contrast with our previous system of equations, this one has exactly one solution for a given value of ∆, characterised by x 2 = ∆ + 3 − L and x 1 = 2L − 6 − ∆. However, not all these solutions are admissible due to the restrictions x i ≥ 0, which force ∆ to lie in the interval L − 3 ≤ ∆ ≤ 2L − 6. With a little bit of effort, we can prove by induction that there are a total of ⌊∆/2⌋ + 1 solutions if 0 ≤ ∆ ≤ L − 3, and an analogous expression for the remaining values of ∆, where ⌊x⌋ is the floor function, i.e., the function that gives us the largest integer less than or equal to x. We will prove below that this result can be written more compactly in terms of a special function.
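The closed-form count can be checked against a direct enumeration. The sketch below assumes that the system (6.48) reads x 0 + x 1 + x 2 = L − 3 together with x 1 + 2x 2 = ∆ for non-negative x i ; this reading is our reconstruction, but it is consistent with the change of coordinates (6.46) and with the x 0 = 0 solution quoted above:

```python
def n_solutions(L, delta):
    """Brute-force count of non-negative integers (x0, x1, x2) with
    x0 + x1 + x2 == L - 3 and x1 + 2 * x2 == delta (our reading of (6.48))."""
    return sum(1 for x1 in range(L - 2) for x2 in range(L - 2)
               if x1 + 2 * x2 == delta and x1 + x2 <= L - 3)

# Check the closed form floor(delta / 2) + 1 on the range 0 <= delta <= L - 3:
assert all(n_solutions(10, d) == d // 2 + 1 for d in range(0, 8))
```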
Once we have this result, we can easily compute the size of the Jordan chains for any length L.
The longest Jordan chain will correspond to how many times the system of equations has one or more solutions. We can see that there is a total of 2L − 6 + 1 = 2L − 5 such values of ∆, which is consistent with the computations performed in [24] and [25].
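The peeling logic just described, in which the k-th longest chain spans the values of ∆ admitting at least k solutions, can be sketched as follows (this presentation assumes no chain mixing, as discussed above):

```python
def chain_sizes(counts):
    """Peel Jordan-chain sizes from the list of solution counts per Delta:
    the k-th chain covers the values of Delta with at least k solutions."""
    sizes, k = [], 1
    while any(c >= k for c in counts):
        sizes.append(sum(1 for c in counts if c >= k))
        k += 1
    return sizes
```

For L = 6, the counts 1, 1, 2, 2, 2, 1, 1 over the 2L − 5 = 7 values of ∆ give chains of sizes 7 and 3.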

General values of M
A similar argument can be laid down for any value of M : as we approach ǫ → 0, the momentum contribution appearing in the wavefunction behaves in a way that depends only on the separations between the excitations. As an example, we have collected the solutions for one particular case in Table 2.
Following a similar logic as in the M = 3 case, we can see that the Jordan chains in this example have lengths 13, 9, 7, 5 and 1. This matches the results from Table 2 in [24]. The counting for general L and M can be organised into a generating function F (L, M, x); some examples of its coefficients are collected in Table 3. In terms of this generating function, the recurrence relation we have found has to be supplemented with the initial condition F (L, 2, x) = Σ_{j=0}^{L−2} x^j. This condition comes from our previous result for M = 2, where we found a single Jordan cell of size L − 1 for each allowed value of the total momentum. The solution to this recurrence relation is a known family of functions called Gaussian polynomials or q-deformed binomial coefficients. In fact, we can check that the rows of Table 3 reproduce the coefficients of these polynomials. This generating function was computed independently, without using integrability, in [25]. The fact that we used completely different methods but obtained matching results strengthens their validity.
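A small script makes the identification concrete. Assuming the solution of the recurrence is the Gaussian polynomial F (L, M, x) = ( L−1 M−1 ) x — our reading, consistent with the M = 2 initial condition, which is the q-binomial with r = 1 — the chain lengths quoted above, 13, 9, 7, 5 and 1, are reproduced, for instance, by the case L = 8, M = 4:

```python
def q_binom(m, r):
    """Coefficient list (lowest power first) of the Gaussian polynomial
    (m r)_q, built from the q-Pascal identity
    (m r)_q = (m-1 r-1)_q + q^r (m-1 r)_q."""
    if r == 0 or r == m:
        return [1]
    out = [0] * ((m - r) * r + 1)       # the degree of (m r)_q is (m - r) r
    for i, c in enumerate(q_binom(m - 1, r - 1)):
        out[i] += c
    for i, c in enumerate(q_binom(m - 1, r)):
        out[i + r] += c
    return out

def chain_lengths(coeffs):
    """The k-th Jordan chain spans the powers of q with coefficient >= k."""
    lengths, k = [], 1
    while any(c >= k for c in coeffs):
        lengths.append(sum(1 for c in coeffs if c >= k))
        k += 1
    return lengths

# M = 2 initial condition: (L-1 1)_q = 1 + q + ... + q^{L-2}.
assert q_binom(7, 1) == [1] * 7
# L = 8, M = 4: the coefficients of (7 3)_q yield the quoted chain lengths.
assert chain_lengths(q_binom(7, 3)) == [13, 9, 7, 5, 1]
```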

Properties of Gaussian polynomials and their consequences
Here we plan to analyse some properties of the Gaussian polynomials and infer some features of the Jordan cells of the eclectic spin chain from them.
First, we should give a proper definition of the Gaussian polynomials. The standard definition (which we restate here) is

( m r ) q = [(1 − q^m)(1 − q^{m−1}) · · · (1 − q^{m−r+1})] / [(1 − q)(1 − q^2) · · · (1 − q^r)] .

Despite its appearance, the numerator is always divisible by the denominator, so these are indeed polynomials.
From this definition, we can check that Gaussian polynomials fulfil Pascal-like identities. We can combine this result with the first Pascal identity to show that

Coeff( ( m r ) q ; q^0 ) = 1 , (6.59)

where Coeff(P (q); q^n ) means the coefficient accompanying q^n in the polynomial P (q). In addition, we can prove that the highest non-vanishing coefficient has to be of the form

Coeff( ( m r ) q ; q^{(m−r)r} ) = 1 .

We can again follow the procedure explained in section 4.2 and show that this Jordan chain is not spoiled by chain mixing. This is quicker to prove if we set ξ 1 = ξ 2 = 0, as Ĥ† 0,0,ξ 3 can only move the 2's in the state to the left until they find a 3. As we have a total of L − M 1's, we can apply Ĥ† 0,0,ξ 3 a total of (M − 1)(L − M ) times, which implies that the Jordan chain has size (L − M )(M − 1) + 1. This proves the conjecture regarding chain mixing from [25] for the longest Jordan chain.
Although it takes a bit more effort, we can prove some further properties. Another interesting property we can prove is that

Coeff( ( m r ) q ; q^s ) = Coeff( ( m r ) q ; q^{(m−r)r−s} ) , (6.63)

that is, the Gaussian polynomials are palindromic. This means that the sizes of all Jordan cells have to differ by even numbers.
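The coefficient properties (6.59) and (6.63), together with the statement about the highest non-vanishing coefficient, are straightforward to verify numerically. A sketch using the q-Pascal recurrence (a standard identity for Gaussian polynomials):

```python
from math import comb  # ordinary binomial, for the q -> 1 check

def q_binom(m, r):
    """Coefficient list of the Gaussian polynomial (m r)_q
    via the q-Pascal identity (m r)_q = (m-1 r-1)_q + q^r (m-1 r)_q."""
    if r == 0 or r == m:
        return [1]
    out = [0] * ((m - r) * r + 1)
    for i, c in enumerate(q_binom(m - 1, r - 1)):
        out[i] += c
    for i, c in enumerate(q_binom(m - 1, r)):
        out[i + r] += c
    return out

# Verify (6.59), the highest-coefficient statement, palindromicity (6.63),
# and that q -> 1 recovers the ordinary binomial coefficient:
for m in range(2, 9):
    for r in range(1, m):
        p = q_binom(m, r)
        assert p[0] == 1 and p[-1] == 1     # Coeff at q^0 and q^{(m-r)r}
        assert len(p) == (m - r) * r + 1    # top degree is (m - r) r
        assert p == p[::-1]                 # palindromic
        assert sum(p) == comb(m, r)
```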

Conclusions
In this article, we have presented an algorithm to construct the generalised eigenvectors associated with a defective Hamiltonian by studying how a diagonalisable perturbation of that Hamiltonian approaches its exceptional point, that is, the point where it becomes non-diagonalisable. The idea behind this method is that we need to consider the limit of a linear combination of at least n eigenvectors of the diagonalisable matrix to find the generalised eigenvector of rank n of the defective matrix.
We have proven that this method is complete for the case when all the Jordan blocks of the Hamiltonian have different eigenvalues. By this statement, we mean that we are able to find all the generalised eigenvectors of the Hamiltonian, to which eigenvalue they are associated, and what their rank is. In addition, we have shown that this method can be applied to the case where there are two or more Jordan blocks that share the same eigenvalue. Although we have proven that the method also works in this case, in the sense that it retrieves all the generalised eigenvectors and tells us to which eigenvalue they are associated, there are many occasions in which the method is not able to correctly identify the rank of the eigenvector. This is due to the chain mixing effect, as some generalised eigenvectors may appear at the end of a different Jordan chain, so we may misguidedly consider Jordan chains that are longer or shorter than they really are (even up to the point where two Jordan chains appear as just one).
Thankfully, our method is capable of identifying correctly some of the generalised eigenvectors of rank 1. We have shown that this is enough information to reconstruct the full Jordan chain, allowing us to disentangle the chain mixing.
We applied this method to the case of the eclectic spin chain introduced in [23] to describe the one-loop dilatation operator of the fishnet conformal field theory. In that article and its continuation, [24], the authors described in detail the Hamiltonian of this spin chain and how the algebraic Bethe ansatz works in that setting. In particular, they studied the Bethe roots and Bethe states of this model by computing the strongly twisted limit of the Bethe roots and Bethe states constructed for the theory at finite twist. Using this approach, they were able to find the correct number of Bethe roots, but due to the coalescence of eigenstates at the exceptional point, they were not able to find the full set of Bethe states.
Using the method presented here, which instructs us to compute limits of linear combinations of eigenvectors that coalesce to the same vector at the exceptional point, we were able to find the generalised eigenvectors of this eclectic spin chain, together with the eigenvectors that the authors of [24] knew existed from numerical computations but could not find by computing the limit of individual eigenvectors. In fact, our counting of eigenstates and generalised eigenstates agrees with theirs for the particular values of the number of excitations we consider here.
The counting from [24] was later refined to any values of M and K in [25] by means of combinatorial arguments. The cornerstone of this computation is the definition of generalised eigenvector and the fact that the highest eigenstate of the longest Jordan chain is always an anti-locked state. This allows the authors to use orthogonality and combinatorial arguments to extract all the generalised eigenvectors with no knowledge of the finitely twisted spin chain. All the results we presented here are consistent with the ones they obtain for K = 1. In fact, we want to stress that they also find that the number of generalised eigenstates of Ĥ (ξ1,ξ2,ξ3) can be codified in terms of the same Gaussian polynomials. As their method and ours are substantially different, this matching is more than welcome.
Despite our success in finding and classifying the generalised eigenstates of the eclectic spin chain for K = 1, we have only shown that chain mixing is not present for the longest of the Jordan chains.
This is because, although we can reconstruct a Jordan chain from its lowest generalised eigenvector, the process is tedious and becomes more cumbersome as the chain gets longer (as we are reconstructing its left generalised eigenvectors one by one). This partly confirms the conjecture laid down in [25] claiming that there is no chain mixing in the eclectic spin chain.
Another interesting direction we would like to pursue is to investigate whether we can adapt the method to the Algebraic Bethe Ansatz. Among other advantages, the Nested Algebraic Bethe Ansatz gives us a systematic method for constructing the eigenstates of a diagonalisable integrable system, which allows us to describe them and their properties without too much effort. Despite that, the explicit form of the wavefunctions is not immediately available, and obtaining it requires additional algebraic computations that are most of the time far from easy. This is the main reason why we made use of the NCBA in this article, as our method requires us to know the wavefunctions of the diagonalisable Hamiltonian in detail.
In fact, the discussion at the end of section 5 already points in this direction. Although we may be able to construct eigenvectors of the non-diagonalisable transfer matrix by computing the limit of eigenvectors of a diagonalisable one, we have shown that not every eigenvector can be constructed using this procedure due to the presence of chain mixing (see also [24]). This means that the limits of the B operators of the diagonalisable transfer matrix are not enough to formulate an Algebraic Bethe Ansatz for non-diagonalisable R-matrices. Nevertheless, the above argument would work for every eigenvector regardless of its origin. Thus, the only obstacle would be to find a method to construct all the eigenvectors of the non-diagonalisable transfer matrix τ (u) from first principles.
Finally, in this article we have applied this algorithm to compute generalised eigenvectors of the eclectic spin chain, but nothing prevents us from applying it to other non-diagonalisable integrable systems. All the integrable systems with nearest-neighbour interaction and either su(2) or su(2) ⊕ su(2) symmetry were classified in [15] and [31,32,33], respectively. Some of the R-matrices that appear in these classifications are non-diagonalisable, making their transfer matrices good candidates for the algorithm presented here.