On moduli space of symmetric orthogonal matrices and exclusive Racah matrix $\bar S$ for representation $R=[3,1]$ with multiplicities

Racah matrices and higher $j$-symbols are used in the description of braiding properties of conformal blocks and in the construction of knot polynomials. However, in complicated cases the logic is actually inverted: they are much better deduced from these applications than from the basic representation theory. Following the recent proposal of arXiv:1612.00422, we obtain the exclusive Racah matrix $\bar S$ for the currently-front-line case of representation $R=[3,1]$ with non-trivial multiplicities, where it is actually operator-valued, i.e. depends on the choice of bases in the intertwiner spaces. Effective field theory for arborescent knots in this case possesses a gauge invariance, which is not yet properly described and understood. Because of this lack of knowledge a big part (about a half) of $\bar S$ needs to be reconstructed from orthogonality conditions. Therefore we discuss the abundance of symmetric orthogonal matrices, to which $\bar S$ belongs, and explain that the dimension of their moduli space is also about a half of that for ordinary orthogonal matrices. Thus the knowledge approximately matches the freedom, and this explains why the method can work -- with some limited addition of educated guesses. A similar calculation for $R=[r,1]$ with $r>3$ should also be doable.


Introduction
Racah matrices (also known as 6j-symbols) are a traditional topic in theoretical and mathematical physics, with a special chapter dedicated to them already in [2]. Despite a long history of research and all the available computer power, actual computation of these quantities remains among the most difficult problems, and until very recently all the non-trivial and interesting examples were out of reach. In modern theory $j$-symbols appear in two intimately related stories: modular transformations of conformal blocks and evaluation of physical observables in Chern-Simons theory [3] (known as Wilson loop averages or knot polynomials [4]). As often happens, these applications of Racah theory actually provide the most efficient way to calculate them. The present paper is one more illustration of this inverse feedback from physical would-be applications to basic mathematics: it reports a new breakthrough in Racah calculus -- to the series $[r,1]$ of representations with multiplicities -- where Racah matrices are extracted from a new deep piece of knowledge about knot polynomials: the structure of their differential expansion.
We do not go into details about knots, referring the interested reader to [1] and references therein. Instead we concentrate on the complementary part of the story, coming from the fact that one of the relevant Racah matrices, called $\bar S$, is actually a little peculiar: it is orthogonal, as any properly normalized 6j-symbol (when real-valued; in general it is unitary), but at the same time it is symmetric. The intersection of these two requirements restricts a matrix a lot -- and this allows one to reconstruct it from a fragment. A fragment is exactly what is currently known about $\bar S$ in representation $R=[3,1]$ from knot theory -- and it is of approximately the right size necessary for the reconstruction. The matching is not exact and, more importantly, not quite under control, because it is not clear how to separate the independent orthogonality constraints -- but it is at least a motivation for a try. In fact, one can add intuition of another kind: Racah matrices usually depend on the quantum group/knot theory parameters $q$ and $A$ in a relatively nice way: many of the matrix elements factorize into products/ratios of "differentials" (actually, of quantum numbers), and those which do not factorize deviate from the factorized form only "moderately". It is highly non-trivial to get such nearly-factorized quantities satisfying non-linear orthogonality relations -- and this imposes additional strong constraints, which, however, are still very difficult to formalize. In this paper we report the result of a tedious analysis, leading to a very plausible answer for $\bar S$ in representation $[3,1]$. It generalizes the celebrated result of [5] for $R=[2,1]$, which was obtained from first principles in a sophisticated 70-page paper, but became nearly trivial in the approach of [1]. The new formula is tested by providing polynomial expressions for HOMFLY-PT Wilson-loop averages for numerous knots, some of which (for 2-strand knots) are actually known from other sources.
In principle, one can now build the second exclusive matrix $S$ and apply the machinery of [6] to do calculations for all arborescent knots [7]. This is an important task, because the arborescent calculus of [6] is based on a very interesting effective field theory, which possesses a peculiar gauge invariance, associated with multiplicities in representation theory, and which is not yet satisfactorily formulated. One can expect that the multiplicity problem does not arise to its full size for representations smaller than $R=[4,2]$, because gauge invariance for them is actually partly broken by the diagonal matrices $T$ and $\bar T$ -- this appeared to be the case for $R=[2,1]$, but remains to be tested for $R=[3,1]$. This test is made possible by the result/conjecture of the present paper, but it is left for future work.
In this paper we concentrate on a problem of its own: evaluation of $\bar S_{[3,1]}$ in a particular basis. We begin by recalling the notion of Racah matrices in sec.2, then discuss the moduli space of symmetric orthogonal matrices in sec.3. After that, in sec.4 we briefly comment on the calculation, suggested in [1], which includes a clever choice of basis -- expressed in the form of a special ansatz for the shape of $\bar S$. The complement of the piece $S \subset \bar S$, which was earlier found in [1], is provided in explicit form in the Appendix; the full matrix is available -- together with all other currently known examples -- at the site [8]. A very brief description of immediate knot theory applications is provided in sec.5.

2 The options for Racah calculus

2.1 Racah matrices
The product of $m$ irreducible representations $R_i$ of a Lie algebra $G$ (classical or quantum) can be decomposed into a linear combination of irreps:
$$ \bigotimes_{i=1}^m R_i \ = \ \bigoplus_Q W_Q \otimes Q $$
If representation $Q$ appears at the r.h.s. with non-trivial multiplicity, then there is a space $W_Q$ of intertwiners, which is a representation of the symmetry group $S_m$. The Racah matrix $U$ describes a linear map between the spaces $W^{(3)}\big((R_1\otimes R_2)\otimes R_3 \to Q\big)$ and $W^{(3)}\big(R_1\otimes (R_2\otimes R_3) \to Q\big)$ of triple intertwiners. The labeling of $U$ looks natural in another pictorial representation, familiar from the study of dualities. If not only $W^{(3)}$ at the level of triple products, but also $W^{(2)}$ for ordinary products is non-trivial, then the matrix elements $U_{YZ}$ are actually linear operators, acting between the corresponding spaces $W^{(2)}_{R_1,R_2}$. Various $j$-symbols can be considered as the mixing matrices [9] between the $\mathcal{R}$-matrices, which are the generators of the braid group $B_m$; the Yang-Baxter relation then implies an expression for $U$ through $\mathcal{R}$, like the eigenvalue hypothesis [10,11].

2.2 The highest weight method [12]

This is the simplest straightforward approach to evaluation of $j$-symbols. One just explicitly describes the highest weights $h_Q$ within the Verma modules $(R_1\otimes R_2)\otimes R_3$ and $R_1\otimes (R_2\otimes R_3)$ and then compares. For example, one can describe the fundamental representation $[1]=V_0$ of $SL_\infty$ by the highest weight $|0\rangle$ and the action of simple roots (for the sake of brevity we omit the group-dependent coefficients, which can be easily restored). Then $[1]\otimes[1]$ is a combination of two representations with highest weights $|0\rangle\otimes|0\rangle$ and $|1\rangle\otimes|0\rangle - q\,|0\rangle\otimes|1\rangle$. At the next stage the triple products decompose similarly, where $(\ )$ and $[\ ]$ denote $q$-symmetrization and $q$-antisymmetrization. Clearly the highest weights in the two bracketings are different, and the Racah matrix relates them (properly normalized). Unfortunately, the complexity of calculations rapidly grows with the size of representations. The situation can be improved by a more advanced description of highest weights, say, by ($q$-deformed) Vandermonde products [12] and Gelfand-Zeitlin labeling [13] -- but only partly. Currently, the top achievement along these lines is the evaluation of inclusive Racah matrices for representations up to $R=[4,2]$.

2.3 Conformal block monodromies [14] and exclusive matrices $\bar S$, $S$
A potentially competitive method uses advances in the theory of conformal blocks. Since they can be represented by (appropriately defined) Dotsenko-Fateev integrals/sums [15] and thus belong to a class of $q$-hypergeometric functions, their modular properties, which are controlled by the $j$-symbols, should be comprehensible. The advantage of this approach is a relatively simple dependence of vertex operators on the representation, which gives a chance to get formulas for entire classes of representations at once. For an example of this kind at $q=1$ (i.e. for the central charge $c=\infty$, when multiple integrals are not always needed [16,17]) see [18].
The simplest of all are the 4-point conformal blocks with two vertices in representation $R$ and two in the conjugate representation $\bar R$. The corresponding 6j-symbols are now called exclusive Racah matrices $\bar S$ and $S$. They are difficult to calculate by the highest weight method, because highest weights of the conjugate representations depend strongly on the choice of the group $SL_N$ -- therefore one needs to calculate for different values of $N$ and then analytically continue. Instead these matrices can be sought by the evolution method in knot theory [19,20]. This paper describes a new achievement of this approach -- for representations $R=[r,1]$, where multiplicities begin to matter. We immediately reproduce in this way the difficult result of [5] for $R=[2,1]$ and conjecture the answer for $R=[3,1]$. This adds to the previously known cases of arbitrary symmetric representations $R=[r]$ in [21,22] and rectangular representations $R=[r^s]$ in [20,23] (in the latter case actually tabulated are Racah matrices for the two-line $R=[r,r]$ with $r\leq 5$, see [8]). Formulas for transposed representations (say, antisymmetric or two-column) are obtained by the change $q \longrightarrow -q^{-1}$ [24].
3 The abundance of matrices $\bar S$ and $S$

Yang-Baxter relation
Because of the Yang-Baxter relation the matrices $S$ and $\bar S$ are not independent. If we denote the diagonalized $\mathcal{R}$-matrices in the channels $R\otimes R$ and $R\otimes\bar R$ by $T$ and $\bar T$ respectively, then
$$ \bar T^{-1}\,\bar S\,\bar T^{-1} \ = \ S\,T\,S^{-1} \qquad (7) $$
Moreover, by its definition $\bar S$ is a symmetric orthogonal matrix, thus (7) defines $S$ as the diagonalizing matrix of the symmetric (but no longer orthogonal) $\bar T^{-1}\bar S\bar T^{-1}$, i.e. $S$ defines $\bar S$ and vice versa -- for given diagonal $T$ and $\bar T$ with no degenerate eigenvalues. If $T=\bar T$ and $S=\bar S$, then (7) becomes a non-trivial quadratic relation, presumably leading to the eigenvalue hypothesis [10,11]. Degeneration of eigenvalues of $T$ and $\bar T$ signals non-trivial multiplicities, though the situation is somewhat more involved: there can be "accidental" degeneracies, unrelated to multiplicities (at least in an obvious way) and, conversely, there can be multiplicities but no degeneracies (eigenvalues can still differ by a sign) -- both phenomena will show up in the discussion of representations $R=[r,1]$ in this paper.

Moduli space of symmetric orthogonal matrices
For ordinary orthogonal matrices of size $N\times N$ one usually imposes $\frac{N(N+1)}{2}$ orthonormality constraints on $N^2$ elements, and if the constraints are all independent, this leaves $\frac{N(N-1)}{2}$ free parameters. The simplest way to justify this is just to note that exp(antisymmetric) = orthogonal for any antisymmetric matrix, and an antisymmetric $N\times N$ matrix has exactly $\frac{N(N-1)}{2}$ independent elements.
However, such exponentiation will never produce a symmetric matrix (with the only exception of unity), i.e. symmetric orthogonal matrices do not possess an exponential realization. Already for $N=2$ they have the form $\sigma_3\cdot e^{i\alpha\sigma_2}$ rather than $e^{i\alpha\sigma_2}$ -- and this example is enough to demonstrate that now the $\frac{N(N+1)}{2}$ orthogonality constraints on the $\frac{N(N+1)}{2}$ independent elements are not always independent. If they were, there would be no free parameters (moduli) at all, but in fact the set of symmetric orthogonal matrices, to which $\bar S$ belongs, is not small at all.
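This obstruction is easy to see numerically; the following is a minimal sketch in Python/numpy (the angle $\alpha=0.7$ is an arbitrary choice, and the $2\times 2$ exponential is written in its closed rotation form):

```python
import numpy as np

# exp(antisymmetric) is orthogonal with det = +1; for the 2x2 generator
# alpha * [[0,-1],[1,0]] the exponential is the rotation matrix:
alpha = 0.7
R = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]])
assert np.allclose(R @ R.T, np.eye(2)) and np.isclose(np.linalg.det(R), 1.0)
# R is NOT symmetric unless alpha = 0 (mod pi):
assert not np.allclose(R, R.T)

# A symmetric orthogonal matrix, e.g. sigma_3 . R, has det = -1
# and therefore lies outside the image of the exponential map:
M = np.diag([1.0, -1.0]) @ R
assert np.allclose(M, M.T)                 # symmetric
assert np.allclose(M @ M.T, np.eye(2))     # orthogonal
assert np.isclose(np.linalg.det(M), -1.0)  # det = -1, unreachable by exp
```

The determinant argument shows only that the det $=-1$ component is missed; the matrix $M$ here is exactly of the $\sigma_3\cdot e^{i\alpha\sigma_2}$ form mentioned in the text.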

Eigenvalues and signature ofS
Racah matrices, needed in knot theory, are functions of the parameters $q$ and $A=q^N$, which can be arbitrary complex numbers. However, since the final quantities made out of them are Laurent polynomials, one can easily continue from the domain where the matrix in a particular representation $R$ is real-valued (for this one should just keep $A$ and $q$ real and $|A| > |q|^{\pm|R|}$). A real-valued symmetric matrix $\bar S$ can be diagonalized by conjugation with an orthogonal matrix and has real eigenvalues. Since $\bar S$ is at the same time orthogonal, these eigenvalues can only be $\pm 1$. Naturally, the spaces of such matrices are classified by their signatures -- the difference between the numbers of eigenvalues $+1$ and $-1$ -- and the dimension of the moduli space of symmetric orthogonal matrices depends on the signature. If all eigenvalues are the same, there are no moduli: the orthogonal conjugate of the unit matrix is the unit matrix itself.
Since the eigenvalues do not depend on $q$, they can be evaluated at $q=1$, when the diagonal $\bar T$ is also made from $\pm 1$ and the eigenvalues of $\bar S$ merge with those of $\bar T^{-1}\bar S\bar T^{-1}$, which, according to (7), are just the elements of the diagonal $T$. This means that for every Racah matrix $\bar S$ we actually know its signature -- it coincides with the signature of $T$. For example, for all symmetric representations $R=[r]$ the eigenvalues of $\bar S$ are just an alternating sequence $+1,-1,+1,-1,\ldots$, thus the signature is $0$ and $1$ for even and odd $N=r+1$ respectively, while signature $-1$ does not appear.
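These spectral statements are straightforward to check numerically. The sketch below (Python/numpy; the random orthogonal $Q$ and the size $N=5$ are arbitrary choices, not data from the text) builds a symmetric orthogonal matrix with the alternating spectrum of a symmetric representation and reads off its signature:

```python
import numpy as np

# Any real symmetric orthogonal matrix can be written as Q.diag(+-1).Q^T
# with orthogonal Q, so its eigenvalues are +-1 and the signature is their sum.
rng = np.random.default_rng(1)
N = 5
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))   # random orthogonal matrix
signs = np.array([1.0, -1.0, 1.0, -1.0, 1.0])      # alternating, as for R=[r]
S = Q @ np.diag(signs) @ Q.T

assert np.allclose(S, S.T)                          # symmetric
assert np.allclose(S @ S.T, np.eye(N))              # orthogonal
eig = np.linalg.eigvalsh(S)
assert np.allclose(np.abs(eig), 1.0)                # eigenvalues are +-1
signature = int(round(eig.sum()))
assert signature == 1    # N = 5 odd: alternating spectrum gives signature +1
```

For even $N$ the same alternating spectrum gives signature $0$, matching the statement in the text.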

The elementary cases of N = 3 and N = 2
For example, for $N=3$ the orthogonality condition leaves a 2-parametric set, which for $N=2$ ($c=0$, $d_3=1$) reduces to a 1-parametric family
$$ \begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix} \qquad (11) $$
(note that this is a rotation complemented by a reflection, and the determinant of the matrix is $-1$ rather than $1$). One can instead express the entries of the symmetric orthogonal matrix through those in the first line, satisfying $a^2+b^2+d_1^2=1$. The sign ambiguity in this expression is essential for our purposes: only one of the two branches (the one with $1-d_1$ in denominators) reproduces the right expression [19] for the Racah matrix $\bar S_{[2]}$. Technically this is related to the factorization identity $N(N+1)-2 = (N-1)(N+2)$, which has no analogue for $N(N+1)+2$. The true reason is that the different branches provide matrices with two different signatures, $+1$ and $-1$, and only the former is the right one for the Racah matrix $\bar S_{[2]}$. Thus it is not a surprise that the Racah matrix $\bar S_{[1]}$, which has signature $0$, is of the form (11) without any reservations. Here and further in the text we use the standard notation for quantum numbers and differentials.
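Both elementary cases can be verified directly; in the sketch below (Python/numpy) the realization of the $N=3$, signature-$+1$ component as reflections $1-2vv^T$ with a unit vector $v$ is our illustrative assumption, chosen because it visibly carries two parameters:

```python
import numpy as np

# The 1-parametric N=2 family (11): a rotation times a reflection, det = -1.
theta = 0.3
S2 = np.array([[np.cos(theta),  np.sin(theta)],
               [np.sin(theta), -np.cos(theta)]])
assert np.allclose(S2, S2.T)
assert np.allclose(S2 @ S2.T, np.eye(2))
assert np.isclose(np.linalg.det(S2), -1.0)

# N = 3, signature +1 (eigenvalues +1, +1, -1): reflections 1 - 2 v v^T
# with a unit vector v form a 2-parameter set of such matrices.
v = np.array([0.6, 0.0, 0.8])          # any unit vector
S3 = np.eye(3) - 2.0 * np.outer(v, v)
assert np.allclose(S3, S3.T)
assert np.allclose(S3 @ S3.T, np.eye(3))
assert np.allclose(np.sort(np.linalg.eigvalsh(S3)), [-1.0, 1.0, 1.0])
```

The unit vector $v$ (two parameters, up to an overall sign) matches the 2-parametric set quoted in the text for $N=3$.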

The case of generic N
For higher $N$ the off-diagonal constraints look like
$$ P_{ij} \ = \ \bar s_{ij}\,(\bar s_{ii}+\bar s_{jj}) + \sum_{k\neq i,j} \bar s_{ik}\bar s_{kj} \ = \ 0, $$
while the diagonal constraints are
$$ P_{ii} \ = \ \sum_{k} \bar s_{ik}^2 - 1 \ = \ 0, $$
and it is not immediately clear which of them are actually independent. As we shall see in this subsection, the answer is indeed far from obvious. The dimension of the moduli space is equal to the corank of the $\frac{N(N+1)}{2}\times\frac{N(N+1)}{2}$ matrix $\frac{\partial P_{ij}}{\partial \bar s_{kl}}$, i.e. to the number of its vanishing eigenvalues -- at a point where all $P_{ij}=0$. One can easily measure these eigenvalues at symmetric representations $R=[r]$, where the symmetric orthogonal matrix $\bar S$ of signature parity$(r+1)$ = parity$(N)$ is explicitly known from [22]. The eigenvalues are $\pm 2$ and $0$, and the multiplicity of $0$ measures the dimension of the moduli space.
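The corank count can be reproduced numerically. The sketch below (Python/numpy; the test point is a random symmetric orthogonal matrix rather than an actual Racah matrix, and the step size and rank tolerance are ad hoc) estimates the Jacobian of the constraints by finite differences; its non-zero eigenvalues indeed cluster at $\pm 2$, and its corank matches the dimension $pq$ of the Grassmannian $O(N)/(O(p)\times O(q))$:

```python
import numpy as np

def jacobian_corank(S, eps=1e-6, tol=1e-3):
    """Corank of d(S.S - 1)/ds over the independent entries s_kl (k <= l),
    estimated by finite differences at the symmetric orthogonal point S."""
    N = S.shape[0]
    idx = [(k, l) for k in range(N) for l in range(k, N)]
    rows = []
    for (k, l) in idx:
        dS = np.zeros((N, N))
        dS[k, l] = dS[l, k] = eps               # symmetric perturbation
        dP = ((S + dS) @ (S + dS) - S @ S) / eps
        rows.append([dP[i, j] for (i, j) in idx])
    J = np.array(rows).T                        # constraints x variables
    return len(idx) - np.linalg.matrix_rank(J, tol=tol)

# At a point Q.diag(+-1).Q^T with p pluses and q = N - p minuses, the
# expected corank is p*q = dim O(N)/(O(p) x O(q)).
rng = np.random.default_rng(2)
N, p = 5, 3                                     # q = 2, expected corank 6
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
S = Q @ np.diag([1.0]*p + [-1.0]*(N - p)) @ Q.T
assert jacobian_corank(S) == p * (N - p)
```

The kernel of this Jacobian consists of symmetric matrices anticommuting with $S$, which in the eigenbasis of $S$ are exactly the $p\times q$ off-diagonal blocks; this is the tangent-space picture behind the corank $=pq$ count.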

Conjecture about the moduli of symmetric orthogonal matrices
This gives certain support to the following conjecture: the dimension of the moduli space for $N\times N$ symmetric orthogonal matrices with the signature parity$(N)$ is
$$ D_N \ = \ \left[\frac{N^2}{4}\right] \qquad (20) $$
i.e. for large $N$ about a quarter of the elements of $\bar S$ are not fixed by the orthogonality constraints -- half as many as for ordinary orthogonal matrices. Still this freedom is quite big. It means that we should know at least $\frac{N(N+1)}{2} - D_N$ elements of the matrix $\bar S$ to have a chance of restoring the rest from the orthogonality constraints, as suggested in [1].
Of course, there is no immediate way to solve a set of quadratic equations (unless the advanced methods of non-linear algebra [25] are used, which require the explicit knowledge of the relevant resultants). The knowledge of a part of the matrix allows one to considerably simplify this problem -- as explained in [1], for $R=[2,1]$ it actually reduces to a system of linear equations. In the next section we comment on bigger representations -- there things are not so simple. Still, we get through to the final answer at least in the case of $R=[3,1]$.

4 Racah matrix $\bar S_{[3,1]}$ from [1]
Discovered in [1] was the shape of the differential expansion [21,19,26,23] for colored HOMFLY-PT polynomials of the antiparallel-double-braid knots (a certain 2-parametric generalization of twist knots) in representation $R=[3,1]$. After this structure is revealed, one knows the polynomials themselves, and from them one can easily read off a piece of the Racah matrix $\bar S$. It is actually the entire $\bar S$ for the multiplicity-free rectangular representations $R=[r^s]$, but for the non-rectangular ones, beginning from $R=[r,1]$, this is indeed a piece, moreover, a relatively small one. Namely, extracted is a $3r\times 3r$ sub-matrix $S$ of $\bar S_{[r,1]}$, which has the size $(7r-4)\times(7r-4)$ -- and comparison with (20) shows that this is far below the need: $\frac{3r(3r+1)}{2} \sim \frac{9}{2}r^2$ is parametrically much less than the half of $\frac{(7r-4)(7r-3)}{2} \sim \frac{49}{2}r^2$. To cure this problem it was suggested in [1] to make an educated guess and look for $\bar S$ in a special form, consistent with the empirical properties of the embedding $S \hookrightarrow \bar S$. This is a significant improvement: undetermined now are the $3(r-1)^2$ parameters $x,y,Y$, the $2r(r-1)$ parameters $u,v$ and the $2(r-1)(4r-3)$ parameters $z$, i.e. a total of $(13r-9)(r-1)$, which only slightly exceeds $D_{7r-4} \sim \frac{49}{4}r^2$. Moreover, some $\frac{r(r-1)}{2}$ combinations of $z$ are also expressible through $S$ -- thus, if all the orthogonality constraints were independent for this ansatz, they would be enough.
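The counting above is elementary to verify; a small pure-Python check of the quoted closed forms (agreement of two quadratics at many integer points fixes the identity):

```python
# Total number of undetermined parameters in the ansatz:
# 3(r-1)^2 [x, y, Y] + 2r(r-1) [u, v] + 2(r-1)(4r-3) [z] = (13r-9)(r-1)
for r in range(1, 100):
    total = 3*(r - 1)**2 + 2*r*(r - 1) + 2*(r - 1)*(4*r - 3)
    assert total == (13*r - 9)*(r - 1)

# Conjectured moduli dimension D_N = [N^2/4] at the relevant size N = 7r-4:
assert (7*2 - 4)**2 // 4 == 25   # r = 2: D_10 = 25
assert (7*3 - 4)**2 // 4 == 72   # r = 3: D_17 = 72
```

The two $D_N$ values here are exactly the ones used for $r=2$ and $r=3$ in the estimates below.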
For small $r$ such estimates are even more optimistic. For $r=1$ there is no freedom left -- and indeed, this is a simple case: $R=[1,1]$ is equivalent to $R=[2]$, and it is sufficient to switch $q \longrightarrow -q^{-1}$ in all the formulas.
For $r=2$ ($R=[2,1]$) there are 17 or even 16 free parameters (the sum of all $z$ is a known element of $S$), which is considerably smaller than $D_{10}=25$ -- and indeed, as explained in [1], in this case the orthogonality constraints are sufficient to restore the matrix $\bar S$ from $S$ and the above ansatz. This rather easily reproduces the result of [5], obtained by a complicated first-principle calculation. In fact, with minor additional guesses, coming from the desire to have factorization into quantum numbers, in this case it is sufficient to solve only linear equations, which makes the calculation really simple.
For $r=3$ the number of free parameters is 60 or 57, while the conjecture (20) gives $D_{17}=72$ -- thus there are also chances for success. This time the calculation requires essentially quadratic orthogonality equations and is pretty tedious. As a result we obtained a one-parametric family of symmetric orthogonal matrices, with the modulus parameterized by an angle $\theta$, which enters only the four $Y$-parameters. Thus, the orthogonality constraints are not sufficient in this case, even with a restrictive ansatz and with certain factorization guesses. This angle, however, can be fixed from an additional requirement -- that the eigenvalues of $\bar T^{-1}\bar S\bar T^{-1}$ are given by $T$. Moreover, after the angle is found in this way, factorization of the matrix elements of $\bar S$ significantly improves -- which means that the right value could also be guessed from factorization studies, if more effort were made.
See the Appendix for the list of parameters in $\bar S_{[3,1]}$ which remained undetermined in [1].

5 Torus test and new knot polynomials
An available test of these formulas is provided by evaluation of the 2-strand torus knots, which can be represented as 2-bridge knots, expressible only through the $\bar S$ and $\bar T$ matrices:
$$ \text{Torus}(2,2k+1) \ = \ \overline{\text{Finger}}\big(\underbrace{2,\ldots,2}_{2k\ \text{times}}\big) $$
Pictorially, the boxes in the corresponding diagram contain $2m$ twists of the two lines, and the expression for the HOMFLY-PT polynomial is obtained by the standard rules of [6]. The bar over "finger" reminds that it involves only crossings of anti-parallel lines. Since knot polynomials for torus knots are known for arbitrary representation $R$ from the Rosso-Jones formula [27,24], one can make a comparison -- and it is indeed successful. Every knot in the table has many realizations of this kind; we include only the simplest one. Underlined are the knots describable by only two non-vanishing parameters $m_1$ and $m_2$ -- these are double braids, which possess a remarkable factorization of differential-expansion coefficients into those for twist knots (double-underlined), and were the source of knowledge about the sub-matrix $S$ from [1]. Sensitive to the other elements of $\bar S$ (though not to the angle $\theta$, see the Appendix) are the non-underlined knots. For testing our formulas the torus knots were used -- the simplest of them are present in the table and marked by boxes. Omitted are the knots which are not 2-bridge, i.e. not representable as single fingers with both types of crossings allowed, parallel and antiparallel. The ones which are not yet(?) identified as single antiparallel fingers are labeled by question marks. The next immediate things to do are extraction of the matrix $S$ from (7) and development of arborescent calculus a la [6] for representation $R=[3,1]$. The matrix $\bar S_{[3,1]}$ is symmetric and orthogonal for an arbitrary value of the parameter $\theta$. Moreover, $\theta$ does not contribute to the expressions for 2-strand torus knots and other single-finger knots, considered in sec.5. However, it affects the eigenvalues of $\bar T^{-1}\bar S\bar T^{-1}$ and is fixed by comparison with the entries of $T$.
The $z$-constituents of $\bar S$ are listed in an order which reflects their hidden symmetry: $\bar S_{10,10} =$