Kerov functions for composite representations and Macdonald ideal

Kerov functions provide an infinite-parametric deformation of the set of Schur functions, which is a far-reaching generalization of the 2-parametric Macdonald deformation. In this paper, we concentrate on a particular subject: the Kerov functions labeled by the Young diagrams associated with the conjugate and, more generally, composite representations. Our description highlights peculiarities of the Macdonald locus (ideal) in the space of the Kerov parameters, where some formulas and relations get drastically simplified. However, even in this case, they substantially deviate from the Schur case, which illustrates the problems encountered in the theory of link hyperpolynomials. An important additional feature of the Macdonald case is uniformization, a possibility of capturing the dependence on $N$ for symmetric polynomials of $N$ variables in a single variable $A=t^N$, while in the generic Kerov case the $N$-dependence looks considerably more involved.


Introduction
In this paper, we continue our study of the theory of Kerov functions [1][2][3][4] and their applications. We give a brief summary of the issues already reviewed in [6] and proceed to a very important set of questions related to the Kerov functions labeled by the Young diagrams associated with the conjugate and, more generally, composite representations. Though literal representation theory does not work in the Kerov case, for the sake of simplicity, we refer to these Kerov functions as associated with composite representations. They are important in all applications, especially in the theory of topological vertices. As we shall see, the simplicity of this story in the non-refined case, i.e. for the Schur polynomials, was actually due to a very simple conjugation rule $\mathrm{Schur}_{(\emptyset,P)}[X] := \mathrm{Schur}_{\bar P}[X] = (\det X)^{P_1}\cdot \mathrm{Schur}_{P}[X^{-1}]$ (1), which has a natural lifting to composite representations in the form of the Koike formula (2) [7][8][9]. Here $X$ is the $N\times N$ matrix defining the $N$-dimensional Miwa locus in the space of ordinary time variables $p_k = \mathrm{tr}\, X^k$, $R$ and $P$ are the Young diagrams, $P_1 \geq P_2 \geq \ldots \geq P_{l_P} \geq 0$ and similarly for $R$, and $\bar P$ is the conjugate of $P$. The Schur polynomials are symmetric functions of the eigenvalues $x_a$ of the matrix $X$ [10,11]. Our goal is to describe a Kerov version of these formulas, and the result is that the sum at the r.h.s. of (2) extends to a double sum over all the subdiagrams; generically, this is true not only for the Kerov functions, but also for the Macdonald polynomials [10], though the formulas in the latter case are much simpler. These extra terms cause problems for the refined version of [12], i.e. for constructing non-torus hyperpolynomials for the simplest link L8n8 from the conventional refined topological vertex of [13].
Moreover, even the Kerov counterpart of (1) acquires a full-fledged sum at the r.h.s. over all the Young diagrams of the size (number of boxes) $|R|$. At variance with (2), this effect disappears on the Macdonald locus, and this is an example of a situation where the Macdonald polynomials are simpler than the generic Kerov ones. However, after a minor change of the question, namely, how to get from (1) to (2), the simplicity disappears, at least partly. Of course, particular coefficients in the Macdonald deformation of (2) also look simpler. However, this may depend on the choice of notation: it is not very clear what kind of properties of these coefficients are truly simplified.
We describe only one such property in addition to the above-mentioned nullification of some coefficients: the uniformization [14], i.e. a possibility of capturing the $N$-dependence in a single parameter $A$, on which the coefficients depend rationally, as on all the other parameters. As explained in [6], the Kerov functions $\mathrm{Ker}_R\{p_k\,|\,g_k\}$ can be considered as depending on two sets of variables, which in many respects are similar. The Macdonald locus is given by the relations $g_k = \frac{\{q^k\}}{\{t^k\}}$, where, hereafter, we use the notation $\{x\} := x - x^{-1}$; it plays for the $g$-variables the same role as the topological locus $p_k = \frac{\{A^k\}}{\{t^k\}}$ does for the $p$-variables. Thus one can expect various factorizations to occur, and this is what really happens and leads to uniformization of the coefficients. However, an explicit description of the $p-g$ duality is still out of reach, and a systematic conversion of $p$-properties into $g$-properties is not yet available.
Of course, all this requires re-thinking and insights; we just make a first step in this direction and do not pretend to put the right accents on various observations we make. We begin in sec.2 by recalling various issues relevant for our discussion. The two auxiliary sections 3 and 4 are devoted to two important aspects of the generalizations: to denominator functions and to the description of the Macdonald locus as an ideal in the space of $g$-polynomials. Then secs.5 and 6 discuss the Kerov and Macdonald generalizations of the Koike formulas (1) and (2). All these results, summarized in sec.7, are largely speculative but already challenging. They call for a deeper study and understanding.

Definitions
Kerov functions are symmetric functions (polynomials) of the variables $x_a$ (eigenvalues of a matrix $X$), or, equivalently, of the time variables $p_k := \sum_a x_a^k = \mathrm{tr}\, X^k$; hence, they are labeled by Young diagrams, the degree of the polynomial being the size of the Young diagram $|R|$. They are also rational functions of the infinite set of parameters $g_k$ and depend on the first $|R|$ of them.
Following [2] and [6], we define a pair of Kerov functions as triangular combinations of Schur functions in two different orderings of Young diagrams (3). Here we denote by $R^\vee$ the transposition of the Young diagram $R$, and the sign $<$ refers to the lexicographical ordering: $R > R'$ if $r_1 > r'_1$, or if $r_1 = r'_1$ but $r_2 > r'_2$, or if $r_1 = r'_1$ and $r_2 = r'_2$ but $r_3 > r'_3$, and so on (4). The two summation rules in (3) are defined iteratively in $R$ and $R'$ from the orthogonality conditions w.r.t. the scalar product, i.e. from the Gauss decomposition of the matrix or any of its powers, positive or negative. Here $\psi_R(\Delta)$ are the characters of the symmetric group $S_{|R|}$, and with the Young diagram $\Delta = \{\delta_1 \geq \delta_2 \geq \ldots \geq \delta_{l_\Delta}\}$, one associates a monomial $p_\Delta = \prod_{i=1}^{l_\Delta} p_{\delta_i}$. The variables $g_k$ parameterize the measure that defines the Kerov functions. The combinatorial factor $z_\Delta$ is best defined in the dual parametrization of the Young diagram, $\Delta = \{\ldots, 2^{m_2}, 1^{m_1}\}$; then $z_\Delta = \prod_k k^{m_k}\cdot m_k!$. Note that the normalization of $\mathrm{Ker}^{(g)}$ is already fixed by the choice of the unit diagonal coefficient (the first term) in (3), $\mathcal{K}^{(g)}_{RR} = 1$, so that the norm $||\mathrm{Ker}^{(g)}||$ is a deducible quantity.
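As a small computational aside (a sketch in Python, not part of the original text), the combinatorial factor $z_\Delta = \prod_k k^{m_k}\, m_k!$ is easy to implement and to sanity-check against the standard fact that $n!/z_\Delta$ counts permutations of cycle type $\Delta$:

```python
from math import factorial

def partitions(n, max_part=None):
    """Generate all partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def z(delta):
    """Combinatorial factor z_Delta = prod_k k^{m_k} * m_k!."""
    mult = {}
    for part in delta:
        mult[part] = mult.get(part, 0) + 1
    result = 1
    for k, m in mult.items():
        result *= k**m * factorial(m)
    return result

# n!/z_Delta counts permutations of cycle type Delta, so the sum over
# all partitions of n must reproduce |S_n| = n!.
n = 5
assert sum(factorial(n) // z(d) for d in partitions(n)) == factorial(n)
print(z((2, 1, 1)))  # 2^1*1! * 1^2*2! = 4
```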
We refer to [6] for a comprehensive collection of properties of Kerov functions (i.e. to the Kerov lifting of all the relevant properties of the Schur polynomials). In the rest of this subsection, we mention some of them in the form suited for the discussion in the present paper.

Denominator functions
In Kerov calculus, one associates with each Young diagram $R$ four numbers: $\nu_R$, the sequence number of $R$ in the lexicographic ordering; $\nu'_R$, the sequence number of $R$ from the end of the lexicographic ordering; $\bar\nu_R$, the sequence number of $R$ in the transposed lexicographic ordering; and $\bar\nu'_R$, the sequence number of $R$ from the end of the transposed lexicographic ordering (9), so that $\nu_R + \nu'_R - 1$ is the number of partitions at level $n=|R|$. Since the Kerov functions are rational functions of $g_k$, the first special functions in Kerov calculus are their denominators $\Delta_R$ and $\bar\Delta_R$, depending on the first $|R|$ of the $g_k$. The shape of these functions is actually controlled by the numbers $\nu$ (see sec.3.1 for an explicit example). Of course, $\bar\Delta_R$ differs from $\Delta_R$ only when $\nu_R \neq \bar\nu_R$. Denominator functions are positive integer polynomials of $g_k$ (modulo simple factorial multipliers); the first of them are listed in (11). As is clear from these examples, the structure of the numerators is similar to the corresponding $\Delta$, but they depend also on the $p$-variables and thus on the shape of $R$ in order to reproduce the Schur functions $\mathrm{Schur}_R\{p\}$ when all $g_k = 1$. At level 8, the degree is given only for the first set of Kerov functions. Note that these denominator polynomials have these degrees only when most of the $g_k$ remain independent: imposing just a few constraints $g_k = \xi^k$ (for an arbitrary $\xi$) makes the degree of the denominator polynomial lower. E.g. the denominator of $\mathrm{Ker}^{(g)}_{[1,1,2,3]}$ reaches degree 30 only if at most the constraint $g_2 = g_1^2$ is imposed. Adding, say, $g_3 = g_1^3$ decreases the degree. This illustrates a strong correlation between the numerators and denominators of the Kerov functions.
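The sequence numbers can be tabulated mechanically; the sketch below assumes that the transposed lexicographic ordering is the one induced by lexicographic comparison of the transposed diagrams (our reading of (9)), and finds, at level 7, the eight diagrams with $\nu_R = \bar\nu_R$ relevant for the coincidences discussed in sec.3.1:

```python
# The four sequence numbers attached to a Young diagram R at level n,
# under our reading of the "transposed lexicographic ordering".

def partitions(n, max_part=None):
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def transpose(r):
    return tuple(sum(1 for x in r if x >= i) for i in range(1, (r[0] if r else 0) + 1))

n = 7
plist = sorted(partitions(n), reverse=True)          # descending lexicographic order
P = len(plist)                                       # number of partitions at level n
nu  = {r: i + 1 for i, r in enumerate(plist)}        # nu_R
nup = {r: P + 1 - nu[r] for r in plist}              # nu'_R, counted from the end
nub = {r: P + 1 - nu[transpose(r)] for r in plist}   # bar-nu_R, transposed ordering

print(P)                                             # 15 partitions at level 7
coincide = [r for r in plist if nu[r] == nub[r]]
print(coincide)
```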

Transposition rule
It is a straightforward generalization of the transposition rule for the Schur functions, $\mathrm{Schur}_{R^\vee}\{p\} = (-1)^{|R|}\cdot \mathrm{Schur}_R\{-p\}$, but involves the inversion of the parameters $g_k$ and a switch between the two functions in (3) (13). In other words, it relates the Kerov function labeled by $\nu_R$ to the one labeled by $\bar\nu'_R$. Remarkably, formulas (10) in s.2.1.2 imply the existence of a far more interesting relation between the functions with $\bar\nu'_R = \nu_R + 1$, which still remains to be brought to a simple form like (13).
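For the Schur functions themselves, the transposition rule is elementary to verify numerically from the standard low-level expressions of the Schur polynomials through power sums (a sketch; the table of Schur polynomials below is standard, not taken from the text):

```python
# Numerical check of Schur_{R^vee}{p} = (-1)^{|R|} Schur_R{-p} for |R| <= 3.

def schur(R, p):
    p1, p2, p3 = p
    return {
        (1,):      p1,
        (2,):      (p1**2 + p2) / 2,
        (1, 1):    (p1**2 - p2) / 2,
        (3,):      (p1**3 + 3*p1*p2 + 2*p3) / 6,
        (2, 1):    (p1**3 - p3) / 3,
        (1, 1, 1): (p1**3 - 3*p1*p2 + 2*p3) / 6,
    }[R]

p = (0.7, -1.3, 2.1)
minus_p = tuple(-x for x in p)
pairs = [((1,), (1,)), ((2,), (1, 1)), ((3,), (1, 1, 1)), ((2, 1), (2, 1))]
for R, Rvee in pairs:                     # (R, its transpose)
    size = sum(R)
    assert abs(schur(Rvee, p) - (-1)**size * schur(R, minus_p)) < 1e-12
print("transposition rule verified for |R| <= 3")
```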

Skew Schur functions and multiplication rule
Like the skew Schur polynomials, the skew Kerov functions $\mathrm{Ker}_{R/R'}$ can be obtained as linear combinations of the ordinary Kerov functions (15), where $\mathcal{N}$ are the $g$-dependent Kerov Littlewood-Richardson coefficients, i.e. the structure constants in the multiplication rule of the second set of Kerov functions. The sums are rather restricted, since the Kerov Littlewood-Richardson coefficients $\mathcal{N}^{R''}_{R,R'}(g)$ are non-zero only in a bounded range of diagrams, and the skew functions are decomposed with the $\mathcal{N}$ themselves:

Cauchy summation formula
The Cauchy summation formula remains true for arbitrary Kerov functions and, more generally, in the skew case, where the sum over $\sigma$ at the r.h.s. contains only finitely many terms.
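Degree by degree, the Cauchy identity in the Schur case reads $\sum_{|R|=n}\mathrm{Schur}_R\{p\}\,\mathrm{Schur}_R\{p'\} = h_n$ evaluated at $P_k = p_k p'_k$, which can be checked numerically (a sketch for $n \le 3$, using Newton's recursion for the complete homogeneous functions $h_n$):

```python
# Graded check of sum_R Schur_R{p} Schur_R{p'} = exp(sum_k p_k p'_k / k).

def schur(R, p):
    p1, p2, p3 = p
    return {
        (1,): p1,
        (2,): (p1**2 + p2) / 2,
        (1, 1): (p1**2 - p2) / 2,
        (3,): (p1**3 + 3*p1*p2 + 2*p3) / 6,
        (2, 1): (p1**3 - p3) / 3,
        (1, 1, 1): (p1**3 - 3*p1*p2 + 2*p3) / 6,
    }[R]

def h(n, P):
    """h_n from Newton's recursion: m*h_m = sum_k P_k h_{m-k}."""
    hs = [1.0]
    for m in range(1, n + 1):
        hs.append(sum(P[k - 1] * hs[m - k] for k in range(1, m + 1)) / m)
    return hs[n]

p, pp = (0.7, -1.3, 2.1), (0.4, 1.9, -0.8)
P = tuple(a * b for a, b in zip(p, pp))   # effective power sums P_k = p_k p'_k
levels = {1: [(1,)], 2: [(2,), (1, 1)], 3: [(3,), (2, 1), (1, 1, 1)]}
for n, diagrams in levels.items():
    lhs = sum(schur(R, p) * schur(R, pp) for R in diagrams)
    assert abs(lhs - h(n, P)) < 1e-12
print("Cauchy identity verified up to level 3")
```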

Peculiarities of the Macdonald case
If one restricts the $g_k$ to $g_k = \frac{\{q^k\}}{\{t^k\}}$, the Kerov functions reduce to the Macdonald polynomials, which enjoy a series of peculiar properties. The basic ones are: (i) the Macdonald-Kostka coefficients between the Young diagrams with different sequence numbers in the two orderings in (3) are equal to zero; (ii) some generically non-zero structure constants vanish in the Macdonald case; (iii) uniformization.
Property (i) means that the Macdonald polynomials do not depend on the choice of the ordering in (3), i.e. the two sets of Kerov functions coincide on the Macdonald locus $g_k = \frac{\{q^k\}}{\{t^k\}}$. In fact, one can prove that they coincide if and only if the $g_k$'s are restricted to the Macdonald locus.
Property (ii) can be important for a precise correspondence with the representation theory of finite-dimensional Lie algebras: for example, the summation domain in the multiplication rule (16) is additionally limited by the decomposition rule of the tensor product of representations of the group $SL(N)$, $R \otimes R' = \oplus R''$, only on the Macdonald locus; beyond it, all the Kerov Littlewood-Richardson coefficients $\mathcal{N}$ in (16) are non-vanishing. One can again prove that this restriction to the decomposition rule for representations emerges if and only if the $g_k$ are put on the Macdonald locus.
The third property (iii), uniformization, is a possibility of capturing the dependence of the polynomials in the conjugate and, more generally, in the composite representations of $SL(N)$ in a single parameter $A$, on which the coefficients depend rationally, as on all the other parameters. We discuss property (iii) in detail in sec.6.
In Kerov theory, a useful way to look at the Macdonald choice of parameters $g_k$ is to notice its close relation to a very different subject: the topological locus for the $p$-variables. Among other things, this sheds some light on the factorizations and other apparent simplifications of many formulas in the Macdonald case. It also shows the way to understand what happens away from it.

Topological locus and Macdonald locus
Since these two loci are going to play a special role in this paper, we recall them once again. The topological locus (TL) is a specialization of the ordinary time variables, $p^*_k = \frac{\{A^k\}}{\{t^k\}}$; it is the two-dimensional variety in the entire infinite-dimensional space $P = \{p_k,\ k = 1,\ldots,\infty\}$ of time variables where the Schur functions factorize, with $h_{i,j}$ the length of the hook $(i,j)$. For $A = t^N$, this is the well-known hook formula for the quantum dimensions $D_R$ of the representation $R$ of $U_t(SL(N))$. In knot theory, the quantum dimensions are interpreted as values of the unreduced colored Hopf polynomial for the unknot. The Macdonald locus (ML) is actually the same specialization, but in the space $G$ of the Kerov variables $g_k$, with $q$ playing the role of $A$. Accordingly, the $g$-dependent Schur functions, from which the Kostka coefficients and other ingredients of the Kerov functions are made, factorize on this locus. The Kerov functions depend on two sets of time variables, and, when the $g_k$ are restricted to the Macdonald locus, they turn into the Macdonald functions of $p_k$ only, which explicitly depend on $q$ and $t$. An additional non-trivial fact is that, like the Schur polynomials, these also factorize on the topological locus, provided $t$ is the same in (21) and (23), i.e. the Kerov functions factorize also on the intersection of these two loci: a 3-dimensional variety in $P \otimes G$ which we call the Macdonald topological locus (MTL). These quantities, explicitly described by the above hook formula, are often called Macdonald dimensions, and they provide expressions for the unreduced colored hyperpolynomials for the unknot. After a peculiar change of variables $A = a\sqrt{-\mathsf{t}}$, $q = -\mathsf{q}\mathsf{t}$, $t = \mathsf{q}$ (with $(a,\mathsf{q},\mathsf{t})$ the superpolynomial variables), which changes the sign from minus to plus in the differences ("differentials") $\{Aq^m t^n\}$, they are interpreted as (unreduced) superpolynomials for the unknot.
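The hook formula on the topological locus can be checked against the direct evaluation of the Schur functions through power sums; the sketch below uses the standard hook-content product $\prod_{(i,j)\in R}\{A t^{j-i}\}/\{t^{h_{i,j}}\}$ (our normalization, with $t$ in the role of the quantum-group parameter):

```python
# Quantum dimensions via the hook-content formula vs. Schur functions
# evaluated at the topological locus p*_k = {A^k}/{t^k}.

def qn(x):  # {x} := x - 1/x
    return x - 1 / x

def hook_dimension(R, A, t):
    """Product over boxes (i,j) of {A t^{j-i}} / {t^{hook(i,j)}}."""
    Rt = [sum(1 for r in R if r >= j + 1) for j in range(R[0])]  # transposed diagram
    D = 1.0
    for i, row in enumerate(R):
        for j in range(row):
            hook = (row - j) + (Rt[j] - i) - 1   # arm + leg + 1
            D *= qn(A * t**(j - i)) / qn(t**hook)
    return D

def schur(R, p):
    p1, p2, p3 = p
    return {
        (1,): p1, (2,): (p1**2 + p2) / 2, (1, 1): (p1**2 - p2) / 2,
        (3,): (p1**3 + 3*p1*p2 + 2*p3) / 6, (2, 1): (p1**3 - p3) / 3,
        (1, 1, 1): (p1**3 - 3*p1*p2 + 2*p3) / 6,
    }[R]

A, t = 1.7, 0.6
p_star = tuple(qn(A**k) / qn(t**k) for k in (1, 2, 3))
for R in [(1,), (2,), (1, 1), (3,), (2, 1), (1, 1, 1)]:
    assert abs(schur(R, p_star) - hook_dimension(R, A, t)) < 1e-9
print("hook formula matches Schur on the topological locus")
```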
(21) is a 2-dimensional sub-variety of the $N$-dimensional Miwa locus $p^X_k = \mathrm{tr}\, X^k = \sum_{a=1}^{N} x_a^k$, on which the Schur, Macdonald and Kerov functions are usually studied in the theory of symmetric functions; their restrictions to the Miwa locus are then denoted by $\mathrm{Schur}_R[X]$, etc. The Miwa locus plays a very important role in the present paper, because it is the place where these functions are naturally defined in the $N$-dependent conjugate and composite representations. Only the Schur functions $\mathrm{Schur}_R[X]$ with $l_R \leq N$ survive on the Miwa locus. The same remains true for the Macdonald polynomials. However, this is not always true for the first set of Kerov functions. The reason is that the lexicographic ordering does not imply that $l_{R'} \geq l_R$ for $R' < R$, and, therefore, (3) for $\mathrm{Ker}_R$ with $l_R = 3$ can include contributions from $\mathrm{Schur}_{R'}$ with $l_{R'} = 2$ and, as a result, does not vanish on the Miwa locus for $N = 2$.
At the same time, the second ordering does imply that $l_{R'} \geq l_R$ for $R' < R$, and hence $\mathrm{Ker}_R = 0$ whenever $N < l_R$ for the second set of Kerov functions, because of the corresponding property of the Schur functions.

Diagram-dependent deformation of topological locus
It is sometimes convenient to use a different view on the Miwa locus and a different way to introduce it: the discussion of the previous subsection implies a natural association of the Miwa variables with the lines of a Young diagram [9]. More concretely, we define the deformation of the topological locus by a Young diagram. In fact, it can be further promoted, since the $v_i$ do not need to be made from exponentials of the ordered integers $q^{s_i}$. Lifting (29) to composite representations $S \longrightarrow (R,P)$ is also straightforward. The factors in all these definitions correspond to using the Miwa variable formalism. The meaning of this deformed topological locus is not that immediate; however, it is the central ingredient of the character realization of the Hopf HOMFLY-PT polynomial [15]: the topological peculiarity of the Hopf link [9,16] implies that $p^*_S$ at $t = q$ is the argument of this Schur polynomial. Then (32) appears in the description of composite representations $(R,P)$. See [9] for a detailed discussion and references. Surprisingly or not, the $(q,t)$-deformation of the deformed locus is straightforward.

Composite representations
The composite representation is the most general finite-dimensional irreducible highest-weight representation of $SL(N)$ [7,17,19]; it is associated with an $N$-dependent Young diagram. The ordinary $N$-independent representations in this notation are $R = (R,\emptyset)$, and their conjugates are $\bar R = (\emptyset, R)$. The simplest of the non-trivial composites is the adjoint $(1,1) = [2, 1^{N-2}]$. Vogel's universality [20], providing a unified description of the representation theory of all simple Lie algebras at once (as well as of something else), is applicable precisely to the adjoint and its descendants (the "$E_8$-sector"), i.e. it is one of the many topics requiring knowledge of the composites. In knot theory, the composite representations appear in the study of counter-strand braids, which are one of the most convenient building blocks in the tangle calculus [21].
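The $N$-dependent composite diagram can be built explicitly; the sketch below encodes our reading of the construction (first $l_R$ rows equal to $R_i + P_1$, then $P_1$'s, then the complement of the reversed $P$) and reproduces the adjoint $[2, 1^{N-2}]$:

```python
# Minimal sketch of the N-dependent Young diagram of a composite
# representation (R, P) (our reading of the construction).

def composite(R, P, N):
    P1 = P[0] if P else 0
    rows = [R[i] + P1 if i < len(R) else P1 for i in range(N - len(P))]
    rows += [P1 - P[N - 1 - i] for i in range(N - len(P), N)]
    assert all(a >= b for a, b in zip(rows, rows[1:])), "not a diagram: N too small"
    return [r for r in rows if r > 0]

print(composite([1], [1], 5))     # adjoint (1,1) = [2,1,1,1] = [2,1^{N-2}] at N=5
print(composite([2, 1], [2], 4))  # the diagram [4,3,2] of the composite ([2,1],[2])
```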
The basic special function associated with a representation is its character, which for ordinary representations is expressed through the Schur functions, $\mathrm{Schur}_{(R,\emptyset)}\{p\} = \mathrm{Schur}_R\{p\}$. For composite representations, one needs their generalization, the composite Schur functions [9]. Because of the explicit $N$-dependence, they are not easy to define for arbitrary (generic) values of the time variables $p_k$. Fortunately, in most applications, we need only their reductions to just $N$-dimensional loci $p^*_V$ (of which the simplest one is the topological locus $p^*_k = \frac{\{A^k\}}{\{t^k\}}$, widely used in knot theory since [22]). At these peculiar loci, the composite Schur functions can be defined by the uniformization trick of [9], and they possess a nice description as a bilinear combination of the skew Schur functions (36). This formula is due to K. Koike [7]; see sec.6.
If one considers the Macdonald polynomials instead of the Schur functions, the uniformization still works, but it provides non-trivial expressions with additional poles in $A$ in the denominators, which appear already for the adjoint at the topological locus. Even worse, the Koike formula is not immediately deformed: a bilinear decomposition into the skew Macdonald polynomials survives only in the large-$A$ limit, but even then the single sum in (36) restricted to $\eta_2 = \eta_1^\vee$ is lifted to a double sum with the only restriction on the sizes, $|\eta_2| = |\eta_1|$.

Example of (10)
In this section, we begin with an explicit example of what (10) means and how it works at a reasonably representative, but simple (and still not fully generic) level $|R| = 7$. The labeling table in the case of level 7 and the relations satisfied by the denominator functions are listed below. The proportionality signs appear because we omit the monomial factors (powers of $g_k$) appearing in the $g$-inversion of a polynomial. Note that, in our notation, say, $\Delta_{[2,2,2,1]} = \Delta_7$. These ordering/notational details are essential for the validity of (10). Actually, at level $|R| = 7$, there are a few accidental (?) coincidences, i.e. additional accidental relations between the denominator functions. They follow by (10) from the coincidence between the two Kerov functions (3) in three cases: $R = [3, 2, 1, 1]$, $[4, 2, 1]$ and $[5, 2]$, when $\nu_R = \bar\nu_R$ (in addition to the five generic coincidences for $[7]$, $[6,1]$, $[2,2,1,1,1]$, $[2,1,1,1,1,1]$ and $[1,1,1,1,1,1,1]$).

Denominator functions
The next addition to (11) helps to reveal the structure of the denominators $\Delta^{(m)}_r$, in the obvious abbreviated notation:

Macdonald locus as an ideal in the space of g-polynomials
We now discuss the Macdonald ideal in the ring of all polynomials of the variables $g_k$, i.e. the ideal of polynomials that vanish on the Macdonald locus. We first consider the ideal in the ring of all polynomials, and then concentrate on the ideal in the sub-ring of homogeneous polynomials, since it is homogeneous polynomials that emerge within the Kerov polynomial context.

Inhomogeneous polynomials
If we parameterize the Macdonald locus through trigonometric functions, $g_r = \frac{\sin(ar)}{\sin(br)}$, then all such $g_r$ can be easily expressed through $g_1$ and $g_2$. To get these expressions, it is enough to (i) represent $\sin(ar)$ and $\sin(br)$ as polynomials of $\sin(a)$ and $\sin(b)$ respectively, (ii) substitute $\sin(a) = g_1 \sin(b)$, $\cos(a) = \frac{g_2}{g_1}\cos(b)$ and, finally, (iii) exclude $b$. Now one can use de Moivre's formula: one obtains one set of expressions for even $r = 2n$ and another for odd $r = 2n+1$. The arguments of the two polynomials in the numerator and the denominator are related by the simultaneous inversion of the two independent variables. As a result, the ideal relations are invariant under the inversion of all $g_r \longrightarrow g_r^{-1}$. The simplest examples are given below. Now we turn to the Macdonald ideal in the sub-ring of homogeneous polynomials. We will use the following notation: $V^{(m)}_r$ is a homogeneous (properly graded) polynomial of $g_1, \ldots, g_r$ of degree $m$, which vanishes on the Macdonald locus; $V^{(n)}_r$ is a similar polynomial, but depending only on $g_r$ and $g_1, g_2, g_3$. It is clear that, at a given $r$, the minimal possible $n$ is not smaller than the minimal possible $m$.
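As an illustration of such ideal relations, eliminating $u = q + q^{-1}$ and $v = t + t^{-1}$ from $g_2/g_1 = u/v$, $g_3/g_1 = (u^2-1)/(v^2-1)$ and $g_4/g_2 = (u^2-2)/(v^2-2)$ yields one polynomial vanishing on the Macdonald locus (our own elimination; the generators listed in the text may differ). It is also covariant under the inversion $g_r \to g_r^{-1}$ up to a monomial, in accord with the inversion symmetry above:

```python
# Numerical check of one polynomial relation in the Macdonald ideal.
import random

def qn(x):          # {x} := x - 1/x
    return x - 1 / x

def ideal_relation(g1, g2, g3, g4):
    # obtained by eliminating u = q + 1/q and v = t + 1/t (our own elimination)
    return (g3 - g1) * (g1**2 * g4 - g2**3) - 2 * g1 * (g4 - g2) * (g1 * g3 - g2**2)

# vanishes on the Macdonald locus g_k = {q^k}/{t^k}:
for _ in range(5):
    q = random.uniform(1.1, 3.0)
    t = random.uniform(1.1, 3.0)
    g = [qn(q**k) / qn(t**k) for k in (1, 2, 3, 4)]
    assert abs(ideal_relation(*g)) < 1e-8

# off the Macdonald locus the relation generically fails:
assert abs(ideal_relation(1.0, 2.0, 3.0, 4.0)) > 1e-6

# inversion covariance: relation(g) = g1^3 g2^3 g3 g4 * relation(1/g)
lhs = ideal_relation(2.0, 3.0, 5.0, 7.0)
rhs = 2.0**3 * 3.0**3 * 5.0 * 7.0 * ideal_relation(1/2, 1/3, 1/5, 1/7)
assert abs(lhs - rhs) < 1e-9
print("relation vanishes on the Macdonald locus")
```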

Phenomenology
We start with low-degree examples in order to get a feeling of the general structures. Let us proceed degree by degree.
• There are no functions of $g_1, g_2, g_3$ only that vanish on the ML.

Composite Kerov and Macdonald functions
with the functions $p^{*V}_k(A, g_k)$, $\bar p^{*V}_k(A, g_k)$ yet to be defined. According to this definition, the uniform Kerov function may explicitly depend on $N$ and, indeed, it is a non-trivial and even non-polynomial function of $N$. In this section, we go through particular examples of increasing complexity with the goal to illustrate the structure of at least the l.h.s. of (71), this already being a non-trivial task.

Conjugate representations $\bar S = (\emptyset, S)$
The simplest behaviour under conjugation of the Young diagram is that of the Schur functions at the Miwa locus: what is transformed is the locus itself, see (72), where $\mathcal{X} := \det X = \prod_{a=1}^{N} x_a$. Already in this example, it is clear that a uniform $\mathrm{Ker}^{(g)}_{\bar S}\{p_k\}$ will not be easy to define, because $X^{-1}$ has no clear relation to $p_k$ on the Miwa locus (traces are not consistent with inversion).
The "$U(1)$-factor" $\mathcal{X}^{l_{S^\vee}}$ in (72) can be eliminated by the restriction from $GL(N)$ to $SL(N)$, i.e. by further restricting the Miwa locus to $\mathcal{X} = \det X = 1$. It is, however, useful to keep this factor, because it sheds additional light on the structure of the Kerov deformation. The power of $\mathcal{X}$ is defined by the sum of the sizes of $S$ and $\bar S$: the former is $|S|$, but the latter depends on $N$ and on the number $l_{S^\vee}$ of lines in the transposed $S$, as is clear from the picture in sec.2.3: $|\bar S| = N\cdot l_{S^\vee} - |S|$. Taking the sum instead of the difference is explained by the inversion of $X$ at the r.h.s. of (72).
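The counting $|\bar S| = N\cdot l_{S^\vee} - |S|$ is immediate to check by building $\bar S$ as the complement of the reversed $S$ in an $N\times S_1$ box (a sketch; the helper functions are illustrative):

```python
# Check of |bar S| = N * l_{S^vee} - |S| for the conjugate diagram bar S.

def conjugate(S, N):
    """SL(N)-conjugate diagram: bar(S)_i = S_1 - S_{N+1-i}."""
    assert len(S) <= N
    S1 = S[0]
    padded = list(S) + [0] * (N - len(S))
    return [S1 - padded[N - 1 - i] for i in range(N)]

def size(diagram):
    return sum(diagram)

S, N = [2, 1], 4
Sbar = conjugate(S, N)           # [2, 2, 1, 0]
l_Svee = S[0]                    # number of columns of S = lines of S^vee
assert size(Sbar) == N * l_Svee - size(S)   # 5 = 4*2 - 3
print(Sbar)
```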
Already the deformation of (72) is non-trivial: the r.h.s. contains several terms, all with the same power of $\mathcal{X}$; the sum runs over the diagrams $Q$ of the size $|S|$ (which is denoted by $Q \vdash |S|$). This structure will be further inherited by the formulas for generic composite representations: after the Kerov deformation, the Koike formula (36) acquires new terms as compared to the Schur case.
In the case of the antisymmetric representations $S = [1^s]$ with $S^\vee = [s]$ and $l_{S^\vee} = 1$, there is just a single new term at the r.h.s., and (72) remains un-deformed. In what follows, we denote $\bar{\mathcal{X}} := \mathcal{X}^{-1}$ to simplify the formulas. If it appears, $g^{-1}$ means the inversion of all $g_k \longrightarrow g_k^{-1}$.

Adjoint representation adj = ([1], [1])
The adjoint is the simplest of the composite representations; it is described by the Young diagram $[2, 1^{N-2}]$. In this simplest case, the denominator comes from the corresponding Kerov function, but a somewhat non-trivial $N$-dependence emerges, which is not easy to express through the uniform parameter $A$. However, this can be easily done on the Macdonald locus.

Conjugation of higher symmetric representation
The result has grading 30, which coincides with the denominator of $\mathrm{Ker}_{[3,3]}$.
Note that one can solve, say, the condition $B_{[2],[1,1]} = 0$ in order to determine $g_4$ as a function of $(g_1, g_2, g_3)$, then solve the two similar conditions $B_{[3],[1,2]} = 0$ and $B_{[3],[1,1,1]} = 0$ to determine $g_5$, $g_6$ as functions of $(g_1, g_2, g_3)$, etc. Thus, one obtains all the higher $g_k$'s as functions of three arbitrary parameters $(g_1, g_2, g_3)$. It turns out that, solving these conditions, one is unambiguously led to the Macdonald polynomials with the parameters $q$ and $t$ obtained from the equations below. All the other $g_k$ are then expressed through $q$ and $t$. The parameter $g_1$ remains unfixed, since the transformation of the measure $g_k \to \xi^k g_k$ with arbitrary $\xi$ does not change the symmetric polynomials. Thus, the requirement of the absence of additional structure constants is equivalent to the Macdonald ideal.
In the case of the Macdonald functions, the situation gets more involved (see also [23]). There are three basic modifications of (96): (i) The skew Macdonald polynomials instead of the skew Schur functions are expected to be sufficient only in the limit of $N \longrightarrow \infty$, which is interpreted as $A = t^N \longrightarrow 0$ at $|t| < 1$. This limit coincides with the limit of $A \longrightarrow \infty$.
(ii) The sum turns into a double sum over arbitrary diagrams $\eta_1$ and $\eta_2$ of equal sizes, but without the requirement $\eta_2 = \eta_1^\vee$. (iii) In the sum, there emerge non-unit coefficients that are functions of $q$ and $t$. Those in front of the items with $\eta_2 \neq \eta_1^\vee$ are suppressed by the factor $\{q/t\}$ (in fact, by a more interesting factor measuring the distance between $\eta_2$ and $\eta_1^\vee$).
In other words, (96) is substituted by (97). Note that the expansion parameter $-q/t = \mathsf{t}$ is exactly the same as in the Poincaré polynomials of the Khovanov-Rozansky complexes used in the definition of superpolynomials [24].
In the remaining part of this section, we provide examples of (97) for various cases; however, a general formula for the coefficients $B^{\zeta_1,\zeta_2}_{(R,P)}(A, q, t)$ is still missing.

Conjugate representations
It turns out that the conjugation property (72) is correct not only in the Schur case but, for a wide class of representations $S$, also on the whole Macdonald locus in the space $G$ of the time variables $g_k$, see (98). It remains to describe this class: it turns out to consist of the Young diagrams made of no more than two rectangles. Let us note that, at any concrete $N$, the Young diagram $(R, P)$ is conjugate to $(P, R)$ at the same $N$. For instance, $([2, 1], [2])$ at $N = 4$ is the Young diagram $[4,3,2]$, and its conjugate $\overline{[4, 3, 2]} = [4, 2, 1]$ is just $([2], [2,1])$ at $N = 4$. In this case, the property (98) is not satisfied. It also means that, when (98) is satisfied, i.e. for rectangular $R$ and $P$, there is an identity (99). For instance, it holds for $R$ and $P$ being symmetric and antisymmetric representations. It implies that the corresponding formulas for the symmetric and antisymmetric composite Macdonald polynomials in the next subsection turn out to be related to each other.
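The example above can be verified mechanically, using the same explicit construction of the composite diagram as in sec.2.3 (our reading of it; the helper functions below are illustrative):

```python
# At N = 4: the composite ([2,1],[2]) is [4,3,2]; its SL(4)-conjugate
# is [4,2,1] = ([2],[2,1]), i.e. conjugation swaps (R,P) -> (P,R).

def composite(R, P, N):
    P1 = P[0] if P else 0
    rows = [R[i] + P1 if i < len(R) else P1 for i in range(N - len(P))]
    rows += [P1 - P[N - 1 - i] for i in range(N - len(P), N)]
    return [r for r in rows if r > 0]

def conjugate(L, N):
    """SL(N) conjugation: bar(L)_i = L_1 - L_{N+1-i}, dropping zero rows."""
    padded = list(L) + [0] * (N - len(L))
    return [x for x in (padded[0] - padded[N - 1 - i] for i in range(N)) if x > 0]

N = 4
assert composite([2, 1], [2], N) == [4, 3, 2]
assert conjugate([4, 3, 2], N) == [4, 2, 1]
assert composite([2], [2, 1], N) == [4, 2, 1]   # conjugate of (R,P) is (P,R)
print("conjugate of ([2,1],[2]) at N=4 is ([2],[2,1])")
```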

Answers for symmetric and antisymmetric representations
For symmetric and antisymmetric representations, there are general formulas for the coefficients $B^{\zeta_1,\zeta_2}_{(R,P)}(A, q, t)$.
The ratio in front of the product in this formula replaces the multiplier $\{Aq^{r+p}\}$ in the product with $\{Aq^{r+p-2i}\}$. This formula is an illustration of (ii) and (iii): the Schur-level selection rule $\eta_2 = \eta_1^\vee$ is violated, but the deviations are damped by peculiar factorial factors. At variance with (101), this expression does not automatically vanish for $i > \min(r, p)$. The role of the ratio in front of the product is similar to that in (101).

The case of R = [2, 1]
When $R = [2, 1]$, the composite Macdonald polynomials can no longer be presented as a combination of the skew Macdonald polynomials. This is a new phenomenon, and we discuss it in some detail. The answer is not proportional to the skew Macdonald polynomial at finite $A$. Moreover, the deviation depends on $s$, i.e. it is not universal. Note that, at this level, there is just one skew Macdonald polynomial; thus, one cannot cure the deviation from it by taking linear combinations.
In particular, at $q = t$, the last item in (113) does not contribute: $\alpha_\emptyset$ vanishes. Also, at $q = t$, the penultimate term should reduce to the corresponding Schur function in the limit of small/large $A$.
The message that follows from these examples is very clear: (a) what exists in general is a decomposition of the composite Macdonald polynomial into the ordinary ones, but, at finite $A$, it cannot be reduced to a decomposition into the skew Macdonald polynomials; (b) however, at $A^{\pm 1} \longrightarrow \infty$, such a skew Macdonald decomposition exists at arbitrary $t$ and $q$; (c) but, at $q \neq t$, this decomposition involves the terms with arbitrary sub-diagrams $R/\eta_1$ and $P/\eta_2$, restricted not by the constraint $\eta_2 = \eta_1^\vee$, but only by $|\eta_2| = |\eta_1|$; the restriction/correlation appears only at $t = q$.

Conclusion
In this paper, we discussed the definition of the Kerov functions for the composite representations $(R, S)$. In the case of the Schur functions, such a definition is provided by the Koike formula (2) of [7][8][9], and it is crucially important for the study of HOMFLY polynomials [12]. However, its counterpart is not known even in the Macdonald case, which is a serious obstacle for extending the results of [12] to superpolynomials. The origin of the difficulties is highlighted by the study in the general, i.e. the Kerov, setting. Our natural conjecture in this paper is that, in the Kerov case, the formula involves a double sum over all the diagrams which precede $R$ and $P$ in the lexicographical ordering (including the smaller-size ones), and then some simplifications occur at the Macdonald and Schur levels. The main subjects relevant to this story, which we only introduced and which should be developed much further, are listed below. Of special significance is the search for other interesting loci where the Kerov functions acquire special properties and thus provide yet unknown multi-parametric generalizations of the Macdonald polynomials; the obvious options are the 3-Macdonald polynomials [25], the hypothetical 3-Schur functions [26] and/or the characters of the Pagoda [27], as well as the generalized characters needed in tensor models [28].
We hope that this paper, together with [6], proves that the development of computer methods makes the very difficult and long-neglected topic of Kerov functions available for efficient investigation, and we can expect many new results in the near future.