1 Introduction

Let \(G_{N}^{(r)} (q)\) denote the number of flags \(0 = V_{0} \subseteq \cdots \subseteq V_{r} = \mathbf{F}_{q}^{N}\) of length r, where repetitions are allowed, in an N-dimensional vector space over a field with q elements. Then \(G_{N}^{(r)} (q)\) is a polynomial in q, a so-called generalized Galois number [23]. In particular, when r=2, these are the Galois numbers, which count the total number of subspaces of an N-dimensional vector space over a finite field [5].
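For the r = 2 case this is easy to check numerically. The following sketch (our illustration, not part of the original text) sums the standard product formula for the number of k-dimensional subspaces of \(\mathbf{F}_{q}^{N}\):

```python
from math import prod

def num_subspaces(N, k, q):
    """Number of k-dimensional subspaces of F_q^N, via the standard
    product formula (the Gaussian binomial evaluated at q)."""
    return prod(q**N - q**i for i in range(k)) // prod(q**k - q**i for i in range(k))

def galois_number(N, q):
    """G_N(q) = G_N^{(2)}(q): total number of subspaces of F_q^N."""
    return sum(num_subspaces(N, k, q) for k in range(N + 1))

# the first Galois numbers over F_2
print([galois_number(N, 2) for N in range(5)])  # → [1, 2, 5, 16, 67]
```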

The generalized Galois numbers are a biparametric family of polynomials, each with nonnegative integral coefficients, in the parameters N and r. We want to analyze their limiting properties as those parameters become large. Viewing each generalized Galois number as a discrete distribution on the real line, we determine the asymptotic behavior of this biparametric family of distributions. Our first result is the following:

Theorem 3.5

For r ≥ 2 and \(N \in \mathbf{N}\), let \(G_{N,r}\) be a random variable with probability generating function \(\mathrm {E}[q^{G_{N,r}}] = r^{-N} \cdot G_{N}^{(r)} (q)\). Then, for fixed r and N→∞, the distribution of the random variable

$$\frac{G_{N,r} - \mathrm {E}[G_{N,r}]}{\operatorname {Var}(G_{N,r})^{\frac{1}{2}}} $$

converges weakly to the standard normal distribution.

Furthermore, we derive an exact formula in terms of weighted inversion statistics on the descent classes of the symmetric group and determine the asymptotic behavior with respect to the second parameter:

Theorem 4.1

Consider the generalized Galois number \(G_{N}^{(r)}(q) \in \mathbf{N}[q]\) for r ≥ 2 and \(N \in \mathbf{N}\). Let \(\mathfrak{S}_{N}\) be the symmetric group on N elements, and for a permutation \(\pi \in \mathfrak{S}_{N}\), denote by \(\operatorname {inv}(\pi)\) its number of inversions and by \(\operatorname {des}(\pi)\) the cardinality of its descent set D(π). Then,

$$G_N^{(r)}(q) = \sum_{\pi \in \mathfrak{S}_N} \binom{N+r-1-\operatorname {des}(\pi)}{N} q^{\operatorname {inv}(\pi)}. $$

For fixed N and r→∞, we have

$$\frac{N !}{r^N} \cdot G_N^{(r)}(q) \to \sum _{\pi \in \mathfrak{S}_N} q^{\operatorname {inv}(\pi)} = [N]_q!. $$

To state one application, our exact formula in Theorem 4.1 allows us to reinterpret the asymptotic behavior of the numbers of equivalence classes of linear q-ary codes [7, 8, 24, 25] under permutation equivalence \((\mathfrak{S})\), monomial equivalence \((\mathfrak{M})\), and semi-linear monomial equivalence (Γ) as follows:

Corollary 5.2

The number of linear q-ary codes of length n up to equivalence \((\mathfrak{S})\), \((\mathfrak{M})\), and (Γ) is given asymptotically, as q is fixed and n→∞, by

$$N_{n,q}^{\mathfrak{S}} \sim \frac{1}{n!} \sum_{\substack{\pi \in \mathfrak{S}_n \\ \operatorname {des}(\pi) \leq 1}} \binom{n+1-\operatorname {des}(\pi)}{n} q^{\operatorname {inv}(\pi)} , $$
(1.1)
$$N_{n,q}^{\mathfrak{M}} \sim \frac{1}{(q-1)^{n-1} \, n!} \sum_{\substack{\pi \in \mathfrak{S}_n \\ \operatorname {des}(\pi) \leq 1}} \binom{n+1-\operatorname {des}(\pi)}{n} q^{\operatorname {inv}(\pi)} , $$
(1.2)
$$N_{n,q}^{\varGamma} \sim \frac{1}{a (q-1)^{n-1} \, n!} \sum_{\substack{\pi \in \mathfrak{S}_n \\ \operatorname {des}(\pi) \leq 1}} \binom{n+1-\operatorname {des}(\pi)}{n} q^{\operatorname {inv}(\pi)} , $$
(1.3)

where a=|Aut(F q )|=log p (q) with \(p = \operatorname {char}(\mathbf{F}_{q})\). In particular, the numerator of the asymptotic numbers of linear q-ary codes is the (weighted) inversion statistic on the permutations having at most 1 descent.

The organization of our article is as follows. Since the generalized Galois numbers are a specialization of the generalized Rogers–Szegő polynomials [23], which are generating functions of q-multinomial coefficients [17, 18, 22], we summarize the statistical behavior of the q-multinomial coefficients in Sect. 2. The determination of mean and variance for generalized Galois numbers and of the higher cumulants for q-multinomial coefficients in Sect. 3 allows us to prove the asymptotic normality of the generalized Galois numbers (Theorem 3.5) through the method of moments. In Sect. 4 we analyze the combinatorial interpretation of q-multinomial coefficients in terms of inversion statistics on permutations [21]. Based on our interpretation of the generalized Galois numbers as weighted inversion statistics on the descent classes of the symmetric group, we describe their limiting behavior toward the Mahonian inversion statistic (Theorem 4.1). We conclude with applications of our results in Sect. 5. That is, we derive further statistical aspects of generalized Rogers–Szegő polynomials in Corollary 5.1, reinterpret the asymptotic behavior of the numbers of linear q-ary codes in Corollary 5.2, and discuss implications for affine Demazure modules in Corollary 5.6 and joint probability generating functions of descent-inversion statistics (5.10), (5.11).

2 Notation and preliminaries

We denote by \(\mathbf{N}\) the set of nonnegative integers {0,1,2,3,…}. Let q be a variable, \(N \in \mathbf{N}\), and \(\mathbf{k}=(k_{1},\ldots,k_{r}) \in \mathbf{N}^{r}\). The q-multinomial coefficient is defined as

$$ \genfrac {[}{]}{0pt}{}{N}{\mathbf{k}}_q = \begin{cases} \frac{[N]_q !}{[k_1]_q ! \ldots [k_r]_q !} & \text{if } k_1 + \cdots + k_r = N, \\ 0 & \text{otherwise} . \end{cases} $$
(2.1)

Here, \([k]_{q} ! = \prod_{i=1}^{k} \frac{1-q^{i}}{1-q}\) denotes the q-factorial. Note that the q-multinomial coefficient is a polynomial in q and

$$\genfrac {[}{]}{0pt}{}{N}{\mathbf{k}}_q \bigg|_{q=1} = \binom{N}{\mathbf{k}} = \frac{N!}{k_1 ! \cdots k_r !}. $$
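The q-multinomial coefficient is straightforward to compute as a polynomial. The following sketch (our own helper, not from the text) uses the q-Pascal recurrence for Gaussian binomials and the factorization of the q-multinomial into Gaussian binomials, then checks the specialization at q = 1:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def gauss(n, k):
    """Gaussian binomial [n, k]_q as a coefficient tuple, via the
    q-Pascal recurrence [n,k]_q = [n-1,k-1]_q + q^k [n-1,k]_q."""
    if k < 0 or k > n:
        return (0,)
    if k == 0 or k == n:
        return (1,)
    a, b = gauss(n - 1, k - 1), gauss(n - 1, k)
    out = [0] * max(len(a), len(b) + k)
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i + k] += c
    return tuple(out)

def q_multinomial(N, ks):
    """[N; k_1,...,k_r]_q as a coefficient list, using the factorization
    into Gaussian binomials [N,k_1]_q [N-k_1,k_2]_q ... ."""
    if sum(ks) != N:
        return [0]
    poly, rest = [1], N
    for k in ks:
        g = gauss(rest, k)
        conv = [0] * (len(poly) + len(g) - 1)
        for i, a in enumerate(poly):
            for j, b in enumerate(g):
                conv[i + j] += a * b
        poly, rest = conv, rest - k
    return poly

# [4; 2,1,1]_q specializes at q = 1 to the multinomial coefficient 4!/(2!1!1!)
print(sum(q_multinomial(4, (2, 1, 1))))  # → 12
```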

For \(N \in \mathbf{N}\), the generalized Nth Rogers–Szegő polynomial \(H^{(r)}_{N}(\mathbf{z}, q) \in \mathbf{C}[z_{1}, \ldots , z_{r}, q]\) is the generating function of the q-multinomial coefficients:

$$H^{(r)}_N(\mathbf{z} , q) = \sum_{\mathbf{k} \in \mathbf{N}^r} \genfrac {[}{]}{0pt}{}{N}{\mathbf{k}}_q {\mathbf{z}^\mathbf{k}} . $$

Here we use multiexponent notation \({\mathbf{z}^{\mathbf{k}}} = z_{1}^{k_{1}} \cdots z_{r}^{k_{r}}\) for \(\mathbf{k}=(k_{1},\ldots,k_{r}) \in \mathbf{N}^{r}\). Note that, by our definition of the q-multinomial coefficients, it is convenient to suppress the condition \(k_{1} + \cdots + k_{r} = N\) in the summation index.

As described in [23], the q-multinomial coefficient \(\genfrac {[}{]}{0pt}{}{N}{\mathbf{k}}_{q}\) counts the number of flags \(0 = V_{0} \subseteq \cdots \subseteq V_{r} = \mathbf{F}_{q}^{N}\) subject to the conditions \(\dim (V_{i}) = k_{1} + \cdots + k_{i}\), and consequently the specialization of the generalized Rogers–Szegő polynomial \(H^{(r)}_{N} (\mathbf{z},q)\) at z=1=(1,…,1) counts the total number of flags of subspaces of length r in \(\mathbf{F}_{q}^{N}\). This number (a polynomial in q) is called a generalized Galois number and denoted by \(G_{N}^{(r)}(q)\). In particular, when r=2, the specializations of the Rogers–Szegő polynomials are the Galois numbers \(G_{N}(q)\) which count the number of subspaces in \(\mathbf{F}_{q}^{N}\) [5].

We will need notation from the context of symmetric groups. For a permutation \(\pi \in \mathfrak{S}_{N}\), D(π) is the descent set of π, \(\mathcal{D}_{T}\) the descent class of a set \(T \subset \{1, \ldots, N-1\}\), \(\operatorname {des}(\pi)\) the number of descents, and \(\operatorname {inv}(\pi)\) the number of inversions of π, i.e.,

$$D(\pi) = \bigl \{ i : \pi (i) > \pi (i+1) \bigr \} , \qquad \mathcal{D}_{T} = \bigl \{ \pi \in \mathfrak{S}_{N} : D(\pi) = T \bigr \} , $$
$$\operatorname {des}(\pi) = \bigl \lvert D(\pi) \bigr \rvert , \qquad \operatorname {inv}(\pi) = \# \bigl \{ (i,j) : i < j , \ \pi (i) > \pi (j) \bigr \} . $$

The sign ∼ refers to asymptotic equivalence; that is, for \(f, g \colon \mathbf{N} \to \mathbf{R}_{>0}\), we write f(n) ∼ g(n) if \(\lim_{n \to \infty} f(n)/g(n) = 1\). We write f(n) = O(g(n)) if there exists a constant C > 0 such that f(n) ≤ C g(n) for all sufficiently large n, and f(n) = o(g(n)) if \(\lim_{n \to \infty} f(n)/g(n) = 0\).

Let us recollect some known results about statistics of q-multinomial coefficients. Note that one has the usual differentiation method.

Proposition 2.1

Let X be a discrete random variable with probability generating function \(\mathrm {E}[q^{X}]=f(q) \in \mathbf{R}[q]\). Then,

$$\mathrm {E}[X] = f'(1) \quad \text{and} \quad \operatorname {Var}(X) = f''(1) + f'(1) - f'(1)^2 . $$
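As a quick numerical illustration (our own sketch, not part of the original argument), one can check the differentiation method on a uniform random variable on {0,…,5}, using the standard identities \(\mathrm{E}[X] = f'(1)\) and \(\operatorname{Var}(X) = f''(1) + f'(1) - f'(1)^{2}\) for a probability generating function f with f(1) = 1:

```python
from fractions import Fraction

# pgf f(q) = E[q^X] for X uniform on {0,...,5}, stored as a coefficient list
coeffs = [Fraction(1, 6)] * 6
f1 = sum(i * c for i, c in enumerate(coeffs))            # f'(1)
f2 = sum(i * (i - 1) * c for i, c in enumerate(coeffs))  # f''(1)
mean = f1
var = f2 + f1 - f1**2
print(mean, var)  # → 5/2 35/12
```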

Via Proposition 2.1 one can prove from the definition (2.1) of the q-multinomial coefficient:

Proposition 2.2

(See [1, Eqs. (1.9) and (1.10)])

For \(\mathbf{k}=(k_{1},\ldots,k_{r}) \in \mathbf{N}^{r}\), let \(X_{N,\mathbf{k}}\) be a random variable with probability generating function \(\mathrm {E}[q^{X_{N,\mathbf{k}}}] = \binom{N}{\mathbf{k}}^{-1} \cdot \genfrac {[}{]}{0pt}{}{N}{\mathbf{k}}_{q}\). Then,

$$\mathrm {E}[X_{N,\mathbf{k}}] = \frac{e_2(\mathbf{k})}{2} \quad \text{and} \quad \operatorname {Var}(X_{N,\mathbf{k}}) = \frac{e_2(\mathbf{k}) (N+1) - e_3(\mathbf{k})}{12} . $$

Here, \(e_{i}(\mathbf{k})\) denotes the ith elementary symmetric function in the variables \(\mathbf{k}=(k_{1},\ldots,k_{r})\).
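Taking \(\mathrm{E}[X_{N,\mathbf{k}}] = e_{2}(\mathbf{k})/2\) and \(\operatorname{Var}(X_{N,\mathbf{k}}) = (e_{2}(\mathbf{k})(N+1) - e_{3}(\mathbf{k}))/12\) as the formulas intended from [1] (our reading of the statement), they can be verified by enumerating all arrangements of small multisets:

```python
from itertools import permutations, combinations
from fractions import Fraction
from math import prod

def inversions(w):
    return sum(w[i] > w[j] for i in range(len(w)) for j in range(i + 1, len(w)))

def e(s, ks):
    """Elementary symmetric polynomial e_s(ks)."""
    return sum(prod(c) for c in combinations(ks, s))

for ks in [(2, 1), (1, 1, 1), (2, 2, 1)]:
    N = sum(ks)
    word = [i for i, k in enumerate(ks) for _ in range(k)]
    invs = [inversions(w) for w in set(permutations(word))]
    mean = Fraction(sum(invs), len(invs))
    var = Fraction(sum(v * v for v in invs), len(invs)) - mean**2
    assert mean == Fraction(e(2, ks), 2)
    assert var == Fraction(e(2, ks) * (N + 1) - e(3, ks), 12)
print("mean and variance match for all test multisets")
```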

3 Asymptotic normality of Galois numbers

Let us start by computing mean and variance of the generalized Galois numbers.

Lemma 3.1

Let \(G_{N,r}\) be a random variable with probability generating function \(\mathrm {E}[q^{G_{N,r}}] = r^{-N} \cdot G_{N}^{(r)} (q)\). Then,

$$\mathrm {E}[G_{N,r}] = \frac{(r-1) N(N-1)}{4r} \quad \text{and} \quad \operatorname {Var}(G_{N,r}) = \frac{(r^2-1) N(N-1) (2N+5)}{72 r^2} . $$

Proof

By Proposition 2.1 we have to compute the value of the derivatives \(\frac{d}{ dq}\) and \(\frac{d^{2}}{dq^{2}}\) of \(G_{N}^{(r)}(q) = H^{(r)}_{N}(\mathbf{1},q)\) evaluated at q=1. Since \(H^{(r)}_{N}(\mathbf{1},q) = \sum_{\mathbf{k} \in \mathbf{N}^{r}}\genfrac {[}{]}{0pt}{}{N}{\mathbf{k}}_{q}\), they can be computed from Proposition 2.2 via index manipulations in sums involving multinomial coefficients. We will need the identities

$$ \sum_{\mathbf{k} \in \mathbf{N}^r} \binom{N}{\mathbf{k}} e_s (\mathbf{k}) = s! \binom{N}{s} \binom{r}{s} r^{N-s} $$
(3.1)

and

$$ \sum_{\mathbf{k} \in \mathbf{N}^r} \binom{N}{\mathbf{k}} e_2 (\mathbf{k})^2 = r^N \frac{N^2 (r-1)^2 - N(r-1)^2+2(r-1)}{4 r^2} N(N-1) . $$
(3.2)

The last identity follows from

$$e_2^2 = \frac{1}{2} \bigl(p_4 - e_1^4 +4 e_2 e_1^2 -4 e_3 e_1 + 4 e_4\bigr), $$

where \(p_{s}\) denotes the sth power sum, and

Now, \(G_{N}^{(r)}(1) = r^{N}\) and, by (3.1),

For the variance, we will also need (3.2). That is,

 □
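The resulting closed forms (our reading of the lemma: \(\mathrm{E}[G_{N,r}] = (r-1)N(N-1)/(4r)\) and \(\operatorname{Var}(G_{N,r}) = (r^{2}-1)N(N-1)(2N+5)/(72 r^{2})\); treat these as a reconstruction) can be confirmed numerically by expanding \(G_{N}^{(r)}(q)\) as a polynomial:

```python
from itertools import product
from functools import lru_cache
from fractions import Fraction

@lru_cache(maxsize=None)
def gauss(n, k):
    # Gaussian binomial [n, k]_q as a coefficient tuple (q-Pascal recurrence)
    if k < 0 or k > n:
        return (0,)
    if k == 0 or k == n:
        return (1,)
    a, b = gauss(n - 1, k - 1), gauss(n - 1, k)
    out = [0] * max(len(a), len(b) + k)
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i + k] += c
    return tuple(out)

def generalized_galois(N, r):
    """Coefficient list of G_N^{(r)}(q): sum of q-multinomial coefficients
    over all weak compositions of N into r parts."""
    coeffs = [0]
    for ks in product(range(N + 1), repeat=r):
        if sum(ks) != N:
            continue
        poly, rest = [1], N
        for k in ks:  # q-multinomial as a product of Gaussian binomials
            g = gauss(rest, k)
            conv = [0] * (len(poly) + len(g) - 1)
            for i, a in enumerate(poly):
                for j, b in enumerate(g):
                    conv[i + j] += a * b
            poly, rest = conv, rest - k
        coeffs += [0] * (len(poly) - len(coeffs))
        for i, c in enumerate(poly):
            coeffs[i] += c
    return coeffs

def mean_var(coeffs):
    total = sum(coeffs)
    m1 = Fraction(sum(i * c for i, c in enumerate(coeffs)), total)
    m2 = Fraction(sum(i * i * c for i, c in enumerate(coeffs)), total)
    return m1, m2 - m1**2

N, r = 5, 3
mean, var = mean_var(generalized_galois(N, r))
assert sum(generalized_galois(N, r)) == r**N
assert mean == Fraction((r - 1) * N * (N - 1), 4 * r)
assert var == Fraction((r * r - 1) * N * (N - 1) * (2 * N + 5), 72 * r * r)
print(mean, var)  # → 10/3 100/27
```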

In order to prove asymptotic normality, we use the well-known method of moments [3, Theorem 4.5.5]. We will need some preparatory statements. First, from the description through elementary symmetric polynomials in Proposition 2.2 we can derive the asymptotic behavior of the first two central moments of the central q-multinomial coefficients and their square-root distant neighbors.

Proposition 3.2

Let \(\mathbf{k}_{\mathrm {c}}= (k_{1}^{(N)} , \ldots , k_{r}^{(N)}) \in \mathbf{N}^{r}\) be a sequence such that \(k_{1}^{(N)} + \cdots + k_{r}^{(N)} = N\) and \(\lfloor \frac{N}{r} \rfloor \leq k_{i}^{(N)} \leq \lceil \frac{N}{r} \rceil\). Let \(X_{N,\mathbf{k}_{\mathrm {c}}}\) be a random variable with probability generating function \(\mathrm {E}[q^{X_{N,\mathbf{k}_{\mathrm {c}}}}] = \binom{N}{\mathbf{k}_{\mathrm {c}}}^{-1} \cdot \genfrac {[}{]}{0pt}{}{N}{\mathbf{k}_{\mathrm {c}}}_{q}\). Then, as r is fixed and N→∞,

$$\mathrm {E}[X_{N,\mathbf{k}_{\mathrm {c}}}] \sim \frac{r-1}{4r} N^2 , $$
(3.3)
$$\operatorname {Var}(X_{N,\mathbf{k}_{\mathrm {c}}}) \sim \frac{r^2-1}{36 r^2} N^3 . $$
(3.4)

Furthermore, for k c as above and any sequence \(\mathbf{s} = (s_{1}^{(N)} , \ldots , s_{r}^{(N)}) \in \mathbf{N}^{r}\) such that \(s^{(N)}_{1} + \cdots + s^{(N)}_{r} = N\) and \(\lVert \mathbf{s} - \mathbf{k}_{\mathrm {c}} \rVert = O(\sqrt{N})\), we have, as r is fixed and N→∞,

$$\mathrm {E}[X_{N,\mathbf{s}}] \sim \frac{r-1}{4r} N^2 , $$
(3.5)
$$\operatorname {Var}(X_{N,\mathbf{s}}) \sim \frac{r^2-1}{36 r^2} N^3 . $$
(3.6)

Proof

The asymptotic equivalences (3.3) and (3.4) can be computed from the definition of the elementary symmetric polynomials (recall Proposition 2.2). Recall that we are dealing with sequences of k c’s and s’s, and that all asymptotics refer to fixed r and N→∞. We will treat the first moment for illustration purposes:

Furthermore, for the s’s in question one has

Therefore, the claimed asymptotic equivalences (3.5) and (3.6) follow immediately from Proposition 2.2 and the exhibited quadratic and cubic asymptotic growth (in N) of \(\mathrm {E}[X_{N,\mathbf{k}_{\mathrm {c}}}]\) and \(\operatorname {Var}(X_{N,\mathbf{k}_{\mathrm {c}}})\), respectively. □

By a method of Panny [15], we can determine the cumulants of q-multinomial coefficients explicitly. The same technique has already been applied by Prodinger [16] to obtain the cumulants of q-binomial coefficients. The exact formula will be stated as (3.9), but in the sequel we will only need the following asymptotic statement:

Lemma 3.3

Let \(\mathbf{k} = \mathbf{k}^{(N)} \in \mathbf{N}^{r}\) be any sequence such that \(k_{1}^{(N)} + \cdots + k_{r}^{(N)} = N\). For each \(N \in \mathbf{N}\), let \(X_{N,\mathbf{k}}\) be a random variable with probability generating function \(\mathrm {E}[q^{X_{N,\mathbf{k}}}] = \binom{N}{\mathbf{k}}^{-1} \cdot \genfrac {[}{]}{0pt}{}{N}{\mathbf{k}}_{q} \). Then, for all j≥1, the jth cumulant of \(X_{N,\mathbf{k}}\) is of order

$$ \kappa_j (X_{N,\mathbf{k}}) = O\bigl (N^{j+1}\bigr ) $$
(3.7)

as N→∞. Furthermore, if \(\lVert \mathbf{k} - \mathbf{k}_{\mathrm {c}}\rVert = O(\sqrt{N})\) for k c as above, then as r and α are fixed, and N→∞,

$$ \mathrm {E}\bigl [X_{N,\mathbf{k}}^\alpha\bigr ] \sim \mathrm {E}\bigl [X_{N,\mathbf{k}_{\mathrm {c}}}^\alpha\bigr ] \sim \biggl ( \frac{r-1}{4r} N^2 \biggr )^{\alpha} . $$
(3.8)

Proof

For k≥1, let Y k be a random variable with probability generating function \(\mathrm {E}[q^{Y_{k}}] = \frac{[k]_{q}!}{k!}\). Denote the jth cumulant of Y k by κ j,k . Panny [15, bottom of p. 176] shows that

$$\kappa_{j,k} = \begin{cases} \frac{k(k-1)}{4}, & j = 1, \\ \noalign {\vspace {2pt}} \frac{B_j}{j} \cdot ( \frac{B_{j+1}(k+1)}{j+1} - k ), & j \geq 2, \end{cases} $$

where \(B_{j}(x)\) denotes the jth Bernoulli polynomial evaluated at x, and \(B_{j}\) denotes the jth Bernoulli number. Note that \(B_{j} = 0\) for odd j ≥ 3.
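Panny's closed form can be tested against a brute-force computation of the inversion statistic on \(\mathfrak{S}_{k}\), whose probability generating function is \([k]_{q}!/k!\). A sketch (ours, with \(B_{2} = 1/6\) and \(B_{3}(x) = x^{3} - \frac{3}{2}x^{2} + \frac{1}{2}x\) hard-coded):

```python
from itertools import permutations
from fractions import Fraction

def inversions(w):
    return sum(w[i] > w[j] for i in range(len(w)) for j in range(i + 1, len(w)))

def B3(x):
    """Third Bernoulli polynomial B_3(x) = x^3 - (3/2) x^2 + (1/2) x."""
    return x**3 - Fraction(3, 2) * x**2 + Fraction(1, 2) * x

k = 4
# exact first two cumulants (mean and variance) of inv on S_k
invs = [inversions(w) for w in permutations(range(k))]
kappa1 = Fraction(sum(invs), len(invs))
kappa2 = Fraction(sum(v * v for v in invs), len(invs)) - kappa1**2

assert kappa1 == Fraction(k * (k - 1), 4)                  # Panny, j = 1
assert kappa2 == Fraction(1, 6) / 2 * (B3(k + 1) / 3 - k)  # Panny, j = 2 (B_2 = 1/6)
print(kappa1, kappa2)  # → 3 13/6
```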

Our random variable X N,k has the probability generating function

$$\mathrm {E}\bigl [q^{X_{N,\mathbf{k}}}\bigr ] = \binom{N}{\mathbf{k}}^{-1} \cdot \genfrac {[}{]}{0pt}{}{N}{\mathbf{k}}_q = \frac{\frac{[N]_q!}{N!}}{\frac{[k_1]_q!}{k_1!} \cdots \frac{[k_r]_q!}{k_r!}} . $$

Hence, its cumulant generating function is

$$\log \mathrm {E}\bigl [e^{tX_{N,\mathbf{k}}}\bigr ] = \log \frac{[N]_{e^t}}{N!} - \log \frac{[k_1]_{e^t}}{k_1!} - \cdots - \log \frac{[k_r]_{e^t}}{k_r!}, $$

and for j≥2, its cumulants are

$$ \everymath{\displaystyle }\begin{array}[b]{ll} \kappa_j(X_{N,\mathbf{k}}) &= \kappa_{j,N} - \kappa_{j,k_1} - \cdots - \kappa_{j,k_r} \\ \noalign {\vspace {7pt}} &= \frac{B_j}{j(j + 1)} \bigl( B_{j + 1}(N + 1) \\ \noalign {\vspace {7pt}} & \quad {}- B_{j + 1}(k_1 + 1) - \cdots - B_{j + 1}(k_r+ 1) \bigr) . \end{array} $$
(3.9)

Since \(B_{j}(x) \sim x^{j}\) as x→∞ and each \(k_{i} = O(N)\), the first part of the lemma follows (the case j=1 can be treated similarly).

For the second part, suppose \(\lVert \mathbf{k} - \mathbf{k}_{\mathrm {c}}\rVert= O(\sqrt{N})\). Since \(k_{i}^{(N)} = \frac{N}{r} + O(\sqrt{N})\), it follows that \(B_{j}(k_{i}^{(N)} + 1) \sim ( \frac{N}{r} + O(\sqrt{N}) )^{j} \sim \frac{1}{r^{j}} N^{j} \). Hence, if we consider the nonvanishing even cumulants \(\kappa_{2\beta}(X_{N,\mathbf{k}})\), we have

$$\kappa_{2\beta}(X_{N,\mathbf{k}}) \sim \frac{B_{2\beta}}{2\beta (2\beta +1)} \biggl (1 - \frac{1}{r^{2\beta}} \biggr ) N^{2\beta +1} , $$

i.e., \(\kappa_{2\beta}(X_{N,\mathbf{k}})\) is exactly of order 2β+1 in N.

We abbreviate \(\kappa_{j}(X_{N,\mathbf{k}})\) by \(\kappa_{j}\) for convenience. Recall that the moment generating function is the exponential of the cumulant generating function. Consequently, one has the following standard relation between higher moments and cumulants:

$$ \mathrm {E}\bigl [X_{N,\mathbf{k}}^\alpha\bigr ] = \sum_{\substack{\pi_1 + 2\pi_2 + \cdots + \alpha \pi_\alpha = \alpha \\ \pi_i \in \{ 0,1,\ldots , \alpha\}}} \biggl( \frac{\kappa_1}{1!} \biggr)^{\pi_1} \cdots \biggl( \frac{\kappa_\alpha}{\alpha!} \biggr)^{\pi_\alpha} \frac{\alpha!}{\pi_1! \cdots \pi_\alpha!} . $$
(3.10)

Since the jth Bernoulli polynomial has degree j and the higher cumulants \(\kappa_{j}\) of odd order vanish, we have (cf. [15, Sect. 3])

$$\biggl (\frac{\kappa_1}{1!} \biggr )^{\pi_1} \cdots \biggl (\frac{\kappa_\alpha}{\alpha !} \biggr )^{\pi_\alpha} = O\bigl (N^{2\pi_1 + 3\pi_2 + \cdots + (\alpha +1)\pi_\alpha}\bigr ) = O\bigl (N^{\alpha + \pi_1 + \cdots + \pi_\alpha}\bigr ) . $$
This leads to the asymptotic expansion

$$ \mathrm {E}\bigl [X_{N,\mathbf{k}}^{\alpha}\bigr ] = \kappa_1(X_{N,\mathbf{k}})^\alpha + O \bigl(N^{2\alpha-1}\bigr) . $$
(3.11)

But κ 1(X N,k )=E[X N,k ], and since \(\| \mathbf{k} - \mathbf{k}_{\mathrm {c}}\| = O(\sqrt{N}) \), our Proposition 3.2 yields

$$\kappa_1(X_{N,\mathbf{k}}) \sim \frac{r-1}{4r} N^2 $$

as N→∞. This finishes the proof. □

For our proof of Theorem 3.5 by the method of moments, we need to show that the moments of the standardized central q-multinomial coefficients converge to the moments of the standard normal distribution. We will show this more generally for q-multinomial coefficients in \(O(\sqrt{N})\)-distance to the center, since our arguments yield this without extra effort. Similar results have been obtained by Canfield, Janson, and Zeilberger [1] (for a history concerning this distribution see their erratum).

Proposition 3.4

Let \(\mathbf{k}_{\mathrm {c}}= (k_{1}^{(N)} , \ldots , k_{r}^{(N)}) \in \mathbf{N}^{r}\) be a sequence such that \(k_{1}^{(N)} + \cdots + k_{r}^{(N)} = N\) and \(\lfloor \frac{N}{r} \rfloor \leq k_{i}^{(N)} \leq \lceil \frac{N}{r} \rceil\). Furthermore, consider a sequence \(\mathbf{s} = (s_{1}^{(N)} , \ldots , s_{r}^{(N)}) \in \mathbf{N}^{r}\) such that \(s^{(N)}_{1} + \cdots + s^{(N)}_{r} = N\) and \(\lVert \mathbf{s} - \mathbf{k}_{\mathrm {c}} \rVert = O(\sqrt{N})\). Let X N,s be a random variable with probability generating function \(\mathrm {E}[q^{X_{N,\mathbf{s}}}] = \binom{N}{\mathbf{s}}^{-1} \cdot \genfrac {[}{]}{0pt}{}{N}{\mathbf{s}}_{q} \). Then the moments of the random variable

$$\tilde{X}_{N,\mathbf{s}} = \frac{X_{N,\mathbf{s}} - \mathrm {E}[X_{N,\mathbf{s}}]}{\sqrt{\operatorname {Var}{X_{N,\mathbf{s}}}}} $$

converge to the moments of the standard normal distribution. In particular, the distribution of \(\tilde{X}_{N,\mathbf{s}}\) converges weakly to the standard normal distribution.

Proof

Once we have shown the convergence of the moments, the weak convergence of the distributions follows by the method of moments. Hence we will concentrate on showing the convergence of the moments. Since the moments of a random variable depend polynomially, in particular continuously, on its cumulants, it is sufficient to show that the cumulants of \(\tilde{X}_{N,\mathbf{s}}\) converge to the cumulants of the standard normal distribution, 0,1,0,0,0,… . It follows directly from the definition of cumulants that

$$\kappa_1(\tilde{X}_{N,\mathbf{s}}) = 0 , \qquad \kappa_j(\tilde{X}_{N,\mathbf{s}}) = \frac{\kappa_j(X_{N,\mathbf{s}})}{\operatorname {Var}(X_{N,\mathbf{s}})^{j/2}} \quad \text{for } j \geq 2 . $$

Hence, for j≥3,

$$\kappa_j(\tilde{X}_{N,\mathbf{s}}) = \frac{\kappa_j(X_{N,\mathbf{s}})}{\operatorname {Var}(X_{N,\mathbf{s}})^{j/2}} = O\bigl (N^{j + 1 - 3j/2}\bigr ) \to 0 $$

by (3.4), (3.6), and (3.7). □
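The convergence can also be watched numerically. A sketch (ours) for the central q-binomial coefficient with r = 2 and N = 20: the third standardized moment vanishes exactly by the palindromic symmetry of the Gaussian binomial, and the fourth is already close to the Gaussian value 3 (the gap decays as N grows).

```python
from functools import lru_cache
from fractions import Fraction

@lru_cache(maxsize=None)
def gauss(n, k):
    # Gaussian binomial [n, k]_q as a coefficient tuple (q-Pascal recurrence)
    if k < 0 or k > n:
        return (0,)
    if k == 0 or k == n:
        return (1,)
    a, b = gauss(n - 1, k - 1), gauss(n - 1, k)
    out = [0] * max(len(a), len(b) + k)
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i + k] += c
    return tuple(out)

def central_moments(coeffs, top):
    """Exact central moments m_0,...,m_top of the distribution whose
    unnormalized weights are the given polynomial coefficients."""
    total = sum(coeffs)
    mean = Fraction(sum(i * c for i, c in enumerate(coeffs)), total)
    return [sum(c * (Fraction(i) - mean)**m for i, c in enumerate(coeffs)) / total
            for m in range(top + 1)]

# central q-binomial coefficient: r = 2, N = 20, k_c = (10, 10)
m = central_moments(gauss(20, 10), 4)
assert m[2] == 175                       # matches (e_2(N+1) - e_3)/12
assert m[3] == 0                         # symmetric distribution
assert abs(m[4] / m[2]**2 - 3) < Fraction(1, 5)
print(float(m[4] / m[2]**2))             # ≈ 2.82, approaching 3
```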

We are ready to prove the advertised asymptotic normality.

Theorem 3.5

For r ≥ 2 and \(N \in \mathbf{N}\), let \(G_{N,r}\) be a random variable with probability generating function \(\mathrm {E}[q^{G_{N,r}}] = r^{-N} \cdot G_{N}^{(r)} (q)\). Then, for fixed r and N→∞, the distribution of the random variable

$$\frac{G_{N,r} -\mathrm {E}[G_{N,r}]}{\operatorname {Var}(G_{N,r})^{\frac{1}{2}}} $$

converges weakly to the standard normal distribution.

Proof

We will deploy the method of moments (cf. [3, Theorem 4.5.5]). All asymptotics refer to N→∞. We will show that for any α,

$$ \mathrm {E}\bigl [G_{N,r}^\alpha\bigr ] \sim \mathrm {E}\bigl [X_{N,\mathbf{k}_\mathrm{c}}^\alpha\bigr ] . $$
(3.12)

Note that once we have shown (3.12), the theorem follows by the method of moments: We have already shown in Proposition 3.4 that the moments of the standardized \(X_{N,\mathbf{k}_{\mathrm{c}}}\) converge to the moments of the standard normal distribution; hence the same holds for the standardized G N,r by linearity of the expected value and the binomial theorem.

We use the notation

$$f(N) \lesssim g(N) \quad \text{if} \ \limsup_{N \to \infty} \frac {f(N)}{g(N)} \leq 1 . $$

In order to verify (3.12), we will show \(\mathrm {E}[G_{N,r}^{\alpha}] \gtrsim \mathrm {E}[X_{N,\mathbf{k}_{\mathrm{c}}}^{\alpha}]\) and \(\mathrm {E}[G_{N,r}^{\alpha}] \lesssim \mathrm {E}[X_{N,\mathbf{k}_{\mathrm{c}}}^{\alpha}]\) separately.

Let us start with \(\mathrm {E}[G_{N,r}^{\alpha}] \gtrsim \mathrm {E}[X_{N,\mathbf{k}_{\mathrm{c}}}^{\alpha}]\). For this, it is sufficient to prove that for all ε>0, there is an \(N_{0} \in \mathbf{N}\) such that

$$ \mathrm {E}\bigl [G_{N,r}^\alpha\bigr ] \geq (1 - \varepsilon ) \mathrm {E}\bigl [X_{N,\mathbf{k}_\mathrm {c}}^\alpha\bigr ] $$
(3.13)

for all \(N \geq N_{0}\). Let ε>0. For \(N \in \mathbf{N}\), let \(U_{N} \subset \{\mathbf{x} \in \mathbf{R}^{r} : x_{1} + \cdots + x_{r} = N\}\) be the ball around \((\frac{N}{r}, \ldots, \frac{N}{r})\) of minimal radius such that \(\sum_{\mathbf{k} \in U_{N}} \binom{N}{\mathbf{k}} \geq \sqrt{1-\varepsilon } \cdot r^{N} \). By the central limit theorem for ordinary multinomial coefficients, the radii of the \(U_{N}\) are proportional to \(\sqrt{N}\) up to an error of order O(1). Choose a sequence \(\mathbf{k}_{\min}\) with \(\mathbf{k}_{\min}^{(N)} \in U_{N}\) such that \(\mathrm {E}[X_{N,\mathbf{k}_{\min}}^{\alpha}] = \min_{\mathbf{k} \in U_{N}} \mathrm {E}[X_{N,\mathbf{k}}^{\alpha}]\). By (3.8), \(\mathrm {E}[X_{N,\mathbf{k}_{\min}}^{\alpha}] \sim \mathrm {E}[X_{N,\mathbf{k}_{\mathrm {c}}}^{\alpha}]\). Hence there is an \(N_{0}\) such that

$$\mathrm {E}\bigl [X_{N,\mathbf{k}_{\min}}^\alpha\bigr ] \geq \sqrt{1-\varepsilon } \cdot \mathrm {E}\bigl [X_{N,\mathbf{k}_{\mathrm {c}}}^\alpha\bigr ] $$

for all \(N \geq N_{0}\). Consequently,

We continue by showing \(\mathrm {E}[G_{N,r}^{\alpha}] \lesssim \mathrm {E}[X_{N,\mathbf{k}_{\mathrm{c}}}^{\alpha}]\). We claim that for all sequences k=k (N),

$$ \mathrm {E}\bigl [X_{N,\mathbf{k}}^\alpha\bigr ] \lesssim \mathrm {E}\bigl [X_{N,\mathbf{k}_\mathrm{c}}^\alpha\bigr ] . $$
(3.14)

In order to show this, we will treat the summands of expression (3.10) for \(\mathrm {E}[X_{N,\mathbf{k}}^{\alpha}]\) individually. Let us start with the index π=(α,0,…,0), i.e., the summand κ 1(X N,k )α. Note that

$$ \max_{\mathbf{x} \in N\Delta_{r-1}} e_2(\mathbf{x}) = e_2 \biggl( \frac{N}{r}, \ldots, \frac{N}{r} \biggr) = \frac{r-1}{2r} N^2 . $$
(3.15)

Here NΔ r−1 denotes the dilated (r−1)-dimensional standard simplex in R r. Hence,

$$\kappa_1(X_{N,\mathbf{k}}) = \frac{e_2(\mathbf{k})}{2} \leq \frac{r-1}{4r} N^2 $$

for all N, so our summand is bounded from above by

$$\kappa_1(X_{N,\mathbf{k}})^\alpha \leq \biggl(\frac{r-1}{4r} N^2 \biggr)^\alpha . $$

Now, consider a summand of (3.10) with index π≠(α,0,…,0). By the same arguments that lead to (3.11) such a summand is O(N 2α−1). Since the number of summands (the number of partitions of α) does not depend on N, we have

This proves (3.14). We are ready to prove the final inequality \(\mathrm {E}[G_{N,r}^{\alpha}] \lesssim \mathrm {E}[X_{N,\mathbf{k}_{\mathrm{c}}}^{\alpha}] \): Choose a sequence \(\mathbf{k}_{\max} = \mathbf{k}_{\max}^{(N)}\) in N r such that \(\mathrm {E}[X_{N,\mathbf{k}_{\max}}^{\alpha}] = \max_{\mathbf{k}} \mathrm {E}[X_{N,\mathbf{k}}^{\alpha}]\). Then

This verifies (3.12) and finishes the proof of the theorem. □

4 Galois numbers and inversion statistics

Let \(\mathcal{P}_{N,r}\) denote the set of partitions of an arbitrary integer into at most r parts, each of size at most N. Let \(p_{N,r} = \lvert \mathcal{P}_{N,r} \rvert\) denote the number of such partitions. Then

$$ p_{N,r} = \binom{N+r}{N} . $$
(4.1)

For T⊂{1,…,N} with t:=|T|≤r, let

$$\mathcal{P}_{N,r}^T = \bigl\{ \lambda \in \mathcal{P}_{N,r} : \{ \lambda_1, \ldots, \lambda_r \} \supset T \bigr\} $$

denote the subset of \(\mathcal{P}_{N,r}\) consisting of all the partitions such that each element of T occurs as the size of a part. Then removing one part of each size in T defines a bijection \(\mathcal{P}_{N,r}^{T} \to \mathcal{P}_{N,r-t}\). Hence,

$$ \bigl\vert \mathcal{P}_{N,r}^T \bigr\vert = p_{N,r-t} \stackrel{(4.1)}{=} \binom{N+r-t}{N}. $$
(4.2)

Note that we adhere to a more restrictive definition of binomial coefficients, namely \(\binom{a}{b} = 0\) unless 0 ≤ b ≤ a.
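Identities (4.1) and (4.2) can be checked by direct enumeration. In the following sketch (our own; T = {1, 3} is an arbitrary choice), partitions are represented as weakly decreasing tuples padded with zeros:

```python
from itertools import combinations_with_replacement
from math import comb

def partitions_in_box(N, r):
    """All partitions with at most r parts, each part of size at most N,
    as weakly decreasing r-tuples padded with zeros."""
    return [tuple(sorted(c, reverse=True))
            for c in combinations_with_replacement(range(N + 1), r)]

N, r = 4, 3
P = partitions_in_box(N, r)
assert len(P) == comb(N + r, N)            # identity (4.1)

T = {1, 3}                                  # each element of T must occur as a part size
PT = [lam for lam in P if T <= set(lam)]
assert len(PT) == comb(N + r - len(T), N)  # identity (4.2)
print(len(P), len(PT))  # → 35 5
```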

Theorem 4.1

Consider the generalized Galois number \(G_{N}^{(r)}(q) \in \mathbf{N}[q]\) for r ≥ 2 and \(N \in \mathbf{N}\). Let \(\mathfrak{S}_{N}\) be the symmetric group on N elements, and for a permutation \(\pi \in \mathfrak{S}_{N}\), denote by \(\operatorname {inv}(\pi)\) its number of inversions and by \(\operatorname {des}(\pi)\) the cardinality of its descent set D(π). Then,

$$G_N^{(r)}(q) = \sum_{\pi \in \mathfrak{S}_N} \binom{N+r-1-\operatorname {des}(\pi)}{N} q^{\operatorname {inv}(\pi)} . $$

For fixed N and r→∞, we have

$$\frac{N !}{r^N} \cdot G_N^{(r)}(q) \to \sum _{\pi \in \mathfrak{S}_N} q^{\operatorname {inv}(\pi)} = [N]_q! . $$

Proof

\(G_{N}^{(r)}(q) = H^{(r)}_{N}(\mathbf{1}, q)\), and by definition

$$H^{(r)}_N(\mathbf{1}, q) = \sum _{\substack{\mathbf{k} \in \mathbf{N}^r \\ k_1 + \cdots + k_r = N}} \genfrac {[}{]}{0pt}{}{N}{\mathbf{k}}_q . $$

Note that there is a bijection \(\mathcal{P}_{N,r-1} \to \{ \mathbf{k} \in \mathbf{N}^{r} : k_{1} + \cdots + k_{r} = N \}\) given by

$$\mathbf{k}_\lambda := (N - \lambda_1, \lambda_1 - \lambda_2, \ldots, \lambda_{r-2}-\lambda_{r-1}, \lambda_{r-1}) . $$

Hence,

$$H^{(r)}_N(\mathbf{1}, q) = \sum_{\lambda \in \mathcal{P}_{N,r-1}} \genfrac {[}{]}{0pt}{}{N}{\mathbf{k}_\lambda}_q. $$

By [21, Chap. 2, (20)],

$$\genfrac {[}{]}{0pt}{}{N}{{\mathbf{k}_\lambda}}_q = \sum _{\substack{\pi \in \mathfrak{S}_N \\ D(\pi) \subset \{\lambda_1, \ldots, \lambda_{r-1}\}}} q^{\operatorname {inv}(\pi)} . $$

Hence,

$$H^{(r)}_N(\mathbf{1}, q) = \sum_{\lambda \in \mathcal{P}_{N,r-1}} \; \sum_{\substack{\pi \in \mathfrak{S}_N \\ D(\pi) \subset \{\lambda_1, \ldots, \lambda_{r-1}\}}} q^{\operatorname {inv}(\pi)} = \sum_{\pi \in \mathfrak{S}_N} \bigl \vert \mathcal{P}_{N,r-1}^{D(\pi)} \bigr \vert \, q^{\operatorname {inv}(\pi)} = \sum_{\pi \in \mathfrak{S}_N} \binom{N+r-1-\operatorname {des}(\pi)}{N} q^{\operatorname {inv}(\pi)} . $$

The last equality follows from (4.2), and this proves our first assertion. As for the second claim, note that as N is fixed, 0 ≤ t ≤ N−1, and r→∞, we have

$$\binom{N+r-1-t}{N} \sim \frac{r^N}{N!} . $$

The stated decomposition of the inversion statistic as a q-factorial is well known. □

5 Applications and discussions

5.1 Rogers–Szegő polynomials

Our Lemma 3.1 provides the ingredients to determine the covariance matrix of the overall distribution of coefficients in the generalized Nth Rogers–Szegő polynomial.

Corollary 5.1

Given N,r>0, let \((\mathbf{X};Y)=(X_{1},\ldots,X_{r};Y)\) be a random vector with probability generating function \(\mathrm {E}[\mathbf{z}^{\mathbf{X}} q^{Y}] = r^{-N}H^{(r)}_{N}(\mathbf{z},q)\). Then, the covariance matrix of \((\mathbf{X};Y)\) is given by

$$\operatorname {Cov}(\mathbf{X};Y) = \begin{pmatrix} \varSigma (\mathbf{X}) & 0 \\ 0 & \operatorname {Var}(G_{N,r}) \end{pmatrix} , $$

where \(\varSigma (\mathbf{X})\) is the covariance matrix of the multinomial distribution and \(\operatorname {Var}(G_{N,r})\) is as in Lemma 3.1.

Proof

The diagonal (block) entries are clear, since the specialization at (z,1) gives the multinomial distribution, whereas the specialization at (1,q) is exactly the generalized Galois number studied in Lemma 3.1. Therefore, we only have to prove that \(\operatorname {Cov}(X_{i},Y) = 0\) for i=1,…,r, which can be shown as follows: By symmetry, it is clear that all \(\operatorname {Cov}(X_{i},Y)\) coincide. As X 1+⋯+X r =N almost surely, it follows that

$$0 = \operatorname {Cov}(N,Y) = \operatorname {Cov}(X_1 + \cdots + X_r, Y) = r \operatorname {Cov}(X_1, Y) . $$

 □

5.2 Linear q-ary codes

Consider the classical Galois numbers \(G_{n}(q) = G_{n}^{(2)}(q)\) that count the number of subspaces of \(\mathbf{F}_{q}^{n}\). For a general prime power q, Hou [7, 8] studies the number of equivalence classes \(N_{n,q}\) of linear q-ary codes of length n under three notions of equivalence: permutation equivalence (\(\mathfrak{S}\)), monomial equivalence (\(\mathfrak{M}\)), and semi-linear monomial equivalence (Γ). He proves that, as n→∞,

$$N_{n,q}^{\mathfrak{S}} \sim \frac{G_n(q)}{n!} , \qquad N_{n,q}^{\mathfrak{M}} \sim \frac{G_n(q)}{(q-1)^{n-1} \, n!} , \qquad N_{n,q}^{\varGamma} \sim \frac{G_n(q)}{a (q-1)^{n-1} \, n!} , $$

where \(a = \lvert \operatorname {Aut}(\mathbf{F}_{q}) \rvert = \log_{p}(q)\) with \(p = \operatorname {char}(\mathbf{F}_{q})\). The case of binary codes up to monomial equivalence, \(N_{n,2}^{\mathfrak{S}} \sim \frac{G_{n}(2)}{n!}\), was previously derived by Wild [24, 25]. Now, the following corollary is immediate from our Theorem 4.1.

Corollary 5.2

The number of linear q-ary codes of length n up to equivalence \((\mathfrak{S})\), \((\mathfrak{M})\), and (Γ) is given asymptotically, as q is fixed and n→∞, by

$$ N_{n,q}^{\mathfrak{S}} \sim \frac{1}{n!} \sum_{\substack{\pi \in \mathfrak{S}_n \\ \operatorname {des}(\pi) \leq 1}} \binom{n+1-\operatorname {des}(\pi)}{n} q^{\operatorname {inv}(\pi)} , $$
(5.1)
$$ N_{n,q}^{\mathfrak{M}} \sim \frac{1}{(q-1)^{n-1} \, n!} \sum_{\substack{\pi \in \mathfrak{S}_n \\ \operatorname {des}(\pi) \leq 1}} \binom{n+1-\operatorname {des}(\pi)}{n} q^{\operatorname {inv}(\pi)} , $$
(5.2)
$$ N_{n,q}^{\varGamma} \sim \frac{1}{a (q-1)^{n-1} \, n!} \sum_{\substack{\pi \in \mathfrak{S}_n \\ \operatorname {des}(\pi) \leq 1}} \binom{n+1-\operatorname {des}(\pi)}{n} q^{\operatorname {inv}(\pi)} . $$
(5.3)

where a=|Aut(F q )|=log p (q) with \(p = \operatorname {char}(\mathbf{F}_{q})\). In particular, the numerator of the asymptotic numbers of linear q-ary codes is the (weighted) inversion statistic on the permutations having at most 1 descent.

5.3 The symmetric group acting on subspaces over finite fields

Consider the character \(\chi_{N}(\tau) = \# \{ V \subset \mathbf{F}_{q}^{N} : \tau V = V \}\) of the symmetric group \(\mathfrak{S}_{N}\) acting on \(\mathbf{F}_{q}^{N}\) by permutation of coordinates. Lax [12] shows that the normalized character \(\chi_{N}(\tau)/G_{N}(q)\) asymptotically approaches the character which takes the value 1 on the identity and 0 otherwise.

Corollary 5.3

Consider the character χ N (τ) of the symmetric group \(\mathfrak{S}_{N}\) acting on \(\mathbf{F}_{q}^{N}\) by permutation of coordinates. Then, as N→∞, the normalized character

(5.4)

In particular, the character χ N approaches asymptotically the (weighted) inversion statistic on the permutations having at most 1 descent.

5.4 Affine Demazure modules

We refer the reader to [2, 4, 10] for the basic facts about the representation theory of affine Kac–Moody algebras and Demazure modules, and the notation used. Now, according to [6, Eq. (3.4)] and [19, Theorems 6 and 7], certain Demazure characters can be described via generalized Rogers–Szegő polynomials.

Lemma 5.4

Let \(r, N \in \mathbf{N}\), r ≥ 2, and 0 ≤ i < r with \(i \equiv N \bmod r\). Let \(q = e^{\delta}\), \(\mathbf{z} = (e^{\varLambda_{1} - \varLambda_{0}}, e^{\varLambda_{2} -\varLambda_{1}}, \ldots , e^{\varLambda_{r-1} - \varLambda_{r-2}}, e^{\varLambda_{0} -\varLambda_{r-1}})\), and

Then, the character of the \(\widehat{\mathfrak{sl}}_{r}\) Demazure module \(V_{-N \omega_{1}}(\varLambda_{0})\) is given by

$$\mathrm {ch}\bigl (V_{-N \omega_1}(\varLambda_0)\bigr ) = e^{\varLambda_0 - d_r(N)\delta} \cdot H_N^{(r)}(\mathbf{z} , q) \in \mathbf{N}\bigl[\mathbf{z} , q^{-1}\bigr] . $$

Proof

Note that \(t_{-\omega_{1}} \,{=}\, s_{1} s_{2} \ldots s_{r-1} \sigma^{r-1}\) with σ being the automorphism of the Dynkin diagram of \(\widehat{\mathfrak{sl}}_{r}\) which sends 0 to 1 (see, e.g., [14, Sect. 2]). Furthermore, following [19, Sect. 2], we have \([N] = t_{-N\omega_{1}} \cdot \eta_{N}\) with the convention \(\sigma \eta_{N} = \eta_{N}\). Here, \([N]=(N,0,\ldots,0) \in \mathbf{N}^{r}\) denotes the one-row Young diagram, and \(\eta_{N}\) the smallest composition of degree N. That is, when N=kr+i with 0≤i<r, we have \(\eta_{N} = ((k)^{r-i},(k+1)^{i}) \in \mathbf{N}^{r}\). Then, by [19, Theorems 6 and 7] and [6, Eq. (3.4)] we have

$$\mathrm {ch}\bigl (V_{-N \omega_1}(\varLambda_0)\bigr ) = e^{\varLambda_0 - d_r(N)\delta} \cdot P_{[N]}(\mathbf{z};q,0) = e^{\varLambda_0 - d_r(N)\delta} \cdot H_N^{(r)}(\mathbf{z} , q) , $$

where P [N](z;q,0) denotes the specialized symmetric Macdonald polynomial (see [13, Chap. VI]) associated to the partition [N]. □

Example 5.5

For r=2, consider the specialization of the Rogers–Szegő polynomial \(H_{4}^{(2)}(z,z^{-1},q)\) and the Demazure module \(V_{-4 \omega_{1}}(\varLambda_{0})\). Via Demazure’s character formula we obtain

Furthermore, by definition,

Hence, with \(\mathbf{z} = (e^{\varLambda_{1} - \varLambda_{0}}, e^{\varLambda_{0} - \varLambda_{1}})\), \(q = e^{\delta}\), and \(d_{2}(4) = 4\), we have the equality

$$\mathrm {ch}\bigl (V_{-4 \omega_1}(\varLambda_0)\bigr ) = e^{\varLambda_0 - d_2 (4)\delta} \cdot H_4^{(2)}\bigl(z,z^{-1},q\bigr), $$

as claimed.

The coefficient l in \(e^{l\delta}\) is commonly referred to as the degree of a monomial in \(\mathrm {ch}(V_{-N \omega_{1}}(\varLambda_{0}))\). When d is a scaling element, the polynomial \(\mathrm {ch}(V_{-N \omega_{1}}(\varLambda_{0})) |_{\mathbf{C}d} \in \mathbf{N}[e^{\delta}]\) is called the basic specialization of the Demazure character (see [10, Sects. 1.5, 10.8, and 12.2] for the terminology in the context of integrable highest weight representations of affine Kac–Moody algebras). Based on the relation described in Lemma 5.4, we summarize our main results, Theorem 3.5 and Theorem 4.1, in this language.

Corollary 5.6

For r ≥ 2 and \(N \in \mathbf{N}\), consider the \(\widehat{\mathfrak{sl}}_{r}\) Demazure module \(V_{-N \omega_{1}}(\varLambda_{0})\). Let \(\varGamma_{N,r}\) be a random variable with probability generating function \(\mathrm {E}[e^{\varGamma_{N,r}\delta}] = r^{-N} \cdot \mathrm {ch}(V_{-N \omega_{1}}(\varLambda_{0})) |_{\mathbf{C}d} \in \mathbf{Q}[e^{\delta}]\). Then, for 0 ≤ i < r with \(i \equiv N \bmod r\), we have

(5.5)
(5.6)

For fixed r and N→∞, the distribution of the random variable

(5.7)

converges weakly to the standard normal distribution \(\mathcal{N}(0,1)\). Furthermore, let \(\mathfrak{S}_{N}\) be the symmetric group on N elements, and for a permutation \(\pi \in \mathfrak{S}_{N}\), denote by \(\operatorname {inv}(\pi)\) its number of inversions and by \(\operatorname {des}(\pi)\) the cardinality of its descent set D(π). Then, with \(a_{N,r} = \frac{N(N-1)}{2} - \frac{(N-i)(N+i-r)}{2r}\),

(5.8)

For fixed N and r→∞, we have

(5.9)

It is interesting to continue the investigation of the basic specialization of Demazure characters including Kac–Moody algebra types different from A. The starting point should be Ion’s article [9], which is a generalization of Sanderson’s work [19], and one should also consider [11]. In view of (5.9) and [9, 11], we propose the following conjecture:

Conjecture 5.7

Let X = A, B, or D, and \(r \in \mathbf{N}\). Consider the \(\widehat{X}_{r}\) Demazure module \(V_{-N \omega_{1}}(\varLambda_{0})\), and let \(d_{r}^{X}(N)\) be the maximal occurring degree. For fixed N and r→∞, it holds that

$$\frac{\# W(X_N)}{\dim (V(\omega_1)^{\otimes N} )} \cdot \frac{\mathrm {ch}(V_{-N \omega_1}(\varLambda_0))|_{\mathbf{C}d}}{e^{\varLambda_0 - d_r^X (N) \delta}} \to \sum _{w \in W(X_N)} e^{l(w)\delta} . $$

Here, W(X N ) is the Weyl group of finite type X N , l:W(X N )→N is the length function, and V(ω 1) denotes the standard representation of the finite-dimensional Lie algebra of type X r .

Note that (5.9) proves the case X=A. It is interesting to investigate an analogue of (5.8) for the types B and D.

5.5 Descent-inversion statistics

Stanley [20] derived a generating function identity for the joint probability generating function of the descent-inversion statistic on the symmetric group \(\mathfrak{S}_{N}\):

$$\sum_{N=0}^\infty \sum_{\pi \in \mathfrak{S}_N} t^{\operatorname {des}(\pi)}q^{\operatorname {inv}(\pi)} \frac{u^N}{[N]_q !} = \frac{1-t}{\mathrm{Exp}_q (u(t-1))-t} , $$

where \(\mathrm{Exp}_{q} (x) = \sum q^{\binom{n}{2}} x^{n} / [n]_{q} ! \). Motivated by Theorem 4.1, we define a weighted joint probability generating function of the descent-inversion statistic on the symmetric group \(\mathfrak{S}_{N}\) by

$$ G_N^{(r)}(q,t) = \sum_{\pi \in \mathfrak{S}_N} \binom{N + r-1-\operatorname {des}(\pi)}{N} t^{\operatorname {des}(\pi)} q^{\operatorname {inv}(\pi)} . $$
(5.10)

Note that \(G_{N}^{(r)}(q,1) = G_{N}^{(r)}(q)\). It is interesting to investigate the generating function

$$ \sum_{N=0}^\infty G_N^{(r)}(q,t) \frac{u^N}{[N]_q !} , $$
(5.11)

possibly with refinements depending on r and t, and the t-deformed generalized Galois number \(G_{N}^{(r)}(q,t)\) itself.