SOME CONNECTIONS BETWEEN PERMUTATION CYCLES AND TOUCHARD POLYNOMIALS AND BETWEEN PERMUTATIONS THAT FIX A SET AND COVERS OF MULTISETS

We present a new proof of a fundamental result concerning cycles of random permutations which gives some intuition for the connection between Touchard polynomials and the Poisson distribution. We also introduce a rather novel permutation statistic and study its distribution. This quantity, indexed by m, is the number of sets of size m fixed by the permutation. This leads to a new and simpler derivation of the exponential generating function for the number of covers of certain multisets.


Introduction and Statement of Results
In this paper, we present a new and perhaps simpler proof of a fundamental result concerning cycles of random permutations which gives some intuition for the connection between Touchard polynomials and the Poisson distribution. We also introduce a rather novel permutation statistic and study its distribution. This quantity, indexed by $m$, is the number of sets of size $m$ fixed by the permutation. This leads to a new and simpler derivation of the exponential generating function for the number of covers of certain multisets.
We begin by recalling some basic facts concerning Bell numbers and Touchard polynomials, and their connection to Poisson distributions. The facts noted below without proof can be found in many books on combinatorics; see, for example, [12], [14]. The Bell number $B_n$ denotes the number of partitions of a set of $n$ distinct elements. Elementary combinatorial reasoning yields the recursive formula
(1.1) $B_{n+1} = \sum_{k=0}^{n} \binom{n}{k} B_k$, with $B_0 = 1$.
Let
(1.2) $E_B(x) = \sum_{n=0}^{\infty} \frac{B_n}{n!}x^n$
denote the exponential moment generating function of $\{B_n\}_{n=0}^{\infty}$. Using (1.1) it is easy to show that $E_B'(x) = e^x E_B(x)$, from which it follows that
(1.3) $E_B(x) = e^{e^x - 1}$.
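Both the recursion (1.1) and the closed form (1.3) are easy to check computationally. The following sketch (the helper names `bell_numbers` and `egf_coeffs` are ours, not from the references) computes $B_n$ from (1.1) and compares it with $n!$ times the $n$th power-series coefficient of $e^{e^x-1}$, computed in exact rational arithmetic:

```python
from fractions import Fraction
from math import comb, factorial

def bell_numbers(n_max):
    # Bell numbers via the recursion (1.1): B_{n+1} = sum_k C(n,k) B_k
    B = [1]
    for n in range(n_max):
        B.append(sum(comb(n, k) * B[k] for k in range(n + 1)))
    return B

def egf_coeffs(n_max):
    # Exact power-series coefficients of e^{e^x - 1} up to order n_max
    N = n_max + 1
    # coefficients of u = e^x - 1: u_k = 1/k! for k >= 1, u_0 = 0
    u = [Fraction(0)] + [Fraction(1, factorial(k)) for k in range(1, N)]
    result = [Fraction(0)] * N
    result[0] = Fraction(1)
    power = [Fraction(1)] + [Fraction(0)] * (N - 1)  # u^0
    for j in range(1, N):
        # power <- power * u (truncated product); add u^j / j! to the series
        power = [sum(power[i] * u[k - i] for i in range(k + 1)) for k in range(N)]
        for k in range(N):
            result[k] += power[k] / factorial(j)
    return result

B = bell_numbers(6)
coeffs = egf_coeffs(6)
assert all(B[n] == coeffs[n] * factorial(n) for n in range(7))
```

The agreement of the first several coefficients illustrates (1.3); it is a numerical illustration, of course, not a proof.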
A random variable $X$ has the Poisson distribution Pois$(\lambda)$, $\lambda > 0$, if
(1.4) $P(X = n) = e^{-\lambda}\frac{\lambda^n}{n!}$, $n = 0, 1, 2, \dots$.
Let
(1.5) $M_\lambda(t) = Ee^{tX}$
denote the moment generating function of $X$, and let
(1.6) $\mu_{n;\lambda} = EX^n$
denote the $n$th moment of $X$. A direct calculation gives
(1.7) $M_\lambda(t) = e^{\lambda(e^t - 1)}$.
In particular, comparing (1.7) with (1.3), we have $M_1(t) = E_B(t)$.
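The closed form (1.7) can be checked against a direct term-by-term summation of the defining series (a minimal sketch; the function name `poisson_mgf_series` is ours):

```python
from math import exp, isclose

def poisson_mgf_series(lam, t, terms=100):
    # E e^{tX} for X ~ Pois(lam), summed term by term from the definition (1.4)
    total, p = 0.0, exp(-lam)  # p = P(X = k), starting at k = 0
    for k in range(terms):
        total += exp(t * k) * p
        p *= lam / (k + 1)
    return total

# Compare with the closed form (1.7): M_lam(t) = e^{lam(e^t - 1)}
for lam in (0.5, 1.0, 2.0):
    for t in (0.0, 0.3, 1.0):
        assert isclose(poisson_mgf_series(lam, t), exp(lam * (exp(t) - 1)), rel_tol=1e-9)
```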
The Stirling number of the second kind ${n \brace k}$ denotes the number of partitions of a set of $n$ distinct elements into $k$ nonempty sets. We will need the formula
(1.8) $x^n = \sum_{j=0}^{n} {n \brace j}(x)_j$,
where $(x)_j = x(x-1)\cdots(x-j+1)$ is the falling factorial, and one defines $(x)_0 = 1$. It is enough to prove this for positive integers $x$, in which case $x^n$ is the number of functions $f : [n] \to X$, where $|X| = x$. We now count such functions in another way. For each $j \in [x]$, consider all those functions whose range contains exactly $j$ elements. The inverse images of these elements give a partition of $[n]$ into $j$ nonempty sets, $\{B_i\}_{i=1}^{j}$. We can choose the particular $j$ elements in $\binom{x}{j}$ ways, and we can order the sets $\{B_i\}_{i=1}^{j}$ in $j!$ ways. Thus there are ${n \brace j}\binom{x}{j}j! = {n \brace j}(x)_j$ such functions.

Now using (1.8) and the fact that $(k)_j = 0$ for $j > k$, we can write the $n$th moment $\mu_{n;\lambda}$ of a Pois$(\lambda)$-distributed random variable as
(1.9) $\mu_{n;\lambda} = \sum_{k=0}^{\infty} k^n e^{-\lambda}\frac{\lambda^k}{k!} = \sum_{k=0}^{\infty} e^{-\lambda}\frac{\lambda^k}{k!}\sum_{j=0}^{n}{n \brace j}(k)_j = \sum_{j=0}^{n}{n \brace j}\lambda^j \sum_{k=j}^{\infty} e^{-\lambda}\frac{\lambda^{k-j}}{(k-j)!} = \sum_{j=0}^{n}{n \brace j}\lambda^j$.
The Touchard polynomials $T_n(x)$, $n \ge 0$, are defined by
$T_n(x) = \sum_{k=0}^{n}{n \brace k}x^k$.
Thus, (1.9) gives the formula
(1.10) $\mu_{n;\lambda} = T_n(\lambda)$.
Since $T_n(1) = \sum_{k=0}^{n}{n \brace k} = B_n$, we conclude from (1.10) that
(1.11) $\mu_{n;1} = B_n$.

For $\sigma \in S_n$, let $C_m^{(n)}(\sigma)$ denote the number of cycles of length $m$ in $\sigma$, and let $P_n$ denote the uniform probability measure on $S_n$. More generally, we consider the Ewens sampling distributions, $P_{n;\theta}$, $\theta > 0$, on $S_n$, defined as follows. Let $N^{(n)}(\sigma)$ denote the number of cycles in the permutation $\sigma \in S_n$, and let $s(n,k) = |\{\sigma \in S_n : N^{(n)}(\sigma) = k\}|$ denote the number of permutations in $S_n$ with $k$ cycles. It is known that the polynomial $\sum_{k=1}^{n} s(n,k)\theta^k$ is equal to the rising factorial $\theta^{(n)}$, defined by
(1.12) $\theta^{(n)} = \theta(\theta+1)\cdots(\theta+n-1)$.
For $\theta > 0$, define the probability measure $P_{n;\theta}$ on $S_n$ by
(1.13) $P_{n;\theta}(\sigma) = \frac{\theta^{N^{(n)}(\sigma)}}{\theta^{(n)}}$.
Of course, $P_{n;1}$ reduces to the uniform measure $P_n$. The following theorem is well known.
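Identity (1.8) and the moment formula (1.10) can be spot-checked numerically (the helper names `stirling2`, `touchard`, and `poisson_moment` are ours):

```python
from math import exp

def stirling2(n, k):
    # {n brace k} via the recurrence S(n,k) = k*S(n-1,k) + S(n-1,k-1)
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def falling(x, j):
    out = 1
    for i in range(j):
        out *= x - i
    return out

# (1.8): x^n = sum_j {n brace j} (x)_j, checked for small integers x
for x in range(1, 6):
    for n in range(6):
        assert x**n == sum(stirling2(n, j) * falling(x, j) for j in range(n + 1))

def touchard(n, lam):
    # T_n(lam) = sum_k {n brace k} lam^k
    return sum(stirling2(n, k) * lam**k for k in range(n + 1))

def poisson_moment(n, lam, terms=100):
    # E X^n for X ~ Pois(lam), summed directly from the definition
    total, p = 0.0, exp(-lam)
    for k in range(terms):
        total += k**n * p
        p *= lam / (k + 1)
    return total

# (1.10) and (1.11): the n-th Poisson moment is T_n(lam); T_n(1) is the Bell number B_n
assert [touchard(n, 1) for n in range(7)] == [1, 1, 2, 5, 15, 52, 203]
for n in range(5):
    assert abs(poisson_moment(n, 2.0) - touchard(n, 2.0)) < 1e-8
```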
Theorem C. Under $P_{n;\theta}$, the random vector $(C_1^{(n)}, \dots, C_m^{(n)})$ converges weakly as $n \to \infty$ to $(Z_1^\theta, \dots, Z_m^\theta)$, where the random variables $\{Z_j^\theta\}_{j=1}^{\infty}$ are independent and $Z_j^\theta$ has the Pois$(\theta/j)$ distribution; that is,
(1.14) $\lim_{n\to\infty} P_{n;\theta}\left(C_1^{(n)} = j_1, \dots, C_m^{(n)} = j_m\right) = \prod_{i=1}^{m} P(Z_i^\theta = j_i)$,
for all $m \ge 1$ and $j_1, \dots, j_m \in \mathbb{Z}^+$.

Theorem C can be proved using moment generating functions; see, for example, [1], [11]. We will use the method of moments to give a new and perhaps simpler proof of Theorem C. More significantly, this proof will give intuition for (1.10), or equivalently for (1.11); that is, for the connection between the moments of Poisson random variables and Touchard polynomials.
The key to this connection comes from representing the random variables as sums of indicator random variables. We note that, at least in the case of uniformly distributed permutations, one can ostensibly find a different proof by the moment method in the literature [2], but that proof seems more complicated than ours and, more importantly, does not provide the above-noted intuition. The paper [2] quotes a formula from [13] for factorial moments, and then states that with this formula the proof follows by the moment method. However, the derivation of that formula in [13] is not explicit and seems more complicated than our proof.
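Theorem C, in the uniform case $\theta = 1$, is easy to probe by simulation. The sketch below (the helper name `cycle_type_counts` is ours) samples uniform permutations of $[50]$ and compares the empirical distribution of $C_1^{(n)}$ with Pois$(1)$; the tolerance $0.02$ is a loose bound on the sampling error:

```python
import random
from math import exp, factorial

def cycle_type_counts(perm):
    # map: cycle length -> number of cycles of that length, for perm given as a list
    n = len(perm)
    seen = [False] * n
    counts = {}
    for i in range(n):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            counts[length] = counts.get(length, 0) + 1
    return counts

random.seed(0)
n, trials = 50, 20000
freq = [0, 0, 0]
for _ in range(trials):
    perm = list(range(n))
    random.shuffle(perm)  # uniform random permutation
    c1 = cycle_type_counts(perm).get(1, 0)  # number of fixed points
    if c1 < 3:
        freq[c1] += 1

# Compare with Pois(1): P(Z = j) = e^{-1}/j!
for j in range(3):
    assert abs(freq[j] / trials - exp(-1) / factorial(j)) < 0.02
```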
We now consider a permutation statistic that has not been studied much. (Indeed, it was only after completing the first version of this paper that we were directed to any papers on this subject.) For $\sigma \in S_n$ and $A \subset [n]$, say that $\sigma$ fixes the set $A$ if $\sigma(A) = A$. Let $F_m^{(n)}(\sigma)$ denote the number of sets of cardinality $m$ that are fixed by $\sigma$. (Note that $F_1^{(n)}(\sigma)$ is the number of fixed points of $\sigma$.) A set is fixed by $\sigma$ if and only if it is a union of cycles of $\sigma$; thus a little thought reveals that
(1.15) $F_m^{(n)} = \sum_{(a_1,\dots,a_m):\,\sum_{j=1}^{m} ja_j = m}\ \prod_{j=1}^{m}\binom{C_j^{(n)}}{a_j}$.
For example, if $\sigma \in S_9$ is written in cycle notation as $\sigma = (379)(24)(16)(5)(8)$, then $F_1^{(9)}(\sigma) = 2$, $F_2^{(9)}(\sigma) = \binom{2}{1} + \binom{2}{2} = 3$ and $F_3^{(9)}(\sigma) = \binom{1}{1} + \binom{2}{1}\binom{2}{1} = 5$.

For $k, m \in \mathbb{N}$, consider the multiset consisting of $m$ copies of the set $[k]$. A collection $\{\Gamma_l\}_{l=1}^{r}$ such that each $\Gamma_l$ is a nonempty subset of $[k]$, and such that each $j \in [k]$ appears in exactly $m$ from among the $r$ sets $\{\Gamma_l\}_{l=1}^{r}$, is called an $m$-cover of $[k]$ of order $r$. Denote the total number of $m$-covers of $[k]$, regardless of order, by $v_{k;m}$. Note that when $m = 1$, we have $v_{k;1} = B_k$, the $k$th Bell number, denoting the number of partitions of a set of $k$ elements. Also, it is very easy to see that $v_{1;m} = 1$ and $v_{2;m} = m + 1$.
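The small values $v_{1;m} = 1$, $v_{2;m} = m+1$ and $v_{k;1} = B_k$ can be confirmed by brute force, encoding an $m$-cover as a multiplicity vector over the nonempty subsets of $[k]$ (the function name `num_m_covers` is ours):

```python
from itertools import product

def num_m_covers(k, m):
    # v_{k;m}: multisets of nonempty subsets of [k] covering each element exactly m times
    subsets = list(range(1, 2 ** k))  # bitmask encoding of the nonempty subsets
    count = 0
    for mult in product(range(m + 1), repeat=len(subsets)):
        cover = [0] * k
        for s, c in zip(subsets, mult):
            for i in range(k):
                if s >> i & 1:
                    cover[i] += c
        if all(c == m for c in cover):
            count += 1
    return count

assert all(num_m_covers(1, m) == 1 for m in range(1, 5))          # v_{1;m} = 1
assert all(num_m_covers(2, m) == m + 1 for m in range(1, 5))      # v_{2;m} = m + 1
assert [num_m_covers(k, 1) for k in (1, 2, 3, 4)] == [1, 2, 5, 15]  # v_{k;1} = B_k
```

A $1$-cover is precisely a partition of $[k]$, since its blocks must be disjoint; this is why the last line recovers the Bell numbers.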

By calculating directly the moments of $F_m^{(n)}$, we will prove the following theorem.

Theorem 1. Under the uniform measure $P_n$, the random variable $F_m^{(n)}$ converges weakly as $n \to \infty$ to
$F_m := \sum_{(a_1,\dots,a_m):\,\sum_{j=1}^{m} ja_j = m}\ \prod_{j=1}^{m}\binom{Z_j}{a_j}$,
where the random variables $\{Z_j\}_{j=1}^{\infty}$ are independent and $Z_j$ has the Pois$(1/j)$ distribution. Furthermore, the moments of $F_m$ are given by
$EF_m^k = v_{k;m}$, for all $k \in \mathbb{N}$.
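In fact, the counting in section 3 shows that for $n \ge km$ the $k$th moment of $F_m^{(n)}$ under $P_n$ equals $v_{k;m}$ exactly, not merely asymptotically. The following brute-force sketch (the helper names are ours) checks this over all of $S_6$ for $m = 2$:

```python
from itertools import permutations, combinations, product

def fixed_set_count(perm, m):
    # F_m(sigma): number of m-element subsets A with sigma(A) = A, by brute force
    n = len(perm)
    return sum(1 for A in combinations(range(n), m)
               if {perm[i] for i in A} == set(A))

def num_m_covers(k, m):
    # v_{k;m}: multisets of nonempty subsets of [k] covering each element exactly m times
    subsets = list(range(1, 2 ** k))
    count = 0
    for mult in product(range(m + 1), repeat=len(subsets)):
        cover = [0] * k
        for s, c in zip(subsets, mult):
            for i in range(k):
                if s >> i & 1:
                    cover[i] += c
        if all(c == m for c in cover):
            count += 1
    return count

# Over all of S_6 (n = 6 >= km for m = 2 and k <= 3), the k-th moment of F_2
# agrees exactly with v_{k;2}.
n, m = 6, 2
values = [fixed_set_count(p, m) for p in permutations(range(n))]
N = len(values)  # 720
for k in (1, 2, 3):
    assert sum(v ** k for v in values) == num_m_covers(k, m) * N
```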
Remark. It is natural to suspect that $F_m$ converges weakly to 0 as $m \to \infty$; that is, $\lim_{m\to\infty} P(F_m \ge 1) = 0$. This is in fact a hard problem. In [9] it was shown that $\lim_{m\to\infty} P(F_m \ge 1) = 0$; thus indeed, $F_m$ converges weakly to 0 as $m \to \infty$. A lower bound on $P(F_m \ge 1)$ of the form $A\frac{\log m}{m}$ was obtained in [6]. These results were dramatically improved in [10], where it was shown that $P(F_m \ge 1) = m^{-\delta + o(1)}$ as $m \to \infty$, where $\delta = 1 - \frac{1 + \log\log 2}{\log 2} \approx 0.08607$. And very recently, in [8], this latter bound has been refined to $P(F_m \ge 1) \asymp m^{-\delta}(1 + \log m)^{-3/2}$.

Let
(1.16) $V_m(x) = 1 + \sum_{k=1}^{\infty}\frac{v_{k;m}}{k!}x^k$
denote the exponential generating function of the sequence $\{v_{k;m}\}_{k=1}^{\infty}$. Of course, by Theorem 1, $V_m$ is also the moment generating function of the random variable $F_m$: $V_m(x) = Ee^{xF_m}$. Using (1.16) and Theorem 1, we will give an almost immediate proof of the following representation theorem for $V_m(x)$. We use the notation $[z^m]P(z) = a_m$, where $P(z) = \sum_{j=0}^{\infty} a_j z^j$.

Theorem 2.
(1.17) $V_m(x) = e^{-\sum_{j=1}^{m}\frac{1}{j}}\sum_{k_1,\dots,k_m=0}^{\infty}\left(\prod_{j=1}^{m}\frac{(1/j)^{k_j}}{k_j!}\right)\exp\left(x\,[z^m]\prod_{j=1}^{m}(1+z^j)^{k_j}\right)$,
where $[z^m]\prod_{j=1}^{m}(1+z^j)^{k_j} = \sum_{(a_1,\dots,a_m):\,\sum_{j=1}^{m} ja_j = m}\ \prod_{j=1}^{m}\binom{k_j}{a_j}$.

Remark. When $m = 2, 3$, the above formula reduces to
$V_2(x) = e^{\frac{1}{2}(e^x-1)-1}\sum_{k=0}^{\infty}\frac{e^{\binom{k}{2}x}}{k!}$ and $V_3(x) = e^{\frac{1}{3}(e^x-1)-\frac{3}{2}}\sum_{k=0}^{\infty}\frac{e^{\binom{k}{3}x + \frac{1}{2}e^{kx}}}{k!}$.
The formula for $m = 2$ was proved by Comtet [4] and the formula for $m = 3$ was proved by Bender [3]. The case of general $m$ was proved by Devitt and Jackson [5]. They also prove that there exists a number $c$ such that the extraction of the coefficient $v_{k;m}$ from the exponential generating function $V_m(x)$ can be done in no more than $ckm\log k$ arithmetic operations.
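The $m = 2$ formula can be checked numerically at points $x \le 0$, where both the expectation $Ee^{xF_2}$ and the series converge. Here $F_2 = \binom{Z_1}{2} + Z_2$ with $Z_1 \sim$ Pois$(1)$ and $Z_2 \sim$ Pois$(1/2)$, per Theorem 1; the function names below are ours:

```python
from math import exp, comb, isclose

def mgf_F2_direct(x, terms=60):
    # E e^{x F_2} with F_2 = C(Z1,2) + Z2, Z1 ~ Pois(1), Z2 ~ Pois(1/2),
    # summed directly over the joint distribution (take x <= 0 for convergence)
    total = 0.0
    pa = exp(-1.0)  # P(Z1 = a)
    for a in range(terms):
        pb = exp(-0.5)  # P(Z2 = b)
        for b in range(terms):
            total += pa * pb * exp(x * (comb(a, 2) + b))
            pb *= 0.5 / (b + 1)
        pa *= 1.0 / (a + 1)
    return total

def V2_formula(x, terms=60):
    # e^{(e^x - 1)/2 - 1} * sum_k e^{C(k,2) x} / k!
    s, t = 0.0, 1.0  # t = 1/k!
    for k in range(terms):
        s += exp(comb(k, 2) * x) * t
        t /= k + 1
    return exp((exp(x) - 1) / 2 - 1) * s

for x in (-2.0, -1.0, -0.25):
    assert isclose(mgf_F2_direct(x), V2_formula(x), rel_tol=1e-9)
```

Summing over $Z_2$ in closed form and leaving the sum over $Z_1$ as a series is exactly how the general formula (1.17) collapses to the displayed $m = 2$ expression.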
In section 2 we will give our new proof of Theorem C via the method of moments. In section 3 we prove Theorems 1 and 2.

A proof of Theorem C via the method of moments
If a sequence of nonnegative random variables $\{X_n\}_{n=1}^{\infty}$ satisfies $\sup_{n\ge1} EX_n < \infty$, then the sequence is tight, that is, pre-compact with respect to weak convergence. Let $X$ be distributed as one of the accumulation points. If for some $k \in \mathbb{N}$, $\lim_{n\to\infty} EX_n^k$ exists and equals $\mu_k$, and $\sup_{n\ge1} EX_n^{k+1} < \infty$, then $EX^k = \mu_k$. If
(2.1) $\lim_{n\to\infty} EX_n^k = \mu_k < \infty$, for all $k \in \mathbb{N}$,
and the moments do not grow too fast, for example if
(2.2) $\mu_k \le C^k k!$, for some $C > 0$,
then the sequence $\{\mu_k\}_{k=1}^{\infty}$ uniquely characterizes the distribution [7]. We conclude then that if a sequence of nonnegative random variables $\{X_n\}_{n=1}^{\infty}$ satisfies (2.1) and (2.2), then the sequence is weakly convergent to a random variable $X$ satisfying $EX^k = \mu_k$.

An extremely crude argument shows that the Bell numbers satisfy
(2.3) $B_k \le k!$, $k \ge 1$.
By (1.10), the $k$th moment $\mu_{k;\theta/m}$ of a Pois$(\theta/m)$-distributed random variable is equal to $T_k(\theta/m)$, which is bounded from above by $T_k(\theta)$, for all $m \ge 1$, and $T_k(\theta) \le B_k\max(1, \theta^k)$. Thus, in light of (2.3) and the previous paragraph, if we prove that
(2.4) $\lim_{n\to\infty} E_{n;\theta}\left(C_m^{(n)}\right)^k = T_k(\theta/m)$, for all $k, m \in \mathbb{N}$,
where $E_{n;\theta}$ denotes the expectation with respect to $P_{n;\theta}$, then we will have proved that $C_m^{(n)}$ under $P_{n;\theta}$ converges weakly to $Z_m^\theta$, for all $m \in \mathbb{N}$. And if we then prove that
(2.5) $\lim_{n\to\infty} E_{n;\theta}\prod_{i=1}^{m}\left(C_{j_i}^{(n)}\right)^{k_i} = \prod_{i=1}^{m} E\left(Z_{j_i}^\theta\right)^{k_i}$, for all $m \in \mathbb{N}$, distinct $j_1,\dots,j_m \in \mathbb{N}$ and $k_1,\dots,k_m \in \mathbb{N}$,
then we will have completed the proof of Theorem C.
We first prove (2.4). In fact, we will first prove (2.4) in the case of the uniform measure, $P_n = P_{n;1}$. Once we have this, the case of general $\theta$ will follow after a short explanation. Assume that $n \ge mk$. For $D \subset [n]$ with $|D| = m$, let $1_D(\sigma)$ be equal to 1 or 0 according to whether or not $\sigma \in S_n$ possesses an $m$-cycle consisting of the elements of $D$. Then we have
(2.6) $C_m^{(n)} = \sum_{D \subset [n]:\,|D| = m} 1_D$,
and
(2.7) $E_n\left(C_m^{(n)}\right)^k = \sum_{D_1,\dots,D_k} E_n\prod_{j=1}^{k} 1_{D_j}$,
where the sum is over all $k$-tuples $(D_1,\dots,D_k)$ of $m$-element subsets of $[n]$. The expectation $E_n\prod_{j=1}^{k}1_{D_j}$ vanishes unless the distinct sets among $\{D_j\}_{j=1}^{k}$, say $\{A_i\}_{i=1}^{l}$ with $l \in [k]$, are pairwise disjoint, in which case
(2.8) $E_n\prod_{j=1}^{k}1_{D_j} = \frac{((m-1)!)^l(n-ml)!}{n!}$.
(Here we have used the assumption that $n \ge mk$, since otherwise $n - ml$ would be negative for certain $l \in [k]$.) The number of ways to construct a collection of $l$ disjoint sets $\{A_i\}_{i=1}^{l}$, each of which consists of $m$ elements from $[n]$, is $\frac{n!}{(m!)^l(n-lm)!\,l!}$. Given the $\{A_i\}_{i=1}^{l}$, the number of ways to choose the sets $\{D_j\}_{j=1}^{k}$ so that $\{D_j\}_{j=1}^{k} = \{A_i\}_{i=1}^{l}$ is equal to the Stirling number ${k \brace l}$, the number of ways to partition a set of size $k$ into $l$ nonempty parts, multiplied by $l!$, since the labeling must be taken into account. From these facts along with (2.7) and (2.8), we conclude that for $n \ge mk$,
(2.9) $E_n\left(C_m^{(n)}\right)^k = \sum_{l=1}^{k}{k \brace l}\,l!\;\frac{n!}{(m!)^l(n-lm)!\,l!}\;\frac{((m-1)!)^l(n-lm)!}{n!} = \sum_{l=1}^{k}{k \brace l}\left(\frac{1}{m}\right)^l = T_k(1/m)$,
proving (2.4) in the case $\theta = 1$.
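Since (2.9) holds exactly for every $n \ge mk$, it can be checked literally on a small symmetric group. The sketch below (helper names ours) averages $(C_m^{(n)})^k$ over all of $S_6$ in exact rational arithmetic and compares with $T_k(1/m)$:

```python
from itertools import permutations
from fractions import Fraction

def stirling2(n, k):
    # {n brace k} via S(n,k) = k*S(n-1,k) + S(n-1,k-1)
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def cycle_count(perm, m):
    # C_m(sigma): number of cycles of length m in the permutation
    n, seen, count = len(perm), [False] * len(perm), 0
    for i in range(n):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            if length == m:
                count += 1
    return count

# (2.9): for n >= mk, E_n (C_m^{(n)})^k = sum_l {k brace l} (1/m)^l = T_k(1/m), exactly
n = 6
perms = list(permutations(range(n)))
for m, k in ((1, 3), (2, 3), (3, 2), (6, 1)):
    assert n >= m * k
    lhs = Fraction(sum(cycle_count(p, m) ** k for p in perms), len(perms))
    rhs = sum(stirling2(k, l) * Fraction(1, m) ** l for l in range(k + 1))
    assert lhs == rhs
```

For instance, for $m = 2$, $k = 3$ both sides equal $11/8$.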
For the case of general $\theta$, we note that the only change that must be made in the above proof is in (2.8). Recalling that $s(n,k)$ denotes the number of permutations in $S_n$ with $k$ cycles, we have
(2.10) $E_{n;\theta}\prod_{j=1}^{k}1_{D_j} = ((m-1)!)^l\,\theta^l\,\frac{\sum_{i=1}^{n-lm}s(n-lm,i)\theta^i}{\theta^{(n)}} = ((m-1)!)^l\,\theta^l\,\frac{\theta^{(n-lm)}}{\theta^{(n)}}$,
whenever the distinct sets among $\{D_j\}_{j=1}^{k}$, $l$ in number, are pairwise disjoint. Thus, instead of (2.9), we have
(2.11) $E_{n;\theta}\left(C_m^{(n)}\right)^k = \sum_{l=1}^{k}{k \brace l}\left(\frac{\theta}{m}\right)^l\frac{n!\,\theta^{(n-lm)}}{(n-lm)!\,\theta^{(n)}} \to \sum_{l=1}^{k}{k \brace l}\left(\frac{\theta}{m}\right)^l = T_k(\theta/m)$, as $n \to \infty$,
since $\frac{n!\,\theta^{(n-lm)}}{(n-lm)!\,\theta^{(n)}} = \prod_{i=0}^{lm-1}\frac{n-i}{\theta+n-lm+i} \to 1$ as $n \to \infty$.
We now turn to (2.5). The method of proof is simply the natural extension of the one used to prove (2.4); thus, since the notation is cumbersome, we will be content to illustrate the method by proving (2.5) in the case $m = 2$, $\theta = 1$. Let $j_1 \neq j_2$ and $k_1, k_2 \in \mathbb{N}$, and assume that $n \ge j_1k_1 + j_2k_2$. Expanding each cycle count as in (2.6) and (2.7), we have
$E_n\left(C_{j_1}^{(n)}\right)^{k_1}\left(C_{j_2}^{(n)}\right)^{k_2} = \sum E_n\prod_{i=1}^{k_1}1_{D_i}\prod_{i=1}^{k_2}1_{D'_i}$,
where the sum is over all $k_1$-tuples of $j_1$-element subsets $D_i \subset [n]$ and all $k_2$-tuples of $j_2$-element subsets $D'_i \subset [n]$. The expectation $E_n\prod_{i=1}^{k_1}1_{D_i}\prod_{i=1}^{k_2}1_{D'_i}$ is nonzero if and only if for some $l_1 \in [k_1]$ and some $l_2 \in [k_2]$, the distinct sets among the $\{D_i\}$ number $l_1$, the distinct sets among the $\{D'_i\}$ number $l_2$, and all $l_1 + l_2$ of these sets are pairwise disjoint; in that case the expectation equals $((j_1-1)!)^{l_1}((j_2-1)!)^{l_2}(n - l_1j_1 - l_2j_2)!/n!$. The number of ways to construct disjoint sets $\{A_i\}_{i=1}^{l_1}$ and $\{A'_i\}_{i=1}^{l_2}$, with $|A_i| = j_1$ and $|A'_i| = j_2$, is $\frac{n!}{(j_1!)^{l_1}(j_2!)^{l_2}(n-l_1j_1-l_2j_2)!\,l_1!\,l_2!}$, and the number of ways to choose the sets $\{D_i\}$ and $\{D'_i\}$ compatible with them is ${k_1 \brace l_1}l_1!\,{k_2 \brace l_2}l_2!$. Exactly as in (2.9), it follows that for $n \ge j_1k_1 + j_2k_2$,
$E_n\left(C_{j_1}^{(n)}\right)^{k_1}\left(C_{j_2}^{(n)}\right)^{k_2} = \sum_{l_1=1}^{k_1}\sum_{l_2=1}^{k_2}{k_1 \brace l_1}{k_2 \brace l_2}\left(\frac{1}{j_1}\right)^{l_1}\left(\frac{1}{j_2}\right)^{l_2} = T_{k_1}(1/j_1)\,T_{k_2}(1/j_2) = E\left(Z_{j_1}^1\right)^{k_1}E\left(Z_{j_2}^1\right)^{k_2}$.
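The same counting shows that the mixed moments factorize exactly once $n \ge j_1k_1 + j_2k_2$; a brute-force check over $S_6$ (helper names ours):

```python
from itertools import permutations
from fractions import Fraction

def stirling2(n, k):
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def touchard(k, x):
    # T_k(x) = sum_l {k brace l} x^l
    return sum(stirling2(k, l) * x ** l for l in range(k + 1))

def cycle_counts(perm):
    # map: cycle length -> number of cycles of that length
    n, seen, counts = len(perm), [False] * len(perm), {}
    for i in range(n):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            counts[length] = counts.get(length, 0) + 1
    return counts

n = 6
perms = list(permutations(range(n)))
for (j1, k1), (j2, k2) in (((1, 2), (2, 2)), ((1, 3), (2, 1)), ((2, 1), (3, 1))):
    assert n >= j1 * k1 + j2 * k2
    total = sum(cycle_counts(p).get(j1, 0) ** k1 * cycle_counts(p).get(j2, 0) ** k2
                for p in perms)
    assert Fraction(total, len(perms)) == touchard(k1, Fraction(1, j1)) * touchard(k2, Fraction(1, j2))
```

For example, $E_6\,C_2^{(6)}C_3^{(6)} = \frac{1}{6} = T_1(1/2)\,T_1(1/3)$.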

Proofs of Theorems 1 and 2
Proof of Theorem 1. Since, by (1.15), $F_m^{(n)}$ is a continuous function of $(C_1^{(n)},\dots,C_m^{(n)})$, Theorem C shows that $F_m^{(n)}$ converges weakly to $F_m$. Thus, it follows from the discussion in the first paragraph of section 2 that it suffices to show that
(3.1) $\lim_{n\to\infty} E_n\left(F_m^{(n)}\right)^k = v_{k;m}$, for all $k \in \mathbb{N}$.
Let $n \ge km$. For $D \subset [n]$, let $1_D(\sigma)$ equal 1 or 0 according to whether or not $\sigma \in S_n$ induces a permutation on $D$; that is, according to whether or not $\sigma(D) = D$. Then
(3.2) $F_m^{(n)} = \sum_{D \subset [n]:\,|D| = m} 1_D$,
and
(3.3) $E_n\left(F_m^{(n)}\right)^k = \sum_{D_1,\dots,D_k} E_n\prod_{j=1}^{k}1_{D_j}$,
where the sum is over all $k$-tuples $(D_1,\dots,D_k)$ of $m$-element subsets of $[n]$. There is a one-to-one correspondence between collections $\{D_j\}_{j=1}^{k}$, satisfying $D_j \subset [n]$ and $|D_j| = m$, and collections $\{A_I\}_{I \subset [k]}$ of disjoint sets satisfying $A_I \subset [n]$ and satisfying
(3.4) $\sum_{I:\,i \in I} l_I = m$, for all $i \in [k]$,
where
(3.5) $l_I = |A_I|$, $I \subset [k]$.
The correspondence is through the formula
(3.6) $A_I = \left(\bigcap_{i \in I} D_i\right) \cap \left(\bigcap_{i \in [k] - I}\left([n] - D_i\right)\right)$, $I \subset [k]$.
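The correspondence (3.6) and the constraint (3.4) are easy to verify mechanically on an example (the helper name `atoms` is ours):

```python
from itertools import combinations

def atoms(D_list, n):
    # A_I = (intersection of D_i for i in I) minus every D_i with i not in I, per (3.6)
    k = len(D_list)
    universe = set(range(n))
    A = {}
    for r in range(1, k + 1):
        for I in combinations(range(k), r):
            block = set(universe)
            for i in range(k):
                block &= D_list[i] if i in I else (universe - D_list[i])
            A[I] = block
    return A

n, m = 10, 3
D_list = [{0, 1, 2}, {2, 3, 4}, {0, 2, 9}]
A = atoms(D_list, n)

# The atoms are pairwise disjoint, and (3.4) holds: the sizes l_I with i in I sum to m
blocks = list(A.values())
for B1, B2 in combinations(blocks, 2):
    assert not (B1 & B2)
for i in range(len(D_list)):
    assert sum(len(A[I]) for I in A if i in I) == m
    # each D_i is recovered as the union of the atoms A_I with i in I
    assert set().union(*(A[I] for I in A if i in I)) == D_list[i]
```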