Distribution of the sum-of-digits function of random integers: a survey

We review some probabilistic properties of the sum-of-digits function of random integers. New asymptotic approximations to the total variation distance and its refinements are also derived. Four different approaches are used: a classical probability approach, Stein's method, an analytic approach and a new approach based on Krawtchouk polynomials and the Parseval identity. We also extend the study to a simple, general numeration system for which similar approximation theorems are derived.

"Whatever its mathematical virtues, the literature on sums of digital sums reflects a lack of communication between researchers." In view of the large number of independent discoveries it is likely that we missed some papers in our attempt to give a more complete collection of relevant known results.
In addition to reviewing known stochastic properties of X_n, we will present new approximations to the distribution of X_n. For simplicity, we focus on the binary case q = 2, leaving the straightforward extension to other numeration systems to the interested reader. In particular, our results imply that the total variation distance between the distribution of X_n and a binomial random variable Y_λ with parameters λ := ⌊log_2 n⌋ and 1/2 is asymptotic to the expression given in (1) (see Figure 2). The function F appearing there is bounded and periodic (namely, F(x) = F(x + 1)) with discontinuities at the integers; see (19) for the definition of F for arbitrary x, and Figure 3 for a graphical rendering. We see that, up to an error of order (log_2 n)^{−1/2}, the total variation distance is essentially asymptotic to the absolute difference between the mean and λ/2. Finer approximations will also be derived.
Four different proofs of the total variation estimate will be given, each with its own generality: an elementary probability approach, Stein's method, Fourier analysis, and a new Krawtchouk–Parseval approach. These approaches extend readily to more general frameworks; a simple one, applying in particular to the number of 1s in binary-reflected Gray codes, is briefly considered.

First moment of X n
The mean of X_n is essentially the partial sum of ν_q(j): S_q(n) := n E(X_n) = Σ_{0≤j<n} ν_q(j), which, by the relation ν_q(qj + r) = ν_q(j) + r for 0 ≤ r < q, satisfies a simple divide-and-conquer recurrence, with S_q(n) = 0 for n ≤ 1. In particular, when q = 2, this recurrence has the form S_2(n) = S_2(⌈n/2⌉) + S_2(⌊n/2⌋) + ⌊n/2⌋.
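As a quick sanity check of this recurrence, the following sketch (function names are ours, not from the survey) compares it against the direct definition of S_2(n):

```python
def nu2(m):
    """Number of ones in the binary representation of m."""
    return bin(m).count("1")

def S2_direct(n):
    """S_2(n) = sum_{0 <= j < n} nu_2(j), computed directly."""
    return sum(nu2(j) for j in range(n))

def S2_rec(n):
    """S_2 via the recurrence S_2(n) = S_2(ceil(n/2)) + S_2(floor(n/2)) + floor(n/2)."""
    if n <= 1:
        return 0
    return S2_rec((n + 1) // 2) + S2_rec(n // 2) + n // 2

# The two computations agree for all small n.
assert all(S2_direct(n) == S2_rec(n) for n in range(1, 200))
```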
For many other recurrences for S_2(n), see [100]. Interestingly, the quantity S_2(n) appears naturally in a large number of concrete applications and is listed as A000788 in Sloane's Encyclopedia of Integer Sequences. A partial list for q = 2 is as follows.
• The number of bisecting strategies in certain games [53];
• Linear forms in number theory [87];
• The determinant of a certain matrix of order n [23]; see also [71] for an extension to q ≥ 2;
• Bounds for the number of edges in certain classes of graphs [60,63,101];
• The solution to the recurrence f(n) = max_k {f(k) + f(n − k) + min{k, n − k}} with f(1) = 0, which is exactly S_2(n); concrete instances where this recurrence arises can be found in [61, §2.2.1] and [63,100]; see also [1,101];
• The number of comparators used by Batcher's bitonic sorting network [68];
• The external left length of some binary trees [86];
• The minimum number of comparisons used by top-down recursive mergesort [46], bottom-up mergesort [111], and queue-mergesort [21];
• The number of runs in the output sequence of recursive mergesort with highly erroneous comparisons; see [62].
This list of concrete examples, albeit nonrandom in nature, shows the richness and diversity of the sum-of-digits function. Legendre, in his Théorie des nombres, whose first edition was published in 1798, derived the relation ν_q(n) = n − (q − 1) Σ_{j≥1} ⌊n/q^j⌋; see [85, Tome I, p. 12]. This relation has proved useful in establishing many properties connected to ν_q(n), including notably the identity (5) below. On the other hand, since Σ_{j≥1} ⌊n/q^j⌋ equals the q-adic valuation of n! (namely, the exponent of the largest power of q dividing n!), the relation has also been widely used to study the q-adic valuations of many famous numbers. For an extension of the right-hand side, see [112].
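Legendre's relation is easy to check numerically; here is a minimal sketch (helper names are ours):

```python
def digit_sum(n, q):
    """nu_q(n): sum of the base-q digits of n."""
    s = 0
    while n:
        s += n % q
        n //= q
    return s

def legendre_rhs(n, q):
    """n - (q - 1) * sum_{j >= 1} floor(n / q^j)."""
    total, p = 0, q
    while p <= n:
        total += n // p
        p *= q
    return n - (q - 1) * total

# Legendre's relation holds for every base q >= 2.
assert all(digit_sum(n, q) == legendre_rhs(n, q)
           for q in (2, 3, 10) for n in range(1, 500))
```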
Bush proved his formula by providing upper and lower bounds for the sum Σ_{m<n} ε_j(m), using the periodicity of ε_j: ε_j(m + q^{j+1}) = ε_j(m). In particular, when q = 2, the sequence (ε_j(m))_{m≥0} starts with a run of 2^j zeros followed by 2^j ones. His estimates in fact imply a more precise result (see Figure 4 for q = 2), in which the O-term is optimal. Note that this estimate can also be derived easily from d'Ocagne's expression (2) by observing that the sum ½(q − 1) Σ_{0≤j≤λ} ε_j j q^j provides the major contribution, the other terms being of order O(n).
Bellman and Shapiro [11] were primarily concerned with the binary representation q = 2 and provided an independent proof of Bush's result. They gave two different proofs (one by generating functions and Tauberian theorems, the other by recurrence) and briefly mentioned in a footnote that the remainder can be improved to O(1). The same paper also initiated the very important notion of "dyadically additive" functions, which has later been fruitfully extended and explored, mostly under the name of q-additivity (and its multiplicative counterpart, q-multiplicativity); see [29,52,124] for the early publications and [97] and the papers cited there for more recent developments.
Mirsky [102], following [11], proved the estimate (4). His simple, half-page proof is based on decompositions involving f(n, ℓ, r), the number of integers 0 ≤ j < n such that ε_ℓ(j) = r; then (4) follows from the simple estimate f(n, ℓ, r) = n/q + O(q^ℓ). Mirsky's result was independently re-derived by Cheo and Yien [22] and by Tang [134] (judged to be virtually identical to [22] in MathSciNet), and is referred to as Cheo and Yien's theorem in [25,74]. Cheo and Yien additionally proved in [22] a theorem for the density of X_n, valid for each finite m ≥ 0. Drazin and Griffith [37] studied the sum of integer powers of the digits and derived estimates similar to (4). They also commenced the study of more precise numerical bounds for the O(1)-term in (4), which was continued later in [23, 45, 49-51, 100, 123, 139]. Curiously, no mention is made in [23,45,100,123] of known results for the O(1)-term in (4), and the bounds derived in [45] about half a century later are weaker than those in [37].
The next stage of refinement was accomplished by Trollope in 1968 where he showed that the O(1)-term in (4) is indeed a periodic function when q = 2 for which an explicit expression is also given. His proof is based on d'Ocagne's formula (3), which he derived in [138] in a more general setting.
Delange [30] made an important step towards the ultimate understanding of the underlying periodic function. He extended Trollope's result to any base q ≥ 2 and showed, by a very simple, elegant, elementary proof, the identity (5) (see Figure 4 for q = 2), where F_1(x) = F_1(x + 1) is a continuous, periodic, and nowhere differentiable function (see also [135,137]). His expression for F_1 is given below; see Figure 5 for a plot of −F_1(x) and its first few approximations by ½ log_2 n − E(X_n).
Here {x} denotes the fractional part of x, and g(x) is a Takagi-type function [64,84], expressed through a 1-periodic function h (see Figure 6). Furthermore, the Fourier series expansion of F_1 has also been computed; see also [47] for a systematic approach by analytic means. Delange's proof is based on a simple observation about the partial sums of the individual digit functions ε_j. His paper [30] has since become a classic and has stimulated much recent research on various themes related to digital sums and different numeration systems; different asymptotic tools have also been developed.
In particular, the Trollope–Delange formula (5) for E(X_n), which is not only an asymptotic expansion but also an identity valid for all n ≥ 1, is not exceptional but a distinguishing feature of many digital sums; see below and [47,58,135] for more examples.
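A small numerical illustration of the periodicity in the Trollope–Delange identity: since S_2(2n) = 2S_2(n) + n, the fluctuation E(X_n) − ½ log_2 n is exactly invariant under n ↦ 2n. The sketch below (function names are ours) verifies this:

```python
import math

def mean_Xn(n):
    """E(X_n) = S_2(n) / n for U_n uniform on {0, ..., n-1}."""
    return sum(bin(j).count("1") for j in range(n)) / n

def fluctuation(n):
    """E(X_n) - (1/2) log_2 n, i.e. F_1(log_2 n) in the Trollope-Delange identity."""
    return mean_Xn(n) - 0.5 * math.log2(n)

# S_2(2n) = 2 S_2(n) + n makes the fluctuation exactly 1-periodic in log_2 n.
for n in (3, 5, 11, 100):
    assert abs(fluctuation(2 * n) - fluctuation(n)) < 1e-12
```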
1.2. Beyond the mean: Variance, higher moments and limit distribution of X_n

The first paper dealing with the distribution of X_n beyond the mean value is by Kátai and Mogyoródi [73] in 1968. They derived the asymptotic normality of X_n with a Berry–Esseen rate, where Φ denotes the standard normal distribution function and the variance estimate is implicit in their proof. Their approach consists in decomposing X_n into sums of a suitable number of independent random variables, each assuming the values {0, 1, . . . , q − 1} with equal probability; see (30) below for the binary case. About a decade later, Diaconis [31] obtained, by Stein's method, an optimal Berry–Esseen bound for q = 2 (see Figure 7); he also proved a corresponding local result for q = 2.

Moments of X_n. Stolarsky [131], in addition to giving a wide list of references, carried out a systematic study of the asymptotics of the moments of X_n when q = 2, for any positive integer order m. The O-term there is however too weak to yield a more precise asymptotic approximation to the central moments of X_n of order ≥ 2. Later, Coquet [26] in 1986 improved Stolarsky's result by providing a formula of the Trollope–Delange type (q = 2) involving the lower-order moments with j < m, an induction then being used. In particular, his result for the second moment implies an identity in which F_2 is a bounded, continuous, 1-periodic function; see Figure 8. Coquet [26] mentioned that the function F_2 is nowhere differentiable and that his proofs extend to any q-ary base. An independent proof of the above identity à la Delange was given later by Kirschenhofer [75]; see also Osbaldestin [110] for an interesting discussion of several digital sums, as well as an alternative expression for F_2.
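The binomial flavour of X_n is exact at powers of two: for n = 2^k the binary digits of a uniform integer in {0, . . . , n − 1} are independent fair bits, so X_n ∼ Binom(k, 1/2) exactly. A short check (helper names are ours):

```python
from collections import Counter
from math import comb

def dist_Xn(n):
    """Exact distribution of X_n = nu_2(U_n), U_n uniform on {0, ..., n-1}."""
    c = Counter(bin(j).count("1") for j in range(n))
    return {k: v / n for k, v in c.items()}

# For n = 2^10, X_n coincides with Binom(10, 1/2).
d = dist_Xn(2 ** 10)
assert all(abs(d[k] - comb(10, k) / 2 ** 10) < 1e-12 for k in range(11))
```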
On the other hand, Coquet's expressions for the F_{m,j}'s (except for F_2) are nonconstructive; see [59] for the third moment. Dumont and Thomas [42] studied the moments of X_n in a general framework and derived more explicit expressions for the F_{m,j}'s, as well as properties such as continuity and nowhere differentiability. Their approach relies on substitutions on a finite alphabet and matrix analysis; see [41]. In addition to the moments, they also considered in the same paper [42] the central moments of X_n − ½(q − 1) log_q n and showed that the associated periodic functions are continuous, 1-periodic, and nowhere differentiable. These estimates imply of course the asymptotic normality of X_n by the method of moments, which Dumont and Thomas later established in [43] (in a more general framework). Other constructive expressions, together with interesting functional properties, were derived by Okada et al. [107], based on binomial measures; see also [104] and the recent paper [81]. Extensions of the same approach to cover the moments of X_n for any q ≥ 2 were carried out in [104,105], the required tools being developed in [109].
Unaware of Stolarsky's and Coquet's results, Kennedy and Cooper considered the case q = 10: m = 2 in [74] and any positive integer m in [24], but with a non-optimal error term in the corresponding version of (9) for q = 10; see also [14]. The optimal error term in fact follows from Dumont and Thomas's result in [42] (see also [104]) and was later re-proved by Yu in [142] (see also [16] for an extension).
A general procedure, based on the classical approach via Dirichlet series and the Mellin–Perron integral formula (fully discussed in [47]), was developed in [58]; it leads to absolutely convergent Fourier series expansions for the G_{m,j}'s. The approach there extends easily to the q-ary case.
Probability generating function of X_n. By definition, the probability generating function of X_n is P_n(y) := E(y^{X_n}) = n^{−1} Σ_{0≤j<n} y^{ν_q(j)}. The special case q = y = 2 appeared as the total number of odd binomial coefficients C(j, i) with 0 ≤ i ≤ j < n, a result derived by Glaisher [55] in 1899; earlier results of a similar character can be found in the papers by Kummer [82] and by Lucas [89]. For another interesting occurrence in cellular automata, see [44,140,141].
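Glaisher's count can be verified directly for small n; the sketch below (function names are ours) compares the number of odd binomial coefficients with n·P_n(2) = Σ_{j<n} 2^{ν_2(j)}:

```python
from math import comb

def odd_binomials(n):
    """Number of odd C(j, i) with 0 <= i <= j < n."""
    return sum(1 for j in range(n) for i in range(j + 1) if comb(j, i) % 2 == 1)

def pgf_value(n, y):
    """n * P_n(y) = sum_{j < n} y^{nu_2(j)}."""
    return sum(y ** bin(j).count("1") for j in range(n))

# Glaisher (1899): the two counts agree.
assert all(odd_binomials(n) == pgf_value(n, 2) for n in range(1, 60))
```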
The distribution of X_n is closely connected to the notion of q-additive and q-multiplicative functions, first introduced by Bellman and Shapiro [11], and later systematically investigated by Gel'fond [52] and Delange [29]; see also [4,97] and the references cited there. We did not find a comprehensive survey of q-additive or q-multiplicative functions, but a simple search on MathSciNet returned more than 152 papers (as of July 1, 2014); see [12, Ch. 9] and [4].
Okada et al. [104,108] later gave more explicit expressions for the periodic function G by means of multinomial measures. A different approach was provided in [81]. A Fourier expansion for q = 2 was given in [58], which is absolutely convergent when √2 − 1 < y < √2 + 1. The closed-form expression (12) contains much information; for example, d'Ocagne's formula (2) follows from (12) by differentiating with respect to y and then substituting y = 1. We will see later that (12) is also helpful in proving effective approximations for distances between X_n and some binomial distributions.

Asymptotic distribution of sum-of-digits function
We mentioned Kátai and Mogyoródi's [73] and Diaconis's [31] Berry–Esseen bounds for X_n. We group here known results concerning limit and approximation theorems for X_n according to the major approach used, focusing mostly on the case q = 2 for simplicity of presentation and comparison. See Table 1 for a summary.

Table 1: A summary of known approaches leading to the asymptotic normality of X_n; here CLT denotes "central limit theorem" and LLT "local limit theorem".

Classical probabilistic approach
Kátai and Mogyoródi's approach uses elementary probability tools and bases their Berry–Esseen bound (7) on the decomposition (14) (for q = 2), which follows immediately from (13). Here Y_j denotes the sum of j independent Bernoulli random variables, each assuming the values 0 and 1 with equal probability 1/2. The identity (14) implies that the random variable X_n is itself a mixture of binomial distributions. The remaining proof then proceeds along standard classical lines (using estimates for sums of independent random variables). Heppner [65] later proved, in the same spirit, a simple Chernoff-type inequality for X_n when q = 2 (λ = ⌊log_2 n⌋); since the right-hand side of this inequality decreases exponentially as C grows, one concludes that ν_2(m) is close to (λ + 1)/2 for most m < n. This observation is useful in establishing precise estimates for sums of the form Σ_{m∈B_n} ν_2(m), where B_n is an arbitrary subset of the nonnegative integers less than n. For results concerning the distribution of ν_q(n) along given subsequences of integers (such as the primes and the squares), see [98,99] and the references therein. Similar estimates will be used below.
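The mixture decomposition (14)/(30) can be checked numerically: writing n = 2^{λ_1} + · · · + 2^{λ_s} with λ_1 > · · · > λ_s, X_n equals Y_{λ_j} + (j − 1) with probability 2^{λ_j}/n. A sketch under this reading (helper names are ours):

```python
from math import comb
from collections import Counter

def exact_dist(n):
    """Exact law of X_n = nu_2(U_n) for U_n uniform on {0, ..., n-1}."""
    c = Counter(bin(j).count("1") for j in range(n))
    return {k: v / n for k, v in c.items()}

def mixture_dist(n):
    """Mixture form: with prob. 2^{l_j}/n, X_n = Binom(l_j, 1/2) + (j - 1)."""
    exponents = [i for i in range(n.bit_length() - 1, -1, -1) if n >> i & 1]
    dist = Counter()
    for j, l in enumerate(exponents):   # zero-indexed j gives the shift j = (j+1) - 1
        for k in range(l + 1):
            dist[k + j] += (2 ** l / n) * comb(l, k) / 2 ** l
    return dist

d1, d2 = exact_dist(1234), mixture_dist(1234)
assert all(abs(d1.get(k, 0) - d2.get(k, 0)) < 1e-12 for k in set(d1) | set(d2))
```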
A central limit theorem for the distribution of the values of the sequence ν_2(3n) − ν_2(n) was derived by Kátai [72], while the corresponding local limit theorem was given independently by Stolarsky [132]. The proof of Stolarsky's local limit theorem starts from matrix generating functions to obtain a closed-form expression, and then applies the saddle-point method to the corresponding sum. Schmidt [121] then proved, motivated by Stolarsky's [132] result, a multidimensional central limit theorem (for the joint distribution of the values of ν_2(K_1 n), . . . , ν_2(K_d n) for odd numbers K_1, . . . , K_d) using tools from Markov chains. The intuition behind such a limit law is that the two random variables ν_2(K_1 U_n) and ν_2(K_2 U_n) are more or less independent, where U_n ∼ Uniform[0, n − 1] and K_1, K_2 are odd. Dumont and Thomas [43] again use Markov chains and large deviations to characterize the asymptotic distribution of a class of digital sums (covering in particular X_n) associated with substitutions, a Berry–Esseen bound also being derived.
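For a rough feel of the statistic ν_2(3m) − ν_2(m) behind these limit theorems, the following exploratory sketch (entirely ours) checks only elementary facts, not the central limit theorem itself:

```python
def nu2(m):
    """Number of ones in the binary representation of m."""
    return bin(m).count("1")

diffs = [nu2(3 * m) - nu2(m) for m in range(1, 1 << 14)]

# The difference takes both signs and is heavily concentrated near small values.
assert min(diffs) < 0 < max(diffs)
assert sum(abs(d) <= 10 for d in diffs) / len(diffs) > 0.5
```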

q-multiplicative functions
Since ν_q(n) is q-additive, the function e^{itν_q(n)} is q-multiplicative. The distribution of the values of q-additive functions has been widely studied in the number-theoretic literature. We mention briefly an early result: Delange [29] showed that any q-multiplicative function f with |f| ≤ 1 admits a product-type asymptotic representation for its average (see [94]). This result roughly says that the mean value of a q-multiplicative function with bounded modulus is close to some multinomial distribution.
In particular, if one applies this result formally to f(n) = e^{itν_q(n)}, then the left-hand side corresponds to the characteristic function of X_n, while the dominant term on the right-hand side corresponds to a multinomial distribution. However, we cannot conclude directly from this result that X_n is asymptotically multinomially distributed, due to the lack of uniformity in t. For asymptotic normality and related results for q-additive functions, see [9,10,93,130] and [12, Ch. 9].

Stein's method
Stein's method is a method of probability approximation invented by Charles Stein in 1972 [128]. It does not involve Fourier analysis but hinges on the solution of a functional equation. In a nutshell, Stein's method can be described as follows. Let W and Z be random variables. In approximating the distribution L(W) of W by the distribution L(Z) of Z, the difference between E(h(W)) and E(h(Z)) for a class of functions h is expressed as E(h(W)) − E(h(Z)) = E(L[f_h](W)), where L is a linear operator and f_h a bounded solution of the equation L[f_h] = h − E(h(Z)). The error E(L[f_h](W)) is then bounded by studying the solution f_h and exploiting the probabilistic properties of W. The operator L has the property that E(L[f](Z)) = 0 for a sufficiently large class of f and therefore characterizes L(Z); for Poisson approximation, that is, if Z has the Poisson distribution with mean λ, one may take L[f](x) = λf(x + 1) − xf(x) [17]. The operator L is not unique; it can be chosen to be the generator of a Markov process whose stationary distribution is the approximating distribution L(Z). This generator approach to Stein's method is due to Barbour [5,6]. Applied to X_n, this yields a bound which implies (8), where λ_0 := ⌊log_2 n⌋. In his book [129], Stein considered binomial approximation for X_n and, using the corresponding equation for f defined on {0, 1, . . . , k}, obtained a bound on the approximation error; see also [67]. By using the generator approach, Loh [88] extended the binary expansion of X_n to the q-ary expansion for any base q ≥ 2, and proved a multinomial approximation, where X_n denotes the q-dimensional random vector whose i-th component is the number of occurrences of the digit i in the q-ary expansion and Z ∼ Multinomial(⌊log_q n⌋; 1/q, . . . , 1/q).
Barbour and Chen [7] also used the generator approach to improve the error bound in the binary case to O(1/λ), provided the approximating binomial distribution Y_{λ_0} is replaced by Binom(2e(n), 1/2), where e(n) is the mean of X_n, or by a mixture of Y_{λ_0−1} with either Y_{λ_0} or Y_{λ_0−2}, chosen to have mean e(n).

Generating functions and analytic approach
Schmid [120] derived, improving earlier results by Stolarsky [132] and by Schmidt [121], a very precise multidimensional local limit theorem of the form (16). His proof builds on matrix generating functions and uses tools from Markov chains, following Stolarsky and Schmidt. In addition to providing an optimal convergence rate for the corresponding multidimensional central limit theorem (derived in [121]), his result implies very tight estimates for the distribution of ν_2(kn) − ν_2(n), a problem that has received much attention in the literature; see the recent paper [27], the Ph.D. dissertation [130], and the references therein.
In particular, (16) also leads to a local limit theorem for X n with optimal rate when q = 2.
Drmota and Gajdosik [40] used generating functions and a complex-analytic method to prove a local limit theorem for the sum-of-digits function in more general numeration systems; see the paper by Madritsch [92] and the references cited there for more recent developments.

Other approaches
We mentioned the result (11) by Dumont and Thomas [42] for the central moments of X_n, which implies the asymptotic normality of X_n by the Fréchet–Shohat moment convergence theorem.
The same method of moments was later applied by Bassily and Kátai [9] to derive the asymptotic normality of q-additive functionals; see also [54,91,92].

New results
We will derive a few approximation theorems for the distribution of X n ; different approaches will be developed, each having its own advantages and constraints. In particular, an expansion for a refined version of the total variation distance will be given, which will cover (1) as a special case.
Here and throughout this paper, we consider only the case q = 2 for simplicity. The following notation will be used consistently. Let λ = λ_1 = ⌊log_2 n⌋. Let H_m(x) denote the Hermite polynomials, and define the sequence {h_m} by (17).

Theorem 2.1. Let X_n denote the number of 1s in the binary representation of a random integer, where each of the integers {0, 1, . . . , n − 1} is chosen with equal probability 1/n. Then P(X_n = k) admits an asymptotic expansion, where the coefficients a_r(n) are defined through the probability generating function (see (13)) and ∆ denotes the difference operator. Two explicit expressions for a_r(n) are as follows.
for r = 0, 1, . . . . Taking m = 1 and dividing the left-hand side by 2, we obtain an asymptotic approximation to the total variation distance; see [20,125] for similar results.
Corollary 2.2. The total variation distance between the distribution of X_n and that of the binomial random variable Y_λ satisfies the asymptotic estimate (1), where the bounded periodic function F is defined in (19) for x ∈ (0, 1], writing 2^x = Σ_{j≥0} 2^{−d_j} ∈ (1, 2] with 0 = d_0 < d_1 < · · ·, and extended by F(x + 1) = F(x) for other values of x.
Proof. Observe first that a_0(n) = 1, and apply the definition of F. On the other hand, since H_1(x) = x, we get h_1 = √(2/π). By examining the values of F near the endpoints of (0, 1], we see that F is continuous except at the endpoints (integers).
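The total variation distance in Corollary 2.2 can be computed exactly for moderate n; the following sketch (names are ours) does so, noting that the distance vanishes when n is a power of two, since then X_n is exactly binomial:

```python
from math import comb
from collections import Counter

def d_tv(n):
    """Exact d_TV between L(X_n) and Binom(lam, 1/2), lam = floor(log_2 n)."""
    lam = n.bit_length() - 1
    px = Counter(bin(j).count("1") for j in range(n))
    return 0.5 * sum(abs(px.get(k, 0) / n - comb(lam, k) / 2 ** lam)
                     for k in range(lam + 1))

assert d_tv(2 ** 12) < 1e-12          # exact binomial at powers of two
assert 0 < d_tv(3 * 2 ** 10) < 0.5    # strictly positive otherwise
```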
Taking m = 2, we obtain a refined estimate with smaller errors.
where F_2(x) is defined for x ∈ (0, 1] by writing 2^x = Σ_{j≥0} 2^{−d_j} as above, and F_2(x + 1) = F_2(x) for other values of x (see Figure 11 for a plot). The two functions F(x) and F_2(x) may assume the value zero when x is not an integer; see Figures 10 and 11. This means that in such cases the error term is of a smaller order, and the right-hand side of our result gives only an O-estimate. One naturally wonders whether there are other simple uniform approximants for the total variation distance. We propose one in the following.
This result is similar to the estimate proved by Soon [125] (see also [20]), who considered the distance d_TV(L(X_n), L(Y_{λ+1})) instead of d_TV(L(X_n), L(Y_λ)), using Stein's method.
We see roughly that the wider the gap between λ and λ 2 , the smaller the total variation distance is.

Proofs
We first prove Theorem 2.1 by a direct analytic approach based on Fourier analysis. A closely connected semigroup approach (first developed by Deheuvels and Pfeifer for the Poisson distribution; see [28]), which relies on more algebraic formulations and manipulations, can also be used for the same purpose; see [117]. Then Theorem 2.4 is proved by two different approaches: one by Stein's method, and the other by a standard probability argument, which starts by decomposing the distribution of X_n into a sum of binomial distributions. Our adaptation of Stein's method in fact leads to a refinement of Theorem 2.4, which will be given in Section 3.3. For methodological interest, we also include another approach using the Krawtchouk polynomials and the Parseval identity, which is the binomial analogue of the Charlier–Parseval approach developed earlier in detail in [143].

Analytic approach: Proof of Theorem 2.1
We now prove Theorem 2.1 and write the proof in a more general way that can be readily amended for dealing with other cases such as Gray codes; see Section 4.2 below.
In terms of Q_n, the relation (24) takes the form Q_{2n}(y) = (1 + y)Q_n(y).
These two recurrences can be combined as Q_n(y) = (1 + y)Q_{⌊n/2⌋}(y) + δ_n y^{ν_2(⌊n/2⌋)} for all n ≥ 0, where δ_n := n − 2⌊n/2⌋ ∈ {0, 1}. By iteration, we then get a product representation for any n ≥ 1; compare (13). This means that P_n has a product form in which the ρ_j are nonnegative integers with ρ_j ≤ j and |δ_j| ≤ 1.
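The combined recurrence is easy to test numerically by evaluating Q_n(y) = Σ_{j<n} y^{ν_2(j)} at a few points y (a sketch, with our own helper names):

```python
def nu2(m):
    """Number of ones in the binary representation of m."""
    return bin(m).count("1")

def Q(n, y):
    """Q_n(y) = sum_{0 <= j < n} y^{nu_2(j)} = n * P_n(y)."""
    return sum(y ** nu2(j) for j in range(n))

# Q_n(y) = (1 + y) Q_{floor(n/2)}(y) + (n mod 2) * y^{nu_2(floor(n/2))}.
for y in (0.5, 1.0, 2.0, 3.0):
    for n in range(1, 300):
        rhs = (1 + y) * Q(n // 2, y) + (n % 2) * y ** nu2(n // 2)
        assert abs(Q(n, y) - rhs) < 1e-9
```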

Local expansion of φ n (y)
The approach we use here relies on the intuition that if φ_n is sufficiently "smooth", then X_n is close to the binomial distribution of Y_λ. More precisely, expand φ_n(y) = Σ_{r≥0} a_r(n)(y − 1)^r for |y − 1| ≤ 1/2 − ε, where ε > 0 is an arbitrarily small number.
Proof. We indeed prove the stronger estimate n2^{−λ}|a_r(n)| ≤ 3 · 2^{r−1} for all r ≥ 1, which then implies (26). Let [y^r]f(y) denote the coefficient of y^r in the Taylor expansion of f. Since ρ_j ≤ j, the bound follows, for r ≥ 1, by extracting coefficients in the product representation of P_n, as required.

3.1.3. An asymptotic expansion for P(X_n = k)

Proposition 3.2. For all integers 0 ≤ r ≤ λ and each m ≥ 1, we have the expansion (27).

Proof. By Cauchy's integral formula for the coefficients of an analytic function, we have P(X_n = k) = (2πi)^{−1} ∮_{|y|=1} y^{−k−1} P_n(y) dy. Since e^{i/2} lies inside the circle |y − 1| = 1/2, we evaluate the portion I_1 of the integral near y = 1 by applying Lemma 3.1. The integral in the O-term is then estimated as follows.
Now, substituting this estimate into the expression for I_1 and using the stated relation, we obtain the main term of the expansion. On the other hand, the remaining part of the integral is exponentially small by (13). Choosing c_0 so as to balance the two terms in the O-symbol, namely c_0 = log_2(1 + 8π² log 2), we obtain the final error term. This proves the proposition.

Estimates for the differences of binomial coefficients
Lemma 3.3. For r ≥ 0, we have the estimates (28) and (29), where h_r is defined in (17).
Proof. The first estimate follows by (28) and an analysis similar to that used for (27). For the proof of (29), we apply the standard saddle-point method and conclude by (17).
Note that when r = 1, a closed-form expression for the sum is available. For higher values of r, a closed-form expression for Σ_{0≤k≤λ} |∆^r C(λ, k)| can be derived in terms of the zeros of the Krawtchouk polynomials; see [117] for r = 2.

Proof of Theorem 2.1
Proof. Applying Proposition 3.2 and summing over all k, we note that the terms corresponding to r = m + 1 contribute an error of order λ^{−(m+1)/2}. Theorem 2.1 then follows from (29).
We will later formulate a simple framework of numeration systems for which the same type of results as X n hold, using the same method of proofs.

Elementary probability approach: Proof of Theorem 2.4
A crucial observation, elaborated here, is the fact that X_n is itself a mixture of binomial distributions; more precisely, we have the decomposition (30) of Kátai and Mogyoródi (14). A direct probabilistic proof of this relation is as follows. Suppose U_n is uniformly distributed on {0, 1, . . . , n − 1}; then by definition X_n = ν_2(U_n). First, we split U_n according to its leading binary digits. On the other hand, since ν_2(2^r + j) = 1 + ν_2(j) for 0 ≤ j < 2^r, the digit sum shifts accordingly. We can now split {0, 1, . . . , n − 1} into the consecutive blocks A_j of (31) for 1 ≤ j ≤ s. Clearly, P(U_n ∈ A_j) = 2^{λ_j}/n. We then obtain (30). We group in the following lemma a few simple properties of the total variation distances involving Y_k, which will be needed later.
Lemma 3.4. Let Y_k be a binomial random variable with parameters k and 1/2. Then the total variation estimates stated here hold. Proof. Since P(Y_k = j) increases monotonically for 0 ≤ j ≤ ⌊k/2⌋ and decreases monotonically for ⌈k/2⌉ ≤ j ≤ k, the first estimate follows; the remaining ones are proved in a similar way. This proves the lemma.

Proof of Theorem 2.4 when λ − λ_2 ≤ √λ
Consider first the case when λ − λ_2 ≤ √λ. By (30) and Lemma 3.4, we obtain an upper bound for the total variation distance.
To obtain a lower bound, we apply again (32) and Lemma 3.4.

Proof of Theorem 2.4 when
for ε > 0, by the central limit theorem of the binomial distribution (with rate).
Combining the upper and lower bounds, we get the assertion of the theorem if c ≤ λ − λ_2 ≤ √λ and c is sufficiently large.

Proof of Theorem 2.4 when
The lower bound becomes less precise if (λ − λ_2)/√λ → ∞. In this case, we first observe that the total variation distance never exceeds 1. Applying Chebyshev's inequality then gives the required upper bound. On the other hand, when λ_3 = λ_2 − 1, we use (32) instead. This completes the proof of Theorem 2.4.

Stein's method: An alternative proof of Theorem 2.4
The sum-of-digits function was one of the first instances used to demonstrate the effectiveness of Stein's method (see [31,129]), with an optimal approximation rate. This method centers on exploiting an equation that characterizes the limiting measure, which, in the case of the binomial distribution, is given by (15) and can be derived in the following way.

Stein's equation for binomial distribution
Since Y_k is binomially distributed with parameters k and p ∈ (0, 1), the probabilities P(Y_k = j) satisfy the difference equation (33). Following Stein's idea [128] for deriving the characteristic equation for the normal distribution by integration by parts, we consider the average E(g(Y_k)) and apply summation by parts, which yields, by (33), the identity (34), valid for any function g : {0, 1, 2, . . . , k} → R (here q := 1 − p). A simpler proof of (34) starts with the relation (35), multiplies both sides by g(j), and sums over all indices j, which again gives (34). Conversely, if for some discrete random variable Z the identity E(qZg(Z − 1) + p(Z − k)g(Z)) = 0 holds for every function g, then the probabilities P(Z = j) satisfy the same equation (35) as P(Y_k = j); thus Z and Y_k have the same distribution.
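The characterizing identity (34) can be checked numerically for Z ∼ Binom(k, p) and a few test functions g (a sketch; names are ours):

```python
from math import comb

def stein_lhs(k, p, g):
    """E[q*Z*g(Z-1) + p*(Z-k)*g(Z)] for Z ~ Binom(k, p), with q = 1 - p."""
    q = 1 - p
    pmf = [comb(k, j) * p ** j * q ** (k - j) for j in range(k + 1)]
    return sum(pmf[j] * (q * j * g(j - 1) + p * (j - k) * g(j))
               for j in range(k + 1))

# The identity E[...] = 0 holds for arbitrary g.
for g in (lambda x: x, lambda x: x * x, lambda x: 3.0 ** -max(x, 0)):
    assert abs(stein_lhs(10, 0.3, g)) < 1e-12
```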

Binomial approximation
In the special case when p = q = 1/2, the identity (34) simplifies accordingly. Thus we expect the corresponding quantity to be small for any random variable whose distribution is close to Binom(k, 1/2). Assume h : {0, 1, . . . , λ} → C is an arbitrary function, and let g be a solution of the recurrence relation (37). Then we can represent the difference of means E(h(X_n)) − E(h(Y_λ)) as an expectation involving g; by (37), this expectation would be zero if X_n were distributed according to the binomial distribution Binom(λ, 1/2). Thus we expect it to be small if the distribution of X_n is close to Binom(λ, 1/2).

By Stein's equation
Recalling that A_j is defined in (31), we see that P(X_n < x | U_n ∈ A_j) = P(Y_{λ_j} + j − 1 < x). The term with j = 1 in the resulting sum is zero since Y_λ is binomially distributed. Hence, with g_j(x) := g(x + j − 1), we obtain an expression whose second sum is identically zero by (36). It follows that the difference of means can be rewritten in terms of two random variables Q_1 and Q_2, where Q_1 takes the value λ_j − λ + j − 1 and Q_2 the value j − 1 whenever U_n ∈ A_j, for 1 ≤ j ≤ s. Note that E(Q_1) is small, and similarly E(Q_2) = O(2^{−(λ−λ_2)}).

Solving the equation
Solving the equation (37) is equivalent to finding the solution x_m of a difference equation for 1 ≤ m ≤ k (note that x_{−1} and x_{k+1} do not affect the solution of this equation and can therefore be taken to be zero), where the δ_m's are given and satisfy a normalization condition. The solution is obtained by introducing the new variables z_m = (k − m) C(k, m) x_m, for which the difference equation takes a first-order form. Iterating, we obtain the following solution to Stein's equation (37).
Proof. If m ≤ λ/2, then, by the monotonicity of the sequence y_m, we obtain the desired bound. The case when m > λ/2 is treated similarly, using the symmetry of the binomial coefficients.

Proof of Theorem 2.4 by Stein's method
Assume now that A ⊂ R is an arbitrary set, and take h to be the indicator function of A. The bounds above then yield the theorem.

A refinement of Theorem 2.4
A finer result can be obtained by using the following lemma. Proof. If m ≤ λ/2, then the claimed bound follows by an elementary inequality; in a similar way we obtain the estimate in the case when m > λ/2.
The following result is similar in nature to one obtained by Soon [125] for unbounded functions h(x), which he later applied to derive several large and moderate deviation results for X_n. Proposition 3.9. Assume h is any real function such that 0 ≤ h(x) ≤ 1. Then the estimate stated here holds, where a_1(n) = F(log_2 n) is defined in (20).
Proof. The lemma implies that g(x + j − 1) = g(x) + O(j/k). Since Y_{k+s} has the same distribution as Y_k + W_s, where W_s is independent and binomially distributed, W_s ∼ Binom(s, 1/2), we can replace the mean E(g(Y_{k+s})) by E(g(Y_k + W_s)), the error so introduced being bounded with the help of the estimate |W_s| ≤ s. We then evaluate the quantity E(g(Y_λ)) appearing in the last expression, using the convention Σ_{m=a}^{b} = −Σ_{m=b}^{a} when a > b, and split the sum into two parts. If λ/2 ≤ r ≤ λ/2 + λ^{3/4}, then the corresponding terms are uniformly small; the same estimate holds when λ/2 − λ^{3/4} ≤ r ≤ λ/2. This proves the proposition.
3.3.6. Corollaries of Proposition 3.9

Corollary 3.10. We have

Proof. By the definition of the total variation distance, where the supremum is taken over all functions h assuming only the binary values {0, 1}. It is easy to see that the supremum of the average containing h in the above relation is attained by the function given, and we thus get, by Proposition 3.9, the stated estimate.

Corollary 3.11. For all c ≤ λ − λ_2 with c large enough, we have

Proof. This follows from the estimate (39), since the quantity |a_1(n)| can be bounded from below.

Remark. The version of Stein's method we adopted above for the analysis of the sum-of-digits function differs from Stein's original approach [129]. First, he used Y_{λ+1}, in lieu of Y_λ, as a good approximation to X_n, and derived a uniform bound for the point probabilities. Second, instead of exploiting the fact that X_n is a mixture of binomial distributions, his analysis is subtler and is based on the construction of an exchangeable pair. By doing so he managed to simplify the right-hand side of (37) to an expression involving a random variable Q such that 0 ≤ E(Q) ≤ 2, where g_h is the solution to the recurrence equation for x ∈ {0, 1, . . . , λ} whose precise expression is given by Lemma 3.5 with k = λ + 1. The estimate of Lemma 3.7, together with the property 0 ≤ E(Q) ≤ 2, now immediately gives the estimate of the total variation distance d_TV(L(X_n), L(Y_{λ+1})) = O(λ^{−1/2}). Further applications of similar ideas will be explored elsewhere.
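The total variation estimate of Corollary 3.10 can be explored numerically. The sketch below computes d_TV(L(X_n), L(Y_λ)) exactly, taking λ = ⌊log₂ n⌋ (our reading of λ = log₂ n when n is not a power of two); the distance vanishes exactly when n = 2^k, since then X_n is itself Binomial(k, 1/2).

```python
from math import comb

def binary_weight_pmf(n):
    """Exact pmf of the binary sum-of-digits of a uniform integer in [0, n)."""
    counts = {}
    for m in range(n):
        w = bin(m).count("1")
        counts[w] = counts.get(w, 0) + 1
    return {k: v / n for k, v in counts.items()}

def tv_to_binomial(n):
    """Total variation distance between L(X_n) and Binomial(floor(log2 n), 1/2)."""
    lam = n.bit_length() - 1          # floor(log2 n)
    p = binary_weight_pmf(n)
    q = {k: comb(lam, k) / 2 ** lam for k in range(lam + 1)}
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in support)
```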

The Krawtchouk-Parseval approach: χ 2 -distance
We examine in this section yet another approach, based on properties of the Krawtchouk polynomials and the Parseval identity (or, more generally, Plancherel's formula). The approach is the binomial analogue of the Charlier-Parseval approach we developed and explored earlier in [143]. We consider only the simplest case, that of the χ 2 -distance, leaving to the interested reader the extension to other distances, which follows readily from the framework developed in [143].

Krawtchouk polynomials
Krawtchouk (or Kravchuk) polynomials, introduced in the late 1920s, are polynomials orthogonal with respect to the binomial distribution. Over the years, they have frequently been encountered in a variety of areas, including combinatorics, number theory, asymptotic analysis, image analysis, coding theory, etc. In probability theory, their appearance is perhaps even more to be expected than in other areas, owing to the prevalence of the binomial distribution, sometimes without the explicit connection being noticed; see Diaconis's monograph [32] for more information and applications, and [70,90,122,136] for more recent updates. Despite the large literature on the diverse properties of Krawtchouk polynomials and the high usefulness of the Parseval identity, we did not find applications of the corresponding Parseval identity similar to ours; see, however, [33,103] for a direct manipulation of Fourier integrals.
We start by reviewing the definition of Krawtchouk polynomials and some of their well-known properties (see [133, pp. 35-37]).
Assume that p and q are nonnegative real numbers such that p + q = 1. Introduce the notation given below. The Krawtchouk polynomials K_n(t) = K_n(N, t) are defined accordingly. Multiplying both sides by B(N, t) z^t and summing over all t from 0 to N, we obtain the generating identity. Taking the coefficient of w^n on both sides, we get the stated expression. On the other hand, by (41), we have the complementary identity. Accordingly, we obtain the orthogonality relation
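The orthogonality relation can be verified directly. The following sketch uses one common normalization of the Krawtchouk polynomials (conventions vary across the literature and may differ from (41)); with this normalization, ⟨K_m, K_n⟩ = δ_{mn} · C(N, n)(pq)^n under the Binomial(N, p) weight.

```python
from math import comb

def krawtchouk(n, x, N, p=0.5):
    """K_n(x; N, p) in one common normalization:
       K_n(x) = sum_j (-1)^j C(x, j) C(N - x, n - j) q^j p^{n-j},  q = 1 - p."""
    q = 1 - p
    return sum((-1) ** j * comb(x, j) * comb(N - x, n - j) * q**j * p**(n - j)
               for j in range(n + 1))

def binomial_inner(m, n, N, p=0.5):
    """Inner product <K_m, K_n> with respect to the Binomial(N, p) weight B(N, t)."""
    q = 1 - p
    return sum(comb(N, t) * p**t * q**(N - t)
               * krawtchouk(m, t, N, p) * krawtchouk(n, t, N, p)
               for t in range(N + 1))
```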

The Parseval identity for Krawtchouk polynomials
Let F(z) be a polynomial of degree not greater than N, so that we have the stated expansion. Squaring this identity, multiplying it by B(N, t) and summing the resulting identity over t, we obtain the quadratic form. By the definition (41), we deduce the corresponding evaluation. Comparing this identity with (43), we conclude the Parseval identity, where c_j is defined by
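The resulting Parseval identity can be illustrated numerically: expanding a polynomial with arbitrarily chosen coefficients c_j in the Krawtchouk basis, the binomial-weighted sum of squares equals ∑_j c_j² C(N, j)(pq)^j. The normalization of K_n below is one common convention and may differ from (41).

```python
from math import comb

def kraw(n, x, N, p=0.5):
    """Krawtchouk polynomial K_n(x; N, p), common normalization."""
    q = 1 - p
    return sum((-1) ** j * comb(x, j) * comb(N - x, n - j) * q**j * p**(n - j)
               for j in range(n + 1))

N, p, q = 6, 0.5, 0.5
c = [1.0, -2.0, 0.5, 0.0, 3.0, 0.0, -1.0]       # arbitrary coefficients c_0, ..., c_N
F = [sum(c[j] * kraw(j, t, N, p) for j in range(N + 1)) for t in range(N + 1)]

# Parseval: sum_t B(N, t) F(t)^2  ==  sum_j c_j^2 C(N, j) (p q)^j
lhs = sum(comb(N, t) * p**t * q**(N - t) * F[t] ** 2 for t in range(N + 1))
rhs = sum(c[j] ** 2 * comb(N, j) * (p * q) ** j for j in range(N + 1))
```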

Now by the Parseval identity
Comparing (44) with (43), and using the above relation, we obtain an identity that is crucial for deriving the asymptotics of the χ 2 -distance.

The χ 2 -distance
For any non-negative integer-valued random variables Z and W, the χ 2 -distance is defined as below, provided that the series on the right-hand side converges. It possesses two important properties. First, its square root bounds the total variation distance from above. Second, it also provides an effective upper bound for the Kullback-Leibler divergence (or information divergence), a very useful measure in information theory and related applications.
Theorem 3.12. The χ 2 -distance between the distribution of X_n and the binomial distribution of Y_λ satisfies the estimate below.

Proof. Consider the quantity defined in terms of the probability generating function P_n of X_n given in (23). Then, by (13), we need the elementary inequality, valid for nonnegative integers a, b with a + b ≥ 1. Applying this inequality with p = q = 1/2 and N = λ, and noting that the function √u/(1 + u) attains its maximum at the point u = 1, we obtain the stated bound. This proves the theorem.
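The two properties of the χ²-distance quoted above, namely d_TV ≤ (1/2)√χ² and the Kullback-Leibler bound KL ≤ log(1 + χ²), can be checked numerically for X_n. The sketch takes λ = ⌊log₂ n⌋ (our reading for n not a power of two); the function names are ours.

```python
from math import comb, log, sqrt

def xn_pmf(n):
    """Exact pmf of the binary sum-of-digits X_n of a uniform integer in [0, n)."""
    counts = {}
    for m in range(n):
        w = bin(m).count("1")
        counts[w] = counts.get(w, 0) + 1
    return {k: v / n for k, v in counts.items()}

def chi2_tv_kl(n):
    """chi^2-distance, total variation distance, and KL divergence between
    L(X_n) and Binomial(lam, 1/2), with lam = floor(log2 n)."""
    lam = n.bit_length() - 1
    P = xn_pmf(n)
    Q = {k: comb(lam, k) / 2 ** lam for k in range(lam + 1)}
    chi2 = sum((P.get(k, 0.0) - q) ** 2 / q for k, q in Q.items())
    tv = 0.5 * sum(abs(P.get(k, 0.0) - q) for k, q in Q.items())
    kl = sum(p * log(p / Q[k]) for k, p in P.items())
    return chi2, tv, kl
```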
Finer results can be derived by developing techniques similar to those used in [143] for Poisson approximation.

A general numeration system and applications
The properties we studied above can readily be extended to a more general framework: a numeration system in which each integer is encoded by a different binary string, subject to the sole condition (46), where Z_n denotes the number of 1s in the coding string of a random integer, assuming that each of the first n nonnegative integers is equally likely, and I ∼ Bernoulli(1/2). For definiteness, let Z_0 = Z_1 = 0. This simple scheme covers in particular the binary coding of X_n above (as is easily checked) and the binary reflected Gray code, which will be discussed in more detail later. Let µ(n) denote the number of 1s in the coding of n in such a numeration system. All our results below say, roughly, that such a numeration system does not differ much from the binary coding, although the codings inside each 2^k-block can be rather flexible.
Theorem 4.1 (Local limit theorem). Assume that Z_n satisfies (46). Then Z_n is asymptotically normally distributed, uniformly for x = o(λ^{1/6}), with mean and variance satisfying the stated expansions. Here G_1, G_2 are bounded periodic functions.
For results related to moderate deviations, see [18]. We can derive more precise Fourier expansions for the periodic functions G 1 , G 2 when more information is available.

Sketches of proofs
Most of our analysis is based on the following explicit expression; cf. (13).
Proof. Observe that the crucial condition (46) implies the same recurrence as (24) for X_n. Consequently, following the same analysis there, we also have P(Z_n ∈ A | ζ_n = j) = P(Y_{λ_j} + r_j ∈ A) for any A ⊂ R, where r_j := µ(⌊n/2^{λ_j}⌋ − 1) ≤ λ − λ_j + 1 and r_0 := 0. By the same arguments used above, we see that the identity

E(h(Z_n)) − E(h(Y_λ)) = ∑_{2 ≤ j ≤ s} (2^{λ_j}/n) E[(λ_j − λ + r_j) g(Y_{λ_j} + r_j) + r_j g(Y_{λ_j} + r_j − 1)]

holds for any function h : R → R, where g is the solution to Stein's equation (37). We omit the details of the proofs, as they are almost identical to those for X_n.

Gray code
The Gray code is characterized by the property that the codings of any two successive integers differ by exactly one bit. It is named after Frank Gray's 1947 patent, although the same construction had been introduced in telegraphy in the late nineteenth century by the French engineer Émile Baudot; see Wikipedia's page on Gray code for more information. The idea of coding so that two neighboring objects differ in exactly one position has turned out to be extremely useful in many scientific disciplines beyond the original communication motivation, such as experimental design, job scheduling in computer systems, and combinatorial generation; see the survey paper [119] and the references therein. The binary reflected Gray code is constructed by reflecting (or mirroring) the codings of the first 2^k nonnegative integers and then prepending a 1 to each coding in the reflected half, resulting in the Gray code for the first 2^{k+1} nonnegative integers; see Figure 12 for an illustration.
Then, obviously, R_{2^k}(z) = (1 + z)^k, and, by (52), R_{2^k+j}(z) = R_{2^k}(z) + z(R_{2^k}(z) − R_{2^k−j}(z)) = R_{2^k}(z)(1 + z) − z R_{2^k−j}(z) = (1 + z)^{k+1} − z R_{2^k−j}(z). From this recurrence relation, we deduce by induction that R_{2n}(z) = (1 + z) R_n(z) for all n ≥ 1. Thus the sum-of-digits function Z_n of random integers under the Gray coding satisfies (46), and Theorems 4.1, 4.2 and 4.4 and Corollary 4.3 all hold. Except for the mean and the variance, all these results are new. The mean of Z_n was first studied by Flajolet and Ramshaw [48], where more precise characterizations of G_1 (including a Fourier series expansion) are given. A closed-form expression for E(y^{Z_n}) was derived by Kobayashi et al. [79] via singular measures, together with exact expressions for all (noncentered) moments.
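The key recurrence R_{2n}(z) = (1 + z)R_n(z) is easy to confirm computationally, using the standard formula g(m) = m ⊕ ⌊m/2⌋ for the binary-reflected Gray code (the helper names are ours):

```python
from collections import Counter

def gray_weight(m):
    """Number of 1s in the binary-reflected Gray code of m: g(m) = m XOR (m >> 1)."""
    return bin(m ^ (m >> 1)).count("1")

def R(n):
    """R_n(z) = sum_{m < n} z^{gray_weight(m)}, stored as exponent -> coefficient."""
    return Counter(gray_weight(m) for m in range(n))

def mult_by_1_plus_z(poly):
    """Multiply a polynomial (exponent -> coefficient map) by (1 + z)."""
    out = Counter(poly)
    for e, c in poly.items():
        out[e + 1] += c
    return out
```

For example, R(3) = 1 + z + z² and R(6) = 1 + 2z + 2z² + z³ = (1 + z)R(3).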
For other properties related to γ(n) and Z_n, see [36,66,76,77,79,80,113,115]. So far, we have considered only the goodness of approximations to L(X_n) and L(Z_n) by the binomial distribution of Y_λ. It is also natural to consider approximations of L(Z_n) by L(X_n), and the result is as follows.

Beyond binary and Gray codings
We give here another simple binary coding system for integers satisfying the condition (46). We start with the observation that the binary coding can be constructed not only in the usual translation way, but also using reflection and complement (first reflect the whole block of 2^k numbers as in the Gray code, and then change every 1 to 0 and every 0 to 1); see Figure 16. We now consider a coding system using translation and complement. Let µ(n) denote the number of 1s in the coding of n. Then, by construction, µ(2^k + j) = k + 1 − µ(j) for 0 ≤ j < 2^k and k ≥ 1. From this recurrence, we see that

∑_{0 ≤ ℓ < 2^k + j} y^{µ(ℓ)} = (1 + y)^k + y^{k+1} ∑_{0 ≤ ℓ < j} y^{−µ(ℓ)},

and it is straightforward to check that (46) holds for such a coding system. Thus Z_n satisfies all the properties stated at the beginning of this section.
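Both the recurrence µ(2^k + j) = k + 1 − µ(j) and the identities above lend themselves to a direct computational check (a sketch; the coding itself is represented only through the digit-count µ, and the helper names are ours):

```python
from collections import Counter
from math import comb

def mu(n):
    """Number of 1s in the translation-complement coding:
    mu(0) = 0 and mu(2^k + j) = k + 1 - mu(j) for 0 <= j < 2^k."""
    if n == 0:
        return 0
    k = n.bit_length() - 1
    return k + 1 - mu(n - (1 << k))

def R(n):
    """R_n(y) = sum_{l < n} y^{mu(l)}, stored as exponent -> coefficient."""
    return Counter(mu(l) for l in range(n))

def times_1_plus_y(poly):
    """Multiply a (Laurent) polynomial, given as exponent -> coeff, by (1 + y)."""
    out = Counter(poly)
    for e, c in poly.items():
        out[e + 1] += c
    return out

def block_identity_holds(k, j):
    """Check sum_{l < 2^k + j} y^{mu(l)} = (1 + y)^k + y^{k+1} sum_{l < j} y^{-mu(l)};
    the second sum contributes exponents k + 1 - mu(l)."""
    lhs = Counter(mu(l) for l in range(2 ** k + j))
    rhs = Counter({i: comb(k, i) for i in range(k + 1)})
    rhs.update(Counter(k + 1 - mu(l) for l in range(j)))
    return lhs == rhs
```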
To obtain other examples for which (46) holds, one may combine more block operations (such as translation, horizontal or vertical reflection, reversal, flip, etc.) and string operations (complement, reversal, cyclic rotation, rewriting, etc.). A simple example is the block translation or reflection followed by any cyclic rotation of each coding (which does not change the number of 1s). Such a coding scheme also satisfies (46).