A scaling law beyond Zipf's law and its relation to Heaps' law

The dependence on text length of the statistical properties of word occurrences has long been considered a severe limitation of quantitative linguistics. We propose a simple scaling form for the distribution of absolute word frequencies which uncovers the robustness of this distribution as text grows. Under this scaling, the shape of the distribution is always the same; only a scale parameter increases linearly with text length. By analyzing very long novels we show that this behavior holds both for raw, unlemmatized texts and for lemmatized texts. For the latter case, the word-frequency distribution is well fit by a double power law, maintaining Zipf's exponent value $\gamma\simeq 2$ for large frequencies but yielding a smaller exponent in the low-frequency regime. The growth of the distribution with text length allows us to estimate the size of the vocabulary at each step and to propose an alternative to Heaps' law, which turns out to be intimately connected to Zipf's law, thanks to the scaling behavior.


Introduction
Zipf's law is perhaps one of the best pieces of evidence for the existence of universal, physics-like laws in cognitive science and the social sciences. Classic examples where it applies include the population of cities, company income and the frequency of words in texts or speech [1]. In the latter case, the law is obtained directly by counting the number of repetitions, i.e. the absolute frequency n, of all words in a long enough text, and assigning increasing ranks, r = 1, 2, . . ., to decreasing frequencies. When a power-law relation n ∝ 1/r^β holds over a large enough range, with the exponent β more or less close to 1, Zipf's law is considered to be fulfilled (with ∝ denoting proportionality). An equivalent formulation of the law is obtained in terms of the probability distribution of the frequency n, such that it plays the role of a random variable, for which a power-law distribution D(n) ∝ 1/n^γ should hold, with γ = 1 + 1/β (taking values close to 2) and D(n) the probability mass function of n (or the probability density of n, in a continuous approximation) [2-6]. Note that this formulation implies performing double statistics (i.e. doing statistics twice): first counting words to get frequencies, and then counting repetitions of frequencies to get the distribution of frequencies.
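The "double statistics" just described can be sketched in a few lines of Python (a toy illustration with a made-up token list; a real analysis would of course tokenize a full book, and the function name is ours):

```python
from collections import Counter

def zipf_statistics(tokens):
    """Double statistics: first count types to get frequencies,
    then count frequencies to get their distribution D(n)."""
    freq = Counter(tokens)                        # absolute frequency n of each type
    ranked = sorted(freq.values(), reverse=True)  # n as a function of rank r = 1, 2, ...
    V = len(freq)                                 # vocabulary size (number of types)
    # D(n): fraction of types having absolute frequency n
    D = {n: m / V for n, m in Counter(freq.values()).items()}
    return ranked, D

# toy example text
tokens = "the cat saw the dog and the dog saw the cat again".split()
ranked, D = zipf_statistics(tokens)
```

Plotting `ranked` against r, or `D` against n, in log-log scale gives the two equivalent representations of the law discussed above.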
The criteria for the validity of Zipf's law are arguably rather vague (long enough text, large enough range, exponent β more or less close to 1). Generally, a long enough text means a book, a large range can be a bit more than an order of magnitude, and the proximity of the exponent β to 1 translates into an interval (0.7, 1.2), or even beyond [6-8]. Moreover, rigorous methods have usually not been required for fitting the power-law distribution. Linear regression in double-logarithmic scale is the most common method, either for n(r) or for D(n), despite the fact that this procedure is well known to suffer from severe drawbacks and can lead to flawed results [9,10]. Nevertheless, once these limitations are accepted, the fulfillment of Zipf's law in linguistics is astonishing: it holds no matter the author, style or language [1,6,7]. So the law is universal, at least in a qualitative sense.
At a theoretical level, many different competing explanations of Zipf's law have been proposed [6], such as random (monkey) typing [11,12], preferential repetitions or proportional growth [13-15], the principle of least effort [1,16-18], and, beyond linguistics, Boltzmann-type approaches [19] or even avalanche dynamics in a critical system [20]; most of these options have generated considerable controversy [21-23]. In any case, the power-law behavior is the hallmark of scale invariance, i.e. the impossibility of defining a characteristic scale, either for frequencies or for ranks. Although power laws are sometimes also referred to as scaling laws, we will make a more precise distinction here. In short, a scaling law is any function invariant under a scale transformation (which is a linear dilation or contraction of the axes). In one dimension the only scaling law is the power law, but this is no longer true with more than one variable [24]. Note that in text statistics, other variables to consider in addition to frequency are the text length L (the total number of words, or tokens) and the size of the vocabulary V_L (i.e. the number of different words, or types).
Somehow related to Zipf's law is Heaps' law (also called Herdan's law [25,26]), which states that the vocabulary V_L grows as a function of the text length L as a power law, V_L ∝ L^α, with the exponent α smaller than one. However, even simple log-log plots of V_L versus L do not show a convincing linear behavior [27], and therefore the evidence for this law is somewhat weak (for a notable exception see [5]). Nevertheless, a number of works have derived the relationship β = 1/α between Zipf's and Heaps' exponents [2,5,28], at least in the infinite-system limit [29,30], using different assumptions.
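The empirical vocabulary-growth curve that such log-log plots are based on is easy to compute; the following Python sketch (the helper name `vocabulary_growth` is ours) records V_L at every prefix length L of a token stream:

```python
def vocabulary_growth(tokens):
    """Empirical Heaps curve: vocabulary size V_L versus prefix length L."""
    seen = set()
    curve = []
    for L, token in enumerate(tokens, start=1):
        seen.add(token)                 # a new type enlarges the vocabulary
        curve.append((L, len(seen)))    # (text length, vocabulary size)
    return curve

# toy example; a real test of Heaps' law would use a full book
tokens = "the cat saw the dog and the dog saw the cat again".split()
curve = vocabulary_growth(tokens)
```

Plotting `curve` in double-logarithmic scale and checking for linearity is exactly the (weak) test of Heaps' law discussed above.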
Despite the relevance of Zipf's law, and its possible relations with criticality, few systematic studies of the dependence of the law on system size (i.e. text length) have been carried out. It was Zipf himself [1, p. 144] who first observed a variation in the exponent β when the system size was varied: 'small' samples would give β < 1, while 'big' ones yielded β > 1. However, this was attributed to 'undersampling' and 'oversampling', as Zipf believed that there was an optimum system size at which all words occurred in proportion to their theoretical frequencies, i.e. those given by the exponent β = 1. This increase of β with L has been confirmed later, see [25,31], leading to the conclusion that the practical usefulness of Zipf's law is rather limited [25].
More recently, using rather large collections of books by single authors, Bernhardsson et al [32] find a decrease of the exponents γ and α with text length, in correspondence with the increase in β found by Zipf and others. They propose a size-dependent word-frequency distribution based on three main assumptions: (i) the vocabulary scales with text length as V_L ∝ L^{α(L)}, where the exponent α(L) itself depends on the text length (note however that this is not an assumption in itself, just notation, and it is equivalent to writing the average frequency n̄ = L/V_L as n̄(L) ∝ L^{1−α(L)}); (ii) the maximum frequency is proportional to the text length, i.e. n_max = n(r = 1) ∝ L.
(iii) The functional form of the word-frequency distribution D_L(n) is that of a power law with an exponential tail, with both the scale parameter c(L) and the power-law exponent γ(L) depending on the text length L; that is,

D_L(n) ∝ e^{−n/c(L)} / n^{γ(L)}.

Taking c(L) = c_0 L guarantees that n_max ∝ L; moreover, the form of D_L(n) implies that, asymptotically, n̄(L) ∝ L^{2−γ(L)} [24], which, compared with assumption (i), leads to α(L) = γ(L) − 1, so 0 < α(L) < 1. This relationship between α and γ is in agreement with previous results if L is fixed [2,29,30]. It was claimed in [32] that α(L) decreases from 1 to 0 for increasing L, and therefore γ(L) decreases from 2 to 1. The resulting functional form is in fact the same one appearing in many critical phenomena, where the power-law term is limited by a characteristic value of the variable, c_0 L, arising from a deviation from criticality or from finite-size effects [24,33-35]. Note that this implies that the tail of the frequency distribution is not a power law but an exponential, and therefore the frequency of the most common words is not power-law distributed. This is in contrast with recent studies that have clearly established that the tail of D_L(n) is well modeled by a power law [9,36]. However, what is most uncommon about this functional form is that it has a 'critical' exponent that depends on system size; the values of exponents should not be influenced by external scales. So, here we look for an alternative picture, more in agreement with typical scaling phenomena.
Our proposal is that, although the word-frequency distribution D_L(n) changes with system size L, the shape of the distribution is independent of L and V_L, and only the scale of D_L(n) changes with these variables. This implies that the shape parameters of D_L(n) (in particular, any exponent) do not change with L; only one scale parameter does, increasing linearly with L. This is explained in section 2, while section 3 is devoted to the validation of our scaling form in real texts, using both plain words and their corresponding lemma forms; in the latter case an alternative to Zipf's law can be proposed, consisting of a double power-law distribution (a distribution with two power-law regimes with different exponents). Our findings for words and lemmas suggest that the previous observation that the exponent in Zipf's law depends on text length [25,31,32] might be an artifact of the increasing weight of a second regime in the distribution of frequencies beyond a certain text length. Section 4 investigates the implications of our scaling approach for Heaps' law. Although the scaling ansatz we propose has a counterpart in the rank-frequency representation, we prefer to illustrate it in terms of the distribution of frequencies, as this approach has been deemed more appropriate from a statistical point of view [36].

The scaling form of the word-frequency distribution
Let us come back to the rank-frequency relation, in which the absolute frequency n of each type is a function of its rank r. Defining the relative frequency as x ≡ n/L and inverting the relationship, we can write

r = G_L(x).

Note that here we are not assuming a power-law relationship between r and x, just a generic function G_L, which may depend on the text length L. Instead of the three assumptions introduced by Bernhardsson et al, we need just one, namely the independence of the function G_L with respect to L; so

r = G(x) = G(n/L).    (1)

This turns out to be a scaling law, with G(x) a scaling function. It means that if in the first 10 000 tokens of a book there are five types with relative frequency larger than or equal to 2%, that is, G(0.02) = 5, then this will still be true for the first 20 000 tokens, for the first 100 000 and for the whole book. These types need not be the same ones, although in some cases they might be. In fact, instead of assuming, as in [32], that the frequency of the most used type scales linearly with L, we assume that this is true for all types, at least on average. Notice that this is not a straightforward assumption: for instance, [5] considers instead that n is a (particular) function of r/V_L. Now let us introduce the survivor function, or complementary cumulative distribution function, S_L(n) of the absolute frequency, defined in a text of length L as S_L(n) = Prob[frequency ≥ n]. Note that, estimated from empirical data, S_L(n) is essentially the rank divided by the total number of ranks, V_L, i.e. S_L(n) = r/V_L. Therefore, using our ansatz for r we get

S_L(n) = G(n/L)/V_L.

Within a continuous approximation the probability mass function of n, D_L(n) = Prob[frequency = n], can be obtained from the derivative of S_L(n):

D_L(n) = g(n/L)/(L V_L),    (2)

where g is minus the derivative of G, i.e. g(x) = −G′(x).
If one does not trust the continuous approximation, one can write D_L(n) = S_L(n) − S_L(n + 1) and perform a Taylor expansion, for which the result is the same, but with g(x) ≃ −G′(x). In this way, we obtain simple forms for S_L(n) and D_L(n), which are analogous to standard scaling laws, except for the fact that we have not specified how V_L changes with L. If Heaps' law holds, V_L ∝ L^α, we recover a standard scaling law, D_L(n) = g(n/L)/L^{1+α}, which fulfills invariance under a scaling transformation, or, equivalently, fulfills the definition of a generalized homogeneous function [24,37],

D_{λ_L L}(λ_n n) = λ_D D_L(n),

where λ_L, λ_n and λ_D are the scale factors, related in this case through λ_n = λ_L ≡ λ and λ_D = 1/λ^{1+α}.

Table 1. Total text length and vocabulary before (L_tot, V_tot) and after (L_tot^(l), V_tot^(l)) the lemmatization process, for all the books considered (including also their author, language and publication year). The text length for lemmas is shorter than for words because for a number of word tokens the corresponding lemma type could not be determined, and those tokens were ignored.

However, in general (if Heaps' law does not hold), the distribution D_L(n) is still invariant under a scale transformation, but with a different relation for λ_D, which is

λ_D = V_L/(λ V_{λL}).

So, D_L(n) is not a generalized homogeneous function, but takes an even more general form.
In any case, the validity of the proposed scaling law, equation (1), can be checked by performing a very simple rescaled plot, displaying L V_L D_L(n) versus n/L. A resulting data collapse supports the independence of the scaling function with respect to L. This is undertaken in section 3.
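As an illustration, the rescaled coordinates of this collapse plot can be computed as follows (a Python sketch with a toy token list; `rescaled_points` is our own helper name):

```python
from collections import Counter

def rescaled_points(tokens):
    """Collapse coordinates (n/L, L * V_L * D_L(n)) for the scaling check."""
    L = len(tokens)
    freq = Counter(tokens)            # type -> absolute frequency n
    V_L = len(freq)
    mult = Counter(freq.values())     # n -> number of types with frequency n
    # D_L(n) = mult[n] / V_L, hence L * V_L * D_L(n) = L * mult[n]
    return sorted((n / L, L * V_L * (m / V_L)) for n, m in mult.items())

# if the scaling ansatz holds, points computed from different prefixes
# of the same book fall (approximately) on the same curve g(x)
tokens = "the cat saw the dog and the dog saw the cat again".split()
points = rescaled_points(tokens)
```

With real books one would compute `rescaled_points` for several prefix lengths L and overlay the results in log-log scale, as in figure 2.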

Data analysis results
To test the validity of our predictions, summarized in equation (2), we analyze a corpus of literary texts comprising seven large books in English, Spanish and French (among them, some of the longest novels ever written, in order to have as much statistics of homogeneous texts as possible). In addition to the statistics of the words in the texts, we consider the statistics of lemmas (roughly speaking, the stem form of a word; for instance, dog for dogs). In the lemmatized version of each text, each word is substituted by its corresponding lemma, and the statistics are collected in the same way as for word forms. Appendix A provides detailed information on the lemmatization procedure, and table 1 summarizes the most relevant characteristics of the analyzed books. First, we plot the distributions of word frequencies, D_L(n) versus n, for each book, considering either the whole book or the first L/L_tot fraction, where L_tot is the real, complete text length (i.e. if L = L_tot/2 we consider just the first half of the book; no average is performed over parts of size L). For a fixed book, we observe that different values of L lead to distributions with small but clear differences, see figure 1. The pattern described by Bernhardsson et al (equivalent to Zipf's findings for the change of the exponent β) seems to hold, as the absolute value of the slope in log-log scale (i.e. the apparent power-law exponent γ) decreases with increasing text length.
However, a scaling analysis reveals an alternative picture. As suggested by equation (2), plotting L V_L D_L(n) versus n/L collapses the distributions for different L onto a unique L-independent function for each book, which represents the scaling function g(x). Figure 2 shows this for the same books and parts of the books as in figure 1. The data collapse can be considered excellent, except for the smallest frequencies: for the largest L the collapse is valid down to n ≃ 3 if we exclude La Regenta, which only collapses for n ≳ 6. So, our scaling hypothesis is validated, independently of the particular shape that g(x) takes. Note that g(x) is independent of L but not of the book, i.e. each book has its own g(x), different from the rest. In any case, we observe a slightly convex shape in log-log space, which leads to the rejection of the power-law hypothesis for the whole range of frequencies. Nevertheless, the data do not single out any clear parametric functional form: a double power law, a stretched exponential, a Weibull or a lognormal tail could all be fit to the distributions. This is not incompatible with the fact that the large-n tail can be well fit by a power law (Zipf's law) for more than two orders of magnitude [36].

Things turn out to be somewhat different after the lemmatization process. The scaling ansatz is still clearly valid for the frequency distributions, see figure 3, but with a different kind of scaling function g(x), with a more defined characteristic shape, due to a more pronounced log-log curvature or convexity. In fact, close examination of the data leads us to conclude that the lemmatization process enhances the goodness of the scaling approximation, especially in the low-frequency zone. It could be reasoned that, as lemmatized texts have a significantly reduced vocabulary compared to the original ones while the total length remains essentially the same, they are somehow equivalent to much longer texts, if one considers the length-to-vocabulary ratio. Although this matter needs further investigation, it supports the idea that our main hypothesis, the scale invariance of the distribution of frequencies, holds more strongly for longer texts. Due to the clear curvature of g(x) in the lemmatized case, we go one step further and propose a concrete function to fit these data, namely

g(x) = k / [x (a + x^{γ−1})].    (3)

This function has two free parameters, a and γ (with γ > 1 and a > 0), and behaves as a double power law: for large x, g(x) ∼ x^{−γ} (we still have Zipf's law), while for small x, g(x) ∼ x^{−1}. The transition point between both power-law regimes is determined by a (more precisely, by a^{1/(γ−1)}), and k is fixed by normalization. An important issue, however, is that it is not g(x) which is normalized to one but D_L(n).
We select a power law with exponent one for small x for three reasons: firstly, in order to explore an alternative to the power law in the V_L versus L relation (which is not clearly supported by the data, see next section); secondly, to allow for a better comparison between our results and those of [32]; thirdly, to keep the number of parameters to a minimum. Thus, we do not look for the most accurate fit but for the simplest description of the data.
Then, defining n_a = a^{1/(γ−1)} L, the corresponding word-frequency density (or, more properly, lemma-frequency or type-frequency density) turns out to be

D_L(n) = (k L^{γ−1} / V_L) · 1 / [n (n_a^{γ−1} + n^{γ−1})],

with n_a the scale parameter (recall that the scale parameter of g(x) was a^{1/(γ−1)}). The data collapse in figure 3 and the good fit imply that the Zipf-like exponent γ does not depend on L, but the transition point between both power laws, n_a, obviously does. Hence, as L grows, the transition to the ∼n^{−γ} regime occurs at higher absolute frequencies, given by n_a, but at a fixed relative frequency, given by a^{1/(γ−1)}. In table 2 we report the fitted parameters for all seven books, obtained by maximum likelihood estimation (MLE) of the frequencies of the whole books, as well as Monte Carlo estimates of their uncertainties. We have confirmed the stability of γ by fitting only a power-law tail from a fixed common relative frequency, for different values of L [36].
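As a sanity check of the two limiting regimes, the following sketch evaluates a double power law of the form g(x) = k/[x(a + x^(γ−1))] — a form consistent with the limits stated above, though the exact parameterization is our assumption — and measures its local log-log slope on both sides of the crossover (parameter values are illustrative):

```python
import math

def g(x, a, gamma, k=1.0):
    """Double power law: ~x**-1 for small x, ~x**-gamma for large x,
    with the crossover at x = a**(1/(gamma-1))."""
    return k / (x * (a + x ** (gamma - 1.0)))

def local_slope(x, a, gamma, h=1e-6):
    """Local log-log slope of g at x, via a finite difference."""
    return (math.log(g(x * (1 + h), a, gamma)) -
            math.log(g(x, a, gamma))) / math.log(1 + h)

a, gamma = 1e-4, 2.0
x_a = a ** (1.0 / (gamma - 1.0))            # crossover relative frequency
low = local_slope(x_a / 1000, a, gamma)     # deep in the small-x regime
high = local_slope(x_a * 1000, a, gamma)    # deep in the large-x regime
```

Far below the crossover the slope approaches −1, far above it −γ, exactly the two power-law regimes described in the text.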
Regarding the low-frequency exponent, one could find a better fit if the exponent were not fixed to one; however, our data do not allow this value to be well constrained. A more important point is the influence of lemmatization errors on the characteristics of the low-frequency regime. Although the tools we use are rather accurate, rare words are likely to be assigned a wrong lemma. This limitation is intrinsic to current computational tools and has to be considered part of the lemmatization process. Nevertheless, the fact that the behavior at low frequencies is robust against a large variation in the percentage of lemmatization errors implies that our result is a genuine consequence of the lemmatization. See appendix A for more details.
Although double power laws have previously been fit to rank-frequency plots of unlemmatized multi-author corpora [27,38,39], the resulting exponents for large ranks (low frequencies) are different from the ones obtained for our lemmatized single-author texts. Note that [27] also proposed that the crossover between both power laws happens at a constant number of types, around 7900, independently of corpus size. This corresponds to r = 7900 and therefore, from equation (1), to a fixed relative frequency. This is certainly in agreement with our results, supporting the hypothesis that rank-frequency plots and frequency distributions are stable in terms of relative frequency.

An asymptotic approximation of Heaps' law
Coming back to our scaling ansatz, equation (2), the normalization of D_L(n) allows us to establish a relationship between the word-frequency distribution and the growth of the vocabulary with text length. In the continuous approximation,

1 = ∫_1^∞ D_L(n) dn = (1/V_L) ∫_{1/L}^∞ g(x) dx = G(1/L)/V_L,

where we have used the previous relation g(x) = −G′(x), and have additionally imposed G(∞) ≡ 0, for which it is necessary that g(x) decays faster than a power law with exponent one. So,

V_L = G(1/L).

This just means, compared to equation (1), that the number of types with relative frequency greater than or equal to 1/L is the vocabulary size V_L, as this is the largest rank for a text of length L. It is important to notice the difference between saying that G_L(1/L) = V_L, which is a trivial statement, and stating that G(1/L) = V_L, which provides a link between Zipf's and Heaps' laws, or, more generally, between the distribution of frequencies and the vocabulary growth, by approximating the latter by the former. The quality of such an approximation will depend, of course, on the goodness of the scale-invariance approximation. In the usual case of a power-law distribution of frequencies extending to the lowest values, g(x) ∝ 1/x^γ with γ > 1, then G(x) ∝ 1/x^{γ−1}, which turns into Heaps' law, V_L ∝ L^α, with α = γ − 1, in agreement with previous research [2,5,29,30,32]. However, this power-law growth of V_L with L is not what is observed in texts, in general. Due to the accurate fit that we can achieve for lemmatized texts, we can explicitly derive an asymptotic expression for V_L given our proposal for g(x). As we have just shown, g(x) is not normalized to one; rather, ∫_{1/L}^∞ g(x) dx = V_L. Hence, substituting g(x) from equation (3) and integrating,

V_L = (k / (a(γ−1))) ln(a L^{γ−1} + 1).    (6)
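This logarithmic expression can be verified numerically against the defining integral ∫_{1/L}^∞ g(x) dx; the sketch below uses the double power-law form assumed above, with illustrative parameter values and a truncated trapezoidal quadrature on a logarithmic grid:

```python
import math

def g(x, a, gamma, k=1.0):
    # double power-law scaling function (our assumed parameterization)
    return k / (x * (a + x ** (gamma - 1.0)))

def numeric_V(a, gamma, L, k=1.0, steps=20000, x_max=50.0):
    """Trapezoidal estimate of V_L = integral of g(x) from 1/L to infinity,
    on a logarithmic grid truncated at x_max."""
    lo, hi = math.log(1.0 / L), math.log(x_max)
    xs = [math.exp(lo + (hi - lo) * i / steps) for i in range(steps + 1)]
    fs = [g(x, a, gamma, k) for x in xs]
    return sum(0.5 * (fs[i] + fs[i + 1]) * (xs[i + 1] - xs[i])
               for i in range(steps))

a, gamma, L = 1e-3, 2.0, 10 ** 5      # illustrative values
closed = math.log(a * L ** (gamma - 1.0) + 1.0) / (a * (gamma - 1.0))
approx = numeric_V(a, gamma, L)
```

The quadrature reproduces the closed logarithmic form to well within a tenth of a percent for these parameters.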
In this case V_L is not a power law, and behaves asymptotically as ∝ ln L. This is a direct consequence of our choice of the exponent 1 for the left tail of g(x). Indeed, it seems clear that the vocabulary growth curve greatly deviates from a straight line in log-log space, for it displays a prominent convexity, see figure 4 as an example. Nevertheless, the result from equation (6) is not a good fit either, due to a wrong proportionality constant, which is caused by the continuous approximation used to derive equation (6).
For an accurate calculation of V_L we must treat our variables as discrete and compute discrete sums rather than integrals. In the exact, discrete treatment of D_L(n), equation (6) must be rewritten as

V_L = V_{L_tot} S_{L_tot}(n′),

where we have used the fact that S_{L_tot}(n′) = Σ_{n ≥ n′} D_{L_tot}(n), with n′ = L_tot/L (notice that in the discrete case, g(x) ≠ −G′(x)). This is consistent with the fact that the maximum likelihood parameters γ and a have been computed assuming a discrete probability function (see appendix B), and so has the normalization constant. We would like to stress that no fit is performed in figure 4; that is, the constant k in g(x) is directly derived from the normalizing constant of D_L(n), and depends only on γ and a.
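In code, the discrete prediction amounts to evaluating the empirical survivor function of the whole text at n′ = L_tot/L (a toy Python sketch; with a text this short the scaling ansatz itself cannot be expected to hold, so the numbers only illustrate the mechanics):

```python
from collections import Counter

def predict_vocabulary(tokens, fraction):
    """Predict V_L for the first fraction*L_tot tokens from the frequency
    statistics of the whole text: V_L = V_tot * S_tot(L_tot / L)."""
    L_tot = len(tokens)
    freqs = list(Counter(tokens).values())
    V_tot = len(freqs)
    n_prime = L_tot / (fraction * L_tot)   # = 1 / fraction
    # discrete survivor function: fraction of types with frequency >= n_prime
    S = sum(1 for n in freqs if n >= n_prime) / V_tot
    return V_tot * S

tokens = "the cat saw the dog and the dog saw the cat again".split()
full = predict_vocabulary(tokens, 1.0)   # whole text: recovers V_tot
half = predict_vocabulary(tokens, 0.5)   # predicted vocabulary of the first half
```

For a real book, comparing `predict_vocabulary` across many fractions with the directly measured prefix vocabularies reproduces the comparison shown in figure 4.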

Conclusions
In summary, we have shown that, contrary to claims in previous research [25,31,32], Zipf's law in linguistics is extraordinarily stable under changes in the size of the analyzed text. A scaling function g(x) provides a constant shape for the distribution of frequencies of each text, D_L(n), no matter its length L, which only enters the distribution through a scale parameter and determines the size of the vocabulary V_L. The apparent size-dependent exponent found previously seems to be an artifact of the slight convexity of g(x) in a log-log plot, which is more clearly observed for very small values of x, accessible only for the largest text lengths. Moreover, we find that in the case of lemmatized texts the distribution can be well described by a double power law, with a large-frequency exponent γ that does not depend on L and a transition point n_a that scales linearly with L. The small-frequency exponent is different from the ones reported in [27,38] for non-lemmatized corpora. Further, the stability of the shape of the frequency distribution allows one to predict the growth of vocabulary size with text length, resulting in a generalization of the popular Heaps' law. The robustness of Zipf-like parameters under changes in system size opens the way to more practical applications of word statistics. In particular, we provide a consistent way to compare statistical properties of texts of different lengths [40]. Another interesting issue would be the application of the same scaling methods to other fields in which Zipf's law has been proposed to hold, such as economics and demography.

Appendix B. Maximum likelihood fitting
The fitted values of table 2 have been obtained by MLE. This well-known procedure consists firstly in computing the log-likelihood function L, which in our case reads

L = Σ_{i=1}^{V_L} ln D_L(n_i) = −Σ_{i=1}^{V_L} ln[n_i (b + n_i^{γ−1})] − V_L ln K,

with n_i the V_L values of the frequency, and the normalization constant K in the discrete case equal to

K = Σ_{n=1}^∞ 1/[n (b + n^{γ−1})].

Note that we have reparameterized the distribution compared to the main text, introducing b = n_a^{γ−1} = a L^{γ−1}. Then, L is maximized with respect to the parameters γ and b; this has been done numerically using the simplex method [42]. The error terms σ_γ and σ_b, representing the standard deviation of each estimator, are computed from Monte Carlo simulations: from the resulting maximum-likelihood parameters γ* and b*, synthetic data samples are simulated, and the MLE parameters of these samples are calculated in the same way; their fluctuations yield σ_γ and σ_b. We stress that no continuous approximation has been made; that is, the simulated data follow the discrete probability function D_L(n) (this is done using the rejection method, see [36,43] for details of a similar case). In a summarized recipe, the procedure simply is:
1. numerically compute the MLE parameters, γ* and b*;
2. draw M datasets, each of size V_L, from the discrete probability function D_L(n; γ*, b*);
3. for each dataset m = 1, . . ., M, compute the MLE parameters γ_m, b_m;
4. compute the standard deviations σ_γ and σ_b of the sets {γ_m}_{m=1}^M and {b_m}_{m=1}^M.
The standard deviations of n_a and a are computed in the same way, using their relationship to b and γ.
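The recipe above can be sketched in pure Python. This toy version simplifies the actual procedure in several ways that we flag explicitly: b is held fixed instead of being fit jointly, the simplex maximization is replaced by a grid search over γ, sampling uses inversion rather than the rejection method, and the infinite normalization sum is truncated at N_MAX; all parameter values are illustrative.

```python
import bisect
import math
import random

N_MAX = 5000   # truncation of the infinite normalization sum (an approximation)

def weights(gamma, b):
    """Unnormalized discrete probabilities 1/(n*(b + n**(gamma-1)))."""
    return [1.0 / (n * (b + n ** (gamma - 1.0))) for n in range(1, N_MAX + 1)]

def log_likelihood(ns, gamma, b):
    w = weights(gamma, b)
    K = sum(w)                                   # normalization constant
    return sum(math.log(w[n - 1] / K) for n in ns)

def fit_gamma(ns, b, grid):
    """Grid-search MLE for gamma (b held fixed for simplicity)."""
    return max(grid, key=lambda gam: log_likelihood(ns, gam, b))

def sample(gamma, b, size, rng):
    """Draw synthetic frequencies from the discrete distribution (inversion)."""
    cdf, acc = [], 0.0
    for wn in weights(gamma, b):
        acc += wn
        cdf.append(acc)
    return [bisect.bisect_left(cdf, rng.random() * cdf[-1]) + 1
            for _ in range(size)]

rng = random.Random(0)
grid = [1.6 + 0.05 * i for i in range(17)]       # gamma candidates in [1.6, 2.4]
data = sample(2.0, 50.0, 2000, rng)              # "observed" frequencies
gamma_star = fit_gamma(data, 50.0, grid)         # step 1: MLE on the data
# steps 2-4: Monte Carlo error bar from refitted synthetic samples
refits = [fit_gamma(sample(gamma_star, 50.0, 2000, rng), 50.0, grid)
          for _ in range(5)]
mean = sum(refits) / len(refits)
sigma_gamma = (sum((g - mean) ** 2 for g in refits) / len(refits)) ** 0.5
```

With a few thousand samples the grid-search MLE recovers the generating exponent, and the spread of the refitted values gives the Monte Carlo error bar in the spirit of σ_γ above.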