Pair Correlation of the Fractional Parts of αn^θ

Fix α, θ > 0, and consider the sequence (αn^θ mod 1)_{n≥1}. Since the seminal work of Rudnick–Sarnak (1998), and due to the Berry–Tabor conjecture in quantum chaos, the fine-scale properties of these dilated monomial sequences have been intensively studied. In this paper we show that for θ < 14/41 and α > 0, the pair correlation function is Poissonian. While (for a given θ ≠ 1) this strong pseudo-randomness property has been proven for almost all values of α, there are next to no instances where it has been proven for explicit α. Our result holds for all α > 0 and relies solely on classical Fourier analytic techniques. This addresses (in the sharpest possible way) a problem posed by Aistleitner–El-Baz–Munsch (2021).


Introduction
Let x = (x_n)_{n≥1} be a sequence on the unit interval [0, 1). The pair correlation function of x measures the correlation between points in the initial segment {x_n : n ≤ N} on the scale of the mean spacing, 1/N, and is defined by

  R_2(f, N) := (1/N) Σ_{1≤m≠n≤N} Σ_{k∈Z} f(N(x_m − x_n + k)),   (1.1)

where f ∈ C_c^∞(R) is a compactly supported, C^∞-function. The sequence x is said to have Poissonian pair correlation if the pair correlation function converges to the integral of f (over R) as N → ∞, just as one would expect for uniformly distributed and independent random variables. That is, the sequence x has Poissonian pair correlation if for all f ∈ C_c^∞(R)

  lim_{N→∞} R_2(f, N) = ∫_R f(x) dx.   (1.2)

The notion of Poissonian pair correlation defines a strong measure of pseudo-randomness and is a basic concept in quantum chaos. Unsurprisingly, various efforts have been made [RS98, BZ00, RZ02, MS03, HB10, ALL17] to study the pair correlation function of monomial sequences

  (αn^θ mod 1)_{n≥1},   (1.3)

where θ > 0 and α > 0. However, little progress has been made to verify that the pair correlation of such monomial sequences is Poissonian (under explicit conditions on α, θ). We present the state of the art for (1.3) in Section 1.1. In this paper we prove the first general and explicit result showing that such monomial sequences exhibit Poissonian pair correlation. Namely,

Theorem 1. If θ ∈ (0, 14/41) and α > 0, then (1.3) has Poissonian pair correlation.
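For intuition, the definition (1.1)–(1.2) can be probed numerically. The sketch below uses illustrative choices (α = √2, θ = 0.3, and a tent test function), none of which come from the paper; for a Poissonian sequence the computed value should be close to ∫_R f = 1.

```python
import numpy as np

# Numerical sketch of the pair correlation function (1.1) for (alpha * n^theta mod 1).
# alpha, theta, N, and the tent test function are illustrative choices only.
def pair_correlation(alpha, theta, N, f):
    x = np.mod(alpha * np.arange(1, N + 1) ** theta, 1.0)
    d = x[:, None] - x[None, :]
    # sum f(N(x_m - x_n + k)) over k = -1, 0, 1; further k contribute nothing
    # since f is supported in [-1, 1] and |x_m - x_n| < 1
    total = sum(f(N * (d + k)).sum() for k in (-1, 0, 1))
    total -= f(0.0) * N  # remove the diagonal terms m = n
    return total / N

tent = lambda t: np.maximum(0.0, 1.0 - np.abs(t))  # integral over R equals 1

R = pair_correlation(np.sqrt(2), 0.3, 2000, tent)  # Poissonian limit: 1
```

With θ below the threshold of Theorem 1, the computed value approaches 1 as N grows, although convergence at moderate N is only approximate.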

Remark.
1. Our method applies to higher-level correlations, although this generalisation is not straightforward, as it requires a genuinely multidimensional approach (see [LT21], and also [LT22]). Moreover, the only arithmetic input of our method is exponential sum bounds. Thus, with some modification, the method can be extended to more general sequences satisfying certain growth conditions.
2. The method of proof allows one to show that the pair correlation function converges to ∫_R f(x) dx with a polynomially decaying error in N which is uniform for all α in a fixed compact interval.
3. When θ = 1/3 and α^3 ∈ Q, then the triple correlation is not Poissonian (because of the cubes n^3). Thus, Theorem 1 gives an example of a sequence whose pair correlation is Poissonian, but whose triple correlation is not.
Organization of the Paper: Subsection 1.1 presents a brief history of these monomial sequences. Subsection 1.2 sketches the proof of Theorem 1, and Subsection 1.3 provides a heuristic argument which indicates the limitations of our method. In Section 2 we collect lemmata, reducing matters to bounding certain exponential sums. Finally, in Section 3 we prove Theorem 1.

Background
The study of monomial sequences dates back to Weyl [Wey16] who used them in his study of uniform distribution (see [KN74] or [DT06]).More recently, there has been renewed interest in these sequences.
In part, this is due to the well-known Berry–Tabor conjecture [BT77], which hypothesizes a link between the pseudo-randomness properties of energy levels and the dynamics of quantum systems. For more details, see either of the review papers [Mar00, Rud08].
The holy grail of this field is to find circumstances under which a sequence has Poissonian gap statistics. That is, consider the distribution of the gaps between neighboring points among the first N elements of the sequence, scaled to have average 1; we say the sequence exhibits Poissonian gap statistics if this (finite) distribution converges to the exponential distribution as N → ∞, as one would expect for independent random variables. While the aforementioned behavior is conjectured in many instances, it is truly challenging to prove. Thus, mathematicians have turned to weaker measures of pseudo-randomness. In particular, there has been a lot of recent work on the pair correlation. Indeed, if one could show that the m-level correlation converges to the expected value for independent random variables (for every m ≥ 2), then, by the method of moments, one could infer that the sequence has Poissonian gap statistics.
If we consider the random variable counting the number of sequence elements in a randomly shifted set of size comparable to 1/N , then the m-level correlations arise from the moments of this variable.Thus, the m-level correlations are natural measures of pseudo-randomness in their own right.We refer to [Mar07] for further discussion.

Pair correlation of deterministic sequences
The few deterministic sequences whose pair correlation functions are known to be Poissonian either require the presence of particularly strong arithmetic structure, or rely on tools from homogeneous dynamics. An example of the former is the work of Kurlberg and Rudnick [KR99] on the (appropriately normalised) spacings of the quadratic residues of a highly composite modulus. In fact, they show that the gap statistics are Poissonian. However, this setting requires the use of arithmetic tools which cannot be relied on in our situation.
On the homogeneous dynamics side, Elkies and McMullen [EM04] established a remarkable link between (1.3), for (θ, α) = (1/2, 1), and flows on the modular surface SL_2(R)/SL_2(Z). They used this connection, and tools from homogeneous dynamics, to establish that the corresponding gap distribution is not Poissonian. Surprisingly, El-Baz, Marklof, and Vinogradov [EBMV15] then exploited said relationship further to show that, if one removes the perfect squares, the pair correlation is Poissonian. However, the connection to homogeneous dynamics requires a particular scaling property which only holds when θ = 1/2 and α^2 ∈ Q. Indeed, for α^2 ∉ Q it is conjectured [EM04] that the gap statistics are Poissonian.
For the sequence (αn mod 1)_{n≥1}, the three gap theorem (also known as the Steinhaus conjecture) states that the gaps between neighboring points, at any time N, form a set of cardinality at most 3. Hence, the local statistics are certainly not Poissonian. For background see [MS17, MK98].
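The three gap phenomenon is easy to observe numerically; in the sketch below, α and N are arbitrary illustrative choices.

```python
import numpy as np

# Illustration of the three gap theorem: for (alpha * n mod 1), n <= N,
# the circular gaps between neighboring points take at most three
# distinct values.  alpha and N are arbitrary illustrative choices.
def distinct_gap_count(alpha, N, tol=1e-9):
    pts = np.sort(np.mod(alpha * np.arange(1, N + 1), 1.0))
    gaps = np.sort(np.diff(np.concatenate([pts, [pts[0] + 1.0]])))  # circular gaps
    # count gap values that are distinct beyond floating point noise
    return 1 + int(np.sum(np.diff(gaps) > tol))

count = distinct_gap_count(np.sqrt(2), 1000)  # at most 3 by the theorem
```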

Metric Poisson Pair Correlation
Generally speaking, it is believed that, given θ > 0, the pseudo-random properties of (1.3) are determined by the Diophantine properties of α (see e.g. [RS98, Remark 1.2]). However, in the absence of methods to prove Poissonian pair correlation for explicit values of α, Rudnick and Sarnak [RS98] introduced the concept of metric Poisson pair correlation. Namely, a general sequence (x_n)_{n≥1} has metric Poisson pair correlation if the dilated sequence (αx_n mod 1)_{n≥1} has Poissonian pair correlation for all α > 0 outside of a (Lebesgue) null set.
Special attention has been given to the quadratic case, θ = 2, due to its connection with quantum chaos and the boxed harmonic oscillator. Here, Heath-Brown [HB10] gave an algorithmic construction of a dense set of α for which the pair correlation is Poissonian. Moreover, there have been some results for longer-range correlations [TW20, Lut22], convergence along sparse subsequences [RSZ01, FKZ21], and minimal gaps [Zah95, Reg21, Rud18]. However, finding explicit α for which the pair correlation is Poissonian remains out of reach.
Finally, it is worth noting that the metric Poisson pair correlation theory has been generalized beyond monomial sequences and exploits some deep connections to additive combinatorics [ALL17, BW20]. However, this connection is beyond the scope of this paper.
Notation: Throughout, we use the usual Bachmann–Landau notation: for functions f, g : X → R, defined on some set X, we write f ≪ g (or f = O(g)) to denote that there exists a constant C > 0 such that |f(x)| ≤ C|g(x)| for all x ∈ X. Throughout, we denote e(x) := e^{2πix}, and f̂ is the Fourier transform (on R) of f. All of the sums which appear range over the integers in the indicated interval. As α, ε, θ, and f are considered fixed, we suppress any dependence on them in the implied constants. Moreover, for ease of notation, ε > 0 may vary from line to line by a bounded constant. Further, we will frequently encounter the exponent Θ, which depends only on θ.

Idea of the Proof
The proof relies on a well-known Fourier decomposition. First, we include the diagonal term in the pair correlation function, to simplify technicalities. Thus, define

  R_2^♦(f, N) := (1/N) Σ_{1≤m,n≤N} Σ_{k∈Z} f(N(αm^θ − αn^θ + k)).   (1.4)

Note that Theorem 1 is equivalent to showing that (as N → ∞)

  R_2^♦(f, N) = ∫_R f(x) dx + f(0) + o(1).   (1.5)

By the Poisson summation formula,

  R_2^♦(f, N) = (1/N²) Σ_{|k| ≤ N^{1+ε}} f̂(k/N) |Σ_{1≤n≤N} e(αkn^θ)|² + o(1)   (1.6)

for ε > 0, where the o(1)-error comes from the fast decay of f̂. Note that f can be decomposed into a sum of an even and an odd function. Further, the Fourier coefficients of the odd part cancel out, and the Fourier coefficients of the even part are even functions themselves. Thus, without loss of generality we may assume f is even. Since the k = 0 term in (1.6) contributes f̂(0) = ∫_R f(x) dx, it suffices to show that

  E(N) := (2/N²) Σ_{1≤k≤N^{1+ε}} f̂(k/N) |Σ_{1≤n≤N} e(αkn^θ)|² = f(0) + o(1).   (1.7)

To achieve the desired bound requires a detailed analysis of the exponential sums in (1.7). We argue in, roughly, two steps: first we decompose the innermost summation, and apply van der Corput's B-process to obtain a saving in the y-summation. Second, we expand the square and use some analytic tricks to reduce the estimates to exponential sums over k. Then, to obtain a saving in the k-summation, we again use the B-process, coupled with other estimates (such as Weyl differencing).
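The passage from the physical side (1.4) to the Fourier side (1.6) is an exact application of Poisson summation and can be verified numerically. Below, a Gaussian test function (equal to its own Fourier transform) is used; the parameters N, α, θ are illustrative choices only.

```python
import numpy as np

# Check: the diagonal-included pair correlation (1.4) equals its Fourier-side
# expression (1.6), using a Gaussian f whose Fourier transform is f itself.
N, alpha, theta = 100, np.sqrt(2), 0.3  # illustrative parameters
x = np.mod(alpha * np.arange(1, N + 1) ** theta, 1.0)
f = lambda t: np.exp(-np.pi * t ** 2)  # Fourier transform of f is f

# Physical side: (1/N) sum_{m,n} sum_k f(N(x_m - x_n + k)); |k| >= 2 is negligible
d = x[:, None] - x[None, :]
phys = sum(f(N * (d + k)).sum() for k in (-1, 0, 1)) / N

# Fourier side: (1/N^2) sum_k f(k/N) |sum_n e(k x_n)|^2, truncated at |k| <= 5N
ks = np.arange(-5 * N, 5 * N + 1)
weyl = np.exp(2j * np.pi * np.outer(ks, x)).sum(axis=1)
four = (f(ks / N) * np.abs(weyl) ** 2).sum() / N ** 2
```

The two quantities agree up to the (super-exponentially small) truncation errors.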

Heuristic
After applying the B-process, interchanging the order of summation, extracting the main terms, and dealing with the error terms, our task is the following. We need to show that a certain error term Err, given by a weighted sum of exponential sums with phases of the shape γ(r)k^Θ, is o(1); here the constants β and c entering the definition of γ depend only on θ and α. Now we apply partial summation to reduce matters to estimating the unweighted sums Σ_k e(γ(r)k^Θ). If we had square-root cancellation for this sum, uniformly in γ(r), then our method would yield Err ≈ N^{θ−1/2+ε}. In other words, even with optimal bounds, we cannot hope to go past the barrier θ = 1/2 − ε.
To move past this barrier, our analytic method would require taking advantage of the cancellation between exponential sums for different values of r.This seems to be well beyond current technology.
Interestingly, if we consider instead the triple correlation function, the natural barrier to our methods turns out to be θ < 1/3. In fact, as we consider higher and higher correlations, that barrier tends to 0.
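The square-root cancellation heuristic is easy to observe numerically; in the sketch below, γ and Θ are arbitrary illustrative choices, not the specific values of γ(r) and the exponent arising in the proof.

```python
import numpy as np

# Numerical look at square-root cancellation: the exponential sum below is
# far smaller than the trivial bound N.  gamma and Theta are illustrative
# choices, unrelated to the quantities appearing in the proof.
N = 10_000
gamma, Theta = np.sqrt(2), 1.5
n = np.arange(1, N + 1)
S = np.exp(2j * np.pi * gamma * n ** Theta).sum()
# square-root cancellation would mean |S| is of size roughly sqrt(N) = 100
```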

Preliminaries
The following two results are fundamental in the modern study of exponential sums. First, we recall an application of Weyl's differencing method (called the A-Process, see [GK91, Theorem 2.9]):

Theorem 2 (A-Process). Let l ≥ 0 be an integer, and suppose the phase function satisfies the derivative condition (1.8) on an interval contained in [N, CN], where C > 1 is some fixed constant and F > 0. Then the exponential sum bound (1.9) holds, where L := 2^l. The implicit constant in (1.9) depends on the choice of l and the implicit constant(s) in (1.8).
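For the reader's convenience, one common shape of this derivative-test estimate is recorded below; this is a paraphrase, and [GK91, Theorem 2.9] should be consulted for the precise hypotheses and any secondary terms.

```latex
% A-process / (l+2)-nd derivative test, paraphrased from [GK91, Theorem 2.9].
% Hypothesis (cf. (1.8)): on an interval I \subset [N, CN],
%   \phi^{(l+2)}(x) \asymp F N^{-(l+2)}.
% Conclusion (cf. (1.9)), with L := 2^l:
\[
  \sum_{n \in I} e\bigl(\phi(n)\bigr)
  \;\ll\; F^{\frac{1}{4L-2}} \, N^{1-\frac{l+2}{4L-2}} \;+\; F^{-\frac{1}{2L}} N .
\]
```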
Further, we will use van der Corput's B-process (Theorem 3), which follows from Poisson summation and a stationary phase argument (see [IK04, Theorem 8.16]); in its statement, x_m denotes the unique solution to φ′(x) = m, and the implied constant is absolute.
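For orientation, the B-process takes roughly the following shape; this is a paraphrase of [IK04, Theorem 8.16], which requires additional regularity hypotheses on the lower-order derivatives of φ.

```latex
% B-process (van der Corput's transformation), paraphrased from [IK04, Thm 8.16].
% Assume \phi'' \asymp \Lambda > 0 on [a, b].  Then
\[
  \sum_{a \le y \le b} e\bigl(\phi(y)\bigr)
  \;=\; \sum_{\phi'(a) \le m \le \phi'(b)}
        \frac{e\bigl(\phi(x_m) - m x_m + \tfrac18\bigr)}{\sqrt{\phi''(x_m)}}
  \;+\; O\!\bigl(\Lambda^{-1/2} + \log(\phi'(b) - \phi'(a) + 2)\bigr),
\]
% where x_m is the unique solution of \phi'(x) = m.
```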
We will often need to bound weighted exponential sums. To reduce these estimates to bounding unweighted sums, we use partial summation in the following form:

Lemma 4. Let (a_s)_s and (b_s)_s be sequences of complex numbers, and fix a constant c > 1. Then the weighted sum Σ a_s b_s is bounded in terms of the maximal unweighted partial sum, for any positive integers S and S′ satisfying S ≤ S′ ≤ cS.
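One plausible shape of Lemma 4, via Abel summation, is the following bound of a weighted sum by the maximal unweighted partial sum times the endpoint value plus total variation of the weights:

```latex
% Partial (Abel) summation in inequality form.
\[
  \Bigl| \sum_{S \le s \le S'} a_s b_s \Bigr|
  \;\le\;
  \Bigl( |b_{S'}| + \sum_{S \le s < S'} |b_{s+1} - b_s| \Bigr)
  \max_{S \le t \le S'} \Bigl| \sum_{S \le s \le t} a_s \Bigr|,
  \qquad S \le S' \le cS .
\]
```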
2 Reducing to Exponential Sum Bounds

Decomposing the sums and applying the B-Process
Now consider the term E(N), defined in (1.7). We shall apply the B-process (Theorem 3) to the exponential sum in E(N), but at this stage we do not have sufficient control on the derivative of y ↦ αky^θ.
To gain control, we use several decompositions. First, we assume (w.l.o.g.) that N = N_Q := Q^Γ for some fixed Γ > 0, which will be chosen sufficiently large in our proof (in a way depending on θ); this loses no generality, as it is enough to prove that the correlations converge along such a subsequence, see [RT20, Lemma 3.1]. Thus, we decompose the inner summation into pieces E_q(k), each a sum of e(αky^θ) over a range determined by the cut-offs N_q := q^Γ. To catch the largest term, we set N_{Q+1} = N_Q + 1. This yields the decomposition (2.1). Now, the next lemma shows that we can replace each E_q(k) by the expression (2.2), which is the main term after applying the B-process to E_q(k); the constants c_1 and β, which depend only on α and θ, are defined alongside (2.2). For later reference, the range R(k) and the resulting approximation E^(B)(N, f) are recorded in (2.3).

Lemma 5. Let E(N) and E^(B)(N, f) be defined as in (2.1) and (2.3) respectively. Then the two differ by an admissible error.

Proof. First we apply Theorem 3 to each E_q(k), q ≤ Q, with Λ = kαθ(1 − θ)N_{q+1}^{θ−2} and a parameter η depending only on Γ. Hence we obtain the main term (2.2), with the implied constant being uniform in q and k. Taking Γ > 1/ε ensures that in the resulting error term N^{1/Γ} can be replaced by N^ε. Squaring out and applying the Cauchy–Schwarz inequality now yields the lemma.

The Diagonal
Presently our goal is to establish (1.7). In view of Lemma 5, it suffices to estimate E^(B)(N, f), for in that case (1.7) is true and Theorem 1 will follow. The estimates in the second case are computed in a completely analogous way and, therefore, we give detailed proofs only in the first case. The main term, f(0), will come from the diagonal term when expanding the square in E^(B)(N, f). That is, in (2.3) we square out and consider the diagonal term; the resulting asymptotic is recorded in (2.4) (Lemma 6).

Proof. By a Riemann integral argument (see for example [Apo76, Theorem 3.2]), the diagonal sum is approximated by the corresponding integral. Now (2.4) follows by the Poisson summation formula and the fact that f is an even function. The second statement of the lemma follows as above, only this time we employ the rapid decay of f̂ to give an upper bound for D(N, |f̂|).

Partial Summation
Lemma 5 and Lemma 6 reduce the problem to estimating E^(B)(N, g) − D(N, g) for g = f̂ and |f̂|. To estimate these errors requires a second application of the B-process, this time in the k-variable. In order to have adequate control on the derivative of y ↦ γ(r)y^Θ in the next section, we introduce a second decomposition. In particular, let U ∈ N be such that e^U ≤ N^{1+ε} < e^{U+1}. Then we may decompose the sum over k into sums over the intervals [e^u, e^{u+1}), the last one being [e^U, N^{1+ε}). The next lemma reduces matters further to bounding unweighted exponential sums, the bound being required to be uniform in the size of γ and in the interval I. Therefore, we introduce the supremum of the relevant exponential sums over the admissible values of γ and subintervals I. With this maximal operator at hand, we have the following lemma, in which 1_{P} is 1 if the property P is satisfied and 0 otherwise.

Proof. For brevity, in this proof, we introduce some shorthand notation.
We also consider only the case g = f̂, since the second case follows by repeating the same arguments.
With this notation, the r_i which appear in the overall sum all fall within the ranges (2.6). Now we interchange the r and k summations. For each choice of r_1 and r_2, the k-summation runs over an interval K_q(r); note that this interval may sometimes be empty. Next, we remove the weights via partial summation, Lemma 4. The bound (2.7) holds for any k ∈ K_q(r), with the implied constant being absolute; it follows from the mean value theorem together with k ≪ N^{1+ε}. Note that K_q(r) ⊂ [e^u, e^{u+1}). Thus Lemma 4 is applicable and, in combination with (2.7), reduces matters to the unweighted sums Σ_{k∈K_q(r)} e(γ(r)k^Θ).
Note that each r_i ∈ R_{q_i,u} satisfies r_i ≍ e^u N_{q_i}^{−(1−θ)} for i = 1, 2. To reduce matters to exponential sums requires control of the difference entering γ(r); thus let T(j) denote the set of pairs (r_1, r_2) ∈ R_{q_1,u} × R_{q_2,u} satisfying e^j ≤ |r_2 − r_1| < e^{j+1}. Recall q_1 ≤ q_2. From now on we assume r_2 < r_1, since the case r_2 > r_1 can be handled in exactly the same way and therefore shall not be discussed in detail. Observe also that q_1 < q_2 and r_1 = r_2 implies that K_q(r) = ∅, i.e. this case gives zero contribution. Assume now q_1 < q_2. Then, comparing upper and lower bounds for the two ranges, we deduce a two-sided bound on γ(r) for r ∈ T(j). Moreover, the range of j can be constrained, which implies that j ∈ J_{u,q_1}; this yields a bound on the off-diagonal contribution. Overall we infer, for q_1 < q_2, the desired estimate. If q_1 = q_2, we remove the diagonal r_1 = r_2, which corresponds to the term D_{u,q_1}(N, f), and apply exactly the same bound to the off-diagonal terms.
3 Proof of the Main Theorem

Exponential Sum Bounds
Thanks to Lemmas 5, 6, and 7, we will show that Theorem 1 can be deduced from the next lemma.
In fact, we require slightly more than the B-process to move past θ = 1/3. Thus we will apply the B-process precisely, and then use partial summation and the A-process to bound the resulting sum. First, set ϑ = 1/(1 − Θ) and apply Theorem 3 to conclude (3.2), where a < b are positive integers of size γe^{u(Θ−1)} and c_3, c_4 are (complex, respectively real) non-zero constants. A trivial estimate implies that we may assume b − a ≥ 10.
By exploiting partial summation, we may apply Lemma 4 to the main term in (3.2).Thus, to prove (3.1), it suffices to bound:
The case q_2 < q_1 can be treated in the same way by interchanging the roles of q_1 and q_2. Therefore, taking into account Lemma 6 and Lemma 7, we obtain the first stated estimate, and in a similar fashion the second. Thus, for any θ < 14/41, the theorem follows by choosing Γ large enough and ε small enough.