Real zeros of random Dirichlet series

Let $F(\sigma)$ be the random Dirichlet series $F(\sigma)=\sum_{p\in\mathcal{P}} \frac{X_p}{p^\sigma}$, where $\mathcal{P}$ is an increasing sequence of positive real numbers and $(X_p)_{p\in\mathcal{P}}$ is a sequence of i.i.d. random variables with $\mathbb{P}(X_1=1)=\mathbb{P}(X_1=-1)=1/2$. We prove that, for certain conditions on $\mathcal{P}$, if $\sum_{p\in\mathcal{P}}\frac{1}{p}<\infty$ then with positive probability $F(\sigma)$ has no real zeros while if $\sum_{p\in\mathcal{P}}\frac{1}{p}=\infty$, almost surely $F(\sigma)$ has an infinite number of real zeros.


Introduction.
A Dirichlet series is an infinite sum of the form $F(\sigma):=\sum_{p\in\mathcal{P}}\frac{X_p}{p^\sigma}$, where $\mathcal{P}$ is an increasing sequence of positive real numbers and $(X_p)_{p\in\mathcal{P}}$ is any sequence of complex numbers. If $F(\sigma)$ converges, then $F(s)$ converges for all $s\in\mathbb{C}$ with real part greater than $\sigma$ (see [4]). In this paper we are interested in the real zeros of the random Dirichlet series $F(\sigma):=\sum_{p\in\mathcal{P}}\frac{X_p}{p^\sigma}$, where the coefficients $(X_p)_{p\in\mathcal{P}}$ are random and $\mathcal{P}$ satisfies the conditions P1-P2; in particular, the series $\sum_{p\in\mathcal{P}}\frac{1}{p^\sigma}$ has abscissa of convergence $\sigma_c=1$.
For instance, $\mathcal{P}$ can be the set of the natural numbers. The conditions (P1-P2) imply, in particular, that the series $\sum_{p\in\mathcal{P}}\frac{1}{p^{2\sigma}}$ converges for each $\sigma>1/2$. Therefore, if $(X_p)_{p\in\mathcal{P}}$ is a sequence of i.i.d. random variables with $\mathbb{E}X_p=0$ and $\mathbb{E}X_p^2=1$, then, by the Kolmogorov one-series theorem, the series $F(\sigma)=\sum_{p\in\mathcal{P}}\frac{X_p}{p^\sigma}$ a.s. has abscissa of convergence $\sigma_c=1/2$. Moreover, the function of one complex variable $\sigma+it\mapsto F(\sigma+it)$ is a.s. an analytic function in the half plane $\{\sigma+it\in\mathbb{C}:\sigma>1/2\}$.
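The role of the variance series $\sum_{p\in\mathcal{P}}\frac{1}{p^{2\sigma}}$ in the one-series theorem can be illustrated numerically in the simplest case $\mathcal{P}=\mathbb{N}$. The sketch below checks that the partial sums of $\sum_n n^{-2\sigma}$ stabilize for $\sigma>1/2$; the cutoff $N=10^4$ and the value $\sigma=3/4$ are illustrative choices, not taken from the text:

```python
import math

def variance_partial_sum(sigma: float, N: int) -> float:
    """Partial sum of sum_{n <= N} 1/n^(2*sigma): the variance of the
    truncated random series sum_{n <= N} X_n / n^sigma when the X_n are
    i.i.d. with mean 0 and variance 1."""
    return sum(n ** (-2.0 * sigma) for n in range(1, N + 1))

# For sigma = 3/4 the exponent is 2*sigma = 3/2 > 1, so the series
# converges (to zeta(3/2) ~ 2.612); by the integral test the omitted
# tail is at most 2/sqrt(N).
s = variance_partial_sum(0.75, 10_000)
tail_bound = 2 / math.sqrt(10_000)
print(s, tail_bound)
```

For $\sigma\le 1/2$ the same partial sums grow without bound, which is why $1/2$ is the critical abscissa in this setting.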
Our main result states:

Theorem 1.1. Assume that $\mathcal{P}$ satisfies P1-P2 and let $(X_p)_{p\in\mathcal{P}}$ be i.i.d. and such that $\mathbb{P}(X_p=1)=\mathbb{P}(X_p=-1)=1/2$. Let $F(\sigma)=\sum_{p\in\mathcal{P}}\frac{X_p}{p^\sigma}$.

i. If $\sum_{p\in\mathcal{P}}\frac{1}{p}<\infty$, then with positive probability $F$ has no real zeros;

ii. If $\sum_{p\in\mathcal{P}}\frac{1}{p}=\infty$, then a.s. $F$ has an infinite number of real zeros.
It follows as a corollary to the proof of item i. that in the case $\sum_{p\in\mathcal{P}}\frac{1}{p}=\infty$, with positive probability $F(\sigma)$ has no zeros in the interval $[1/2+\delta,\infty)$, for fixed $\delta>0$.
Since a Dirichlet series $F(s)=\sum_{p\in\mathcal{P}}\frac{X_p}{p^s}$ is a random analytic function, it can be viewed as a random Taylor series $\sum_{k=0}^{\infty}Y_k(s-a)^k$, where $a>\sigma_c$ and the coefficients $(Y_k)_{k\in\mathbb{N}}$ are dependent random variables. The case of random Taylor series and random polynomials where the $(Y_k)_{k\in\mathbb{N}}$ are i.i.d. has been widely studied in the literature; for a historical background we refer to [3] and [5] and the references therein.

Notation.
We employ both $f(x)=O(g(x))$ and Vinogradov's notation $f(x)\ll g(x)$ to mean that there exists a constant $c>0$ such that $|f(x)|\le c|g(x)|$ for all sufficiently large $x$, or when $x$ is sufficiently close to a certain real number $y$. For $\sigma\in\mathbb{R}$, $H_\sigma$ denotes the half plane $\{z\in\mathbb{C}:\mathrm{Re}(z)>\sigma\}$. The indicator function of a set $S$ is denoted by $\mathbf{1}_S(s)$; it is equal to $1$ if $s\in S$ and equal to $0$ otherwise. We let $\pi(x)$ denote the counting function of $\mathcal{P}$: $\pi(x):=|\{p\le x:p\in\mathcal{P}\}|$.

The Mellin transform for Dirichlet series.
In what follows $\mathcal{P}=\{p_1<p_2<\dots\}$ is a set of positive real numbers satisfying P1-P2 above. A generic element of $\mathcal{P}$ is denoted by $p$, and we write $\sum_{p\le x}$ for $\sum_{p\in\mathcal{P},\,p\le x}$. Let $A(x)=\sum_{p\le x}X_p$ and $F(s)=\sum_{p\in\mathcal{P}}\frac{X_p}{p^s}$, and let $\sigma_c>0$ be the abscissa of convergence of $F(\sigma)$. Then $F$ can be represented as the Mellin transform of the function $A(x)$ (see, for instance, Theorem 1.3 of [4]): for $\mathrm{Re}(s)>\sigma_c$,
$$F(s)=s\int_{p_1}^{\infty}\frac{A(x)}{x^{s+1}}\,dx.$$
In particular, we can state:

Lemma 2.1. Let $F(s)=\sum_{p\in\mathcal{P}}\frac{X_p}{p^s}$ be such that $F(1/2)$ is convergent. Then for each $\sigma\ge 1/2$, all $\varepsilon>0$ and all $U>1$, the partial sum $\sum_{p\le U}\frac{X_p}{p^\sigma}$ approximates $F(\sigma)$ up to an error term in which the implied constant in the $O(\cdot)$ can be taken to be $1$.
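For a finite truncation, the Mellin-transform representation of $F$ in terms of $A(x)$ reduces to Abel summation, and since $A$ is a step function the integral can be evaluated exactly on each interval between consecutive jumps. A small numerical sketch (assuming $\mathcal{P}=\{1,\dots,N\}$ and an arbitrary fixed sign pattern, both purely illustrative):

```python
def dirichlet_partial(signs, s):
    """Left side: sum_{n <= N} X_n / n^s."""
    return sum(x / (n + 1) ** s for n, x in enumerate(signs))

def mellin_side(signs, s):
    """Right side of Abel summation:
       A(N)/N^s + s * integral_1^N A(x) x^{-s-1} dx,
    with the integral computed in closed form on each [n, n+1] via
       s * integral_n^{n+1} x^{-s-1} dx = n^{-s} - (n+1)^{-s}."""
    N = len(signs)
    A = 0.0            # running value of A(x) = sum of signs up to x
    integral_term = 0.0
    for n, x in enumerate(signs, start=1):
        A += x
        if n < N:
            integral_term += A * (n ** (-s) - (n + 1) ** (-s))
    return A / N ** s + integral_term

signs = [1, -1, -1, 1, 1, 1, -1, 1, -1, -1]  # arbitrary +-1 coefficients
lhs = dirichlet_partial(signs, 0.7)
rhs = mellin_side(signs, 0.7)
print(abs(lhs - rhs))  # the identity telescopes, so this is ~0
```

For the infinite series one lets $N\to\infty$, which is the content of the representation quoted above.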

A few facts about sums of independent random variables.

In what follows we use Lévy's maximal inequality: let $X_1,\dots,X_n$ be independent and symmetric random variables and let $S_k=X_1+\dots+X_k$. Then, for every $t>0$,
$$\mathbb{P}\Big(\max_{1\le k\le n}|S_k|\ge t\Big)\le 2\,\mathbb{P}(|S_n|\ge t).$$
We also use Hoeffding's inequality: if $X_1,\dots,X_n$ are independent with $\mathbb{E}X_i=0$ and $|X_i|\le a_i$, then $\mathbb{P}(|S_n|\ge t)\le 2\exp\big(-\frac{t^2}{2\sum_{i=1}^n a_i^2}\big)$.

Proof of the main result
Proof of item i. Since $\sum_{p\in\mathcal{P}}\frac{1}{p}<\infty$, we have by the Kolmogorov one-series theorem that the series $\sum_{p\in\mathcal{P}}\frac{X_p}{\sqrt{p}}$ converges almost surely. In what follows $U>0$ is a large fixed number to be chosen later, $A_U$ is the event on which $X_p=1$ for all $p\le U$, and $B_U$ is the event on which the tail sums $\sum_{p>U}\frac{X_p}{p^\sigma}$ are uniformly small for $\sigma\ge 1/2$. We claim that, for sufficiently large $U$, on the event $A_U\cap B_U$ the function $F(s)=\sum_{p\in\mathcal{P}}\frac{X_p}{p^s}$ does not vanish for any $s\ge\frac{1}{2}$. Further, for sufficiently large $U$, we will show that $\mathbb{P}(A_U\cap B_U)>0$.

On the event $A_U\cap B_U$ we have by Lemma 2.1 that $F(\sigma)$ is at least the main term $\sum_{p\le U}\frac{1}{p^\sigma}$ minus a small error. To make the main term large we use that, for any $\delta>0$, $\limsup_{U\to\infty}\frac{\pi(U)}{U^{1-\delta}}=\infty$. In fact, this is a consequence of P2: for any $\delta>0$ the series $\sum_{p\in\mathcal{P}}\frac{1}{p^{1-\delta}}$ diverges. To show that this is true we argue by contraposition: assume that for some fixed $\delta>0$ we have $\limsup_{U\to\infty}\frac{\pi(U)}{U^{1-\delta}}<\infty$, and hence that there exists a constant $c>0$ such that $\pi(U)\le cU^{1-\delta}$ for all $U>0$. In that case, by partial summation, for $0<\varepsilon<\delta$ the series $\sum_{p\in\mathcal{P}}\frac{1}{p^{1-\varepsilon}}$ converges. Therefore, we showed that $\limsup_{U\to\infty}\frac{\pi(U)}{U^{1-\delta}}<\infty$ implies that $\sum_{p\in\mathcal{P}}\frac{1}{p^\sigma}$ has abscissa of convergence $\sigma_c\le 1-\delta$, contradicting P2. Now we may select arbitrarily large values of $U>1$ for which $\pi(U)\ge U^{3/4}$ and $\sum_{p\le U}\frac{1}{\sqrt{p}}>\frac{1}{10}U^{1/4}$, and hence, by (4), for all $\varepsilon>0$ the main term dominates the error term uniformly for $\sigma\ge 1/2$. This proves that on the event $A_U\cap B_U$ we have $F(s)\neq 0$ for all $s\in[1/2,\infty)$.
Observe that $A_U$ and $B_U$ are independent and that $A_U$ has probability $2^{-\pi(U)}>0$. Now we show that the complementary event $B_U^c$ has small probability. Indeed, by applying Lévy's maximal inequality and Hoeffding's inequality, we obtain a bound for $\mathbb{P}(B_U^c)$ in terms of the tail $\sum_{p>U}\frac{1}{p}$. Since $\sum_{p\in\mathcal{P}}\frac{1}{p}$ is convergent, the tail $\sum_{p>U}\frac{1}{p}$ converges to $0$ as $U\to\infty$. Therefore, for sufficiently large $U$ we can make $\mathbb{P}(B_U^c)<1/2$, and hence, by independence, $\mathbb{P}(A_U\cap B_U)\ge\frac{1}{2}\,2^{-\pi(U)}>0$. Now we are going to prove Theorem 1.1, part ii. We present two different proofs.
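The tail estimate above rests on Hoeffding's inequality applied to the weighted signs $X_p/\sqrt{p}$: $\mathbb{P}(|\sum_p a_pX_p|\ge t)\le 2\exp(-t^2/(2\sum_p a_p^2))$ with $a_p=1/\sqrt{p}$. The exact tail probabilities of such a weighted sum can be enumerated for a few terms and compared against the bound; the eight values of $p$ below are illustrative and are not the set $\mathcal{P}$ of the paper:

```python
from itertools import product
import math

weights = [1 / math.sqrt(p) for p in [101, 103, 107, 109, 113, 127, 131, 137]]

def tail_prob(t: float) -> float:
    """Exact P(|sum_p X_p / sqrt(p)| >= t) over all 2^8 sign choices."""
    total = 2 ** len(weights)
    hits = sum(
        1 for signs in product((-1, 1), repeat=len(weights))
        if abs(sum(a * x for a, x in zip(weights, signs))) >= t
    )
    return hits / total

var = sum(a * a for a in weights)  # sum of 1/p over the chosen p
for t in (0.2, 0.4, 0.6):
    exact = tail_prob(t)
    hoeffding = 2 * math.exp(-t * t / (2 * var))
    assert exact <= hoeffding  # Hoeffding's bound, checked exactly
    print(t, exact, hoeffding)
```

Because $\mathrm{var}=\sum_{p>U}1/p$ shrinks as the cutoff grows, the same bound forces the tail sums to concentrate near $0$, which is how $\mathbb{P}(B_U^c)$ is made small.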
In the first proof we assume that the counting function of $\mathcal{P}$ satisfies $\pi(x)\ll\frac{x}{\log x}$; in this case, for instance, $\mathcal{P}$ can be the set of prime numbers. In this proof we show that, for $\sigma$ close to $1/2$, the infinite sum $\sum_{p\in\mathcal{P}}\frac{X_p}{p^\sigma}$ can be approximated by the partial sum $\sum_{p\le y}\frac{X_p}{\sqrt{p}}$ for a suitable choice of $y$ (Lemma 3.1). Then we show that these partial sums change sign for an infinite number of values of $y$, and hence $F(\sigma)=\sum_{p\in\mathcal{P}}\frac{X_p}{p^\sigma}$ changes sign for an infinite number of values of $\sigma$ approaching $1/2$; by continuity, $F$ has an infinite number of real zeros.

Proof of Theorem 1.1 (ii) in the case $\pi(x)\ll\frac{x}{\log x}$.

Lemma 3.1. Assume that $\mathcal{P}$ satisfies P1-P2 and that $\sum_{p\in\mathcal{P}}\frac{1}{p}=\infty$. Further, assume that $\pi(x)\ll\frac{x}{\log x}$. Let $\sigma>1/2$ and $y=\exp((2\sigma-1)^{-1})\ge 10$. Then there is a constant $d>0$ such that for all $\lambda>0$
$$\mathbb{P}\Big(\Big|F(\sigma)-\sum_{p\le y}\frac{X_p}{\sqrt{p}}\Big|\ge 2\lambda\Big)\le 4e^{-d\lambda^2}.$$

Proof. Write $F(\sigma)-\sum_{p\le y}\frac{X_p}{\sqrt{p}}=a+b$, where $a=\sum_{p\le y}X_p\big(\frac{1}{p^\sigma}-\frac{1}{\sqrt{p}}\big)$ and $b=\sum_{p>y}\frac{X_p}{p^\sigma}$. If $|a+b|\ge 2\lambda$ then either $|a|\ge\lambda$ or $|b|\ge\lambda$. This fact combined with Hoeffding's inequality allows us to bound:
$$\mathbb{P}(|a+b|\ge 2\lambda)\le 2\exp\Big(-\frac{\lambda^2}{2V_y}\Big)+2\exp\Big(-\frac{\lambda^2}{2U_y}\Big),$$
where $V_y=\sum_{p\le y}\big(\frac{1}{p^\sigma}-\frac{1}{\sqrt{p}}\big)^2$ and $U_y=\sum_{p>y}\frac{1}{p^{2\sigma}}$. To complete the proof we only need to estimate these quantities. By the mean value theorem, $\frac{1}{\sqrt{p}}-\frac{1}{p^\sigma}\le\big(\sigma-\frac{1}{2}\big)\frac{\log p}{\sqrt{p}}$. In particular, the choice $y=\exp((2\sigma-1)^{-1})$ implies that both variances $V_y$ and $U_y$ are $O(1)$.
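The key point of the lemma is that the variances stay bounded under the coupling $y=\exp((2\sigma-1)^{-1})$. The sketch below computes $V_y=\sum_{p\le y}(p^{-\sigma}-p^{-1/2})^2$ exactly for $\mathcal{P}$ the primes at two illustrative values of $\sigma$ (the infinite tail $U_y$ is not computed here):

```python
import math

def primes_up_to(n: int) -> list:
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(2, n + 1) if sieve[i]]

def V(sigma: float) -> float:
    """V_y = sum_{p <= y} (p^{-sigma} - p^{-1/2})^2 with
    y = exp(1/(2*sigma - 1)), for P the primes."""
    y = math.exp(1.0 / (2.0 * sigma - 1.0))
    return sum((p ** -sigma - p ** -0.5) ** 2 for p in primes_up_to(int(y)))

for sigma in (0.6, 0.55):
    v = V(sigma)
    assert v < 1.0  # V_y remains O(1) even as y grows rapidly
    print(sigma, v)
```

Note that $y$ blows up extremely fast as $\sigma\downarrow 1/2$ (already $y=e^{20}$ at $\sigma=0.525$), so only moderate $\sigma$ are numerically accessible; the uniform bound for all $\sigma>1/2$ is exactly what the mean value theorem argument in the proof supplies.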
The simple random walk $S_n=\sum_{k=1}^{n}X_k$, where $(X_n)_{n\in\mathbb{N}}$ is i.i.d. with $X_1=\pm 1$ with probability $1/2$ each, satisfies a.s. $\limsup_{n\to\infty}S_n=\infty$ and $\liminf_{n\to\infty}S_n=-\infty$. We follow the same line of reasoning as in the proof of this result ([6], pg. 381, Theorem 2) to prove that, a.s.,
$$\limsup_{y\to\infty}\sum_{p\le y}\frac{X_p}{\sqrt{p}}=\infty\quad\text{and}\quad\liminf_{y\to\infty}\sum_{p\le y}\frac{X_p}{\sqrt{p}}=-\infty.$$

Proof. We begin by observing that $(X_p/\sqrt{p})_{p\in\mathcal{P}}$ is a sequence of independent and symmetric random variables that are uniformly bounded by $1$. Moreover, the sum of their variances, $\sum_{p\in\mathcal{P}}\frac{1}{p}$, diverges, and hence this sequence satisfies the Lindeberg condition. By the central limit theorem it follows that for each fixed $L>0$ there exists a $\delta>0$ such that for sufficiently large $y>0$
$$\mathbb{P}\bigg(\sum_{p\le y}\frac{X_p}{\sqrt{p}}\ge L\Big(\sum_{p\le y}\frac{1}{p}\Big)^{1/2}\bigg)\ge\delta.$$
Next observe that, for an increasing sequence $y_k\to\infty$, the event in which
$$\limsup_{k\to\infty}\Big(\sum_{p\le y_k}\frac{1}{p}\Big)^{-1/2}\sum_{p\le y_k}\frac{X_p}{\sqrt{p}}\ge L$$
is a tail event, and hence by the Kolmogorov zero-one law it has either probability zero or one. Since its probability is at least $\delta>0$, it follows that for each fixed $L>0$ the limsup above is a.s. at least $L$. Thus, since $\sum_{p\le y}\frac{1}{p}\to\infty$ as $y\to\infty$, by letting $L\to\infty$ we obtain the desired claim.
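The sign-change behaviour of the partial sums $\sum_{p\le y}X_p/\sqrt{p}$ can be illustrated exhaustively over the first few primes: already among all $2^{16}$ sign patterns, a positive fraction of the weighted walks visits both signs. A sketch, where the truncation at 16 primes is purely for tractability:

```python
from itertools import product
import math

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53]
WEIGHTS = [1 / math.sqrt(p) for p in PRIMES]

def fraction_with_sign_change() -> float:
    """Fraction of all 2^16 sign patterns whose prefix sums
    sum_{k <= m} X_{p_k} / sqrt(p_k) take both positive and
    negative values at some point."""
    changes = 0
    for signs in product((-1, 1), repeat=len(WEIGHTS)):
        s = 0.0
        seen_pos = seen_neg = False
        for x, w in zip(signs, WEIGHTS):
            s += x * w
            seen_pos |= s > 0
            seen_neg |= s < 0
        if seen_pos and seen_neg:
            changes += 1
    return changes / 2 ** len(WEIGHTS)

frac = fraction_with_sign_change()
print(frac)
```

The lemma above upgrades this finite observation to the full statement: since the normalized partial sums exceed every level $L$ and drop below every level $-L$ almost surely, the walk changes sign infinitely often.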