From: hlm@math.lsa.umich.edu (Hugh Montgomery)
Subject: Re: large excursions of pi(x) away from Li(x)
Date: 30 Sep 99 02:18:48 GMT
Newsgroups: sci.math.numberthy
Dear Jean-Pierre:
Concerning pi(x) - li(x) one has to distinguish between what is known,
and what might be conjectured. I believe that Gauss compared pi(x) and li(x)
for some quite large x, and noted that pi(x) < li(x) for all the x he
considered. I don't know if he actually conjectured that this persists, but
in any case, in 1914 a young J. E. Littlewood showed that
pi(x) - li(x) = Omega_{\pm}(x^{1/2}(logloglog x)/log x).
In fact E. Schmidt had shown in 1903 that
pi(x) - li(x) = Omega_{-}(x^{1/2}/log x);
the interesting feature of Littlewood's improvement is not so much the
logloglog, but rather the fact that he shows that the quantity is sometimes
positive.
If the Riemann Hypothesis is false, then better lower bounds can be
given, in both signs. For example, if pi(x) - li(x) < Cx^Theta for all
sufficiently large x, or if pi(x) - li(x) > -Cx^Theta for all sufficiently
large x, then zeta(s) \ne 0 for Re s > Theta.
Suppose the Riemann Hypothesis is true. As you note,
von Koch deduced the estimate pi(x) = li(x) +O(x^{1/2}log x). The quantity
(pi(x) - li(x))log x
--------------------
x^{1/2}
does not have a limiting distribution in x, because its oscillations become
progressively slower as x grows. But if one makes the change of variables
x = e^y, then it is a mean-square almost-periodic function of y whose
Fourier expansion is
-1 - sum_{rho} e^{i*gamma*y}/rho.
Here rho = 1/2 + i*gamma runs over all non-trivial zeros of the zeta function.
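In case a numerical illustration is useful, here is a small Python sketch
(mine, not from any of the cited work). It hardcodes the first ten zero
ordinates (standard published values, truncated), computes li by Simpson's
rule and pi by a sieve, and compares the normalized error to the ten-zero
truncation of the sum above. Ten zeros is far too few for close agreement,
since the series converges slowly; only the negative bias and the rough
size should be expected to match.

```python
import math

# First ten ordinates gamma of the nontrivial zeta zeros (standard
# published values, truncated to 6 decimals; a serious computation
# would use thousands of zeros and a smoothing kernel).
GAMMAS = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062,
          37.586178, 40.918719, 43.327073, 48.005151, 49.773832]

def li(x, n=10000):
    # li(x) = li(2) + int_2^x dt/log t.  Substituting t = e^u turns
    # the integrand into e^u/u, which Simpson's rule handles well.
    a, b = math.log(2.0), math.log(x)
    h = (b - a) / n
    f = lambda u: math.exp(u) / u
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h)
                          for k in range(1, n))
    return 1.045163780117493 + s * h / 3  # li(2) = 1.04516...

def pi(x):
    # Prime-counting function via a plain sieve of Eratosthenes.
    n = int(x)
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = bytearray(len(sieve[p*p::p]))
    return sum(sieve)

def f_truncated(y):
    # -1 - sum over rho = 1/2 + i*gamma of e^{i*gamma*y}/rho, pairing
    # each zero with its conjugate so the sum is real.
    s = -1.0
    for g in GAMMAS:
        s -= 2 * (complex(math.cos(g*y), math.sin(g*y))
                  / complex(0.5, g)).real
    return s

for x in (10**4, 10**5, 10**6):
    y = math.log(x)
    lhs = (pi(x) - li(x)) * math.log(x) / math.sqrt(x)
    print(f"x = {x:>7}:  (pi-li)*log x/sqrt(x) = {lhs:+.3f},  "
          f"10-zero sum = {f_truncated(y):+.3f}")
```

Both columns come out negative at these x, which is the bias discussed next.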
Because of the strong negative bias (which arises because one eliminates
the squares of the primes when passing from psi(x) to pi(x)), Littlewood's
challenge was to get the contributions of many of the smaller zeros to pull
in the same direction. This function has a limiting distribution function,
which can be calculated with some precision, and graphed. The density function
is everywhere positive on the real line, and thus the function is positive
for a set of y of positive asymptotic density. With respect to x, one has
a logarithmic density: integrate dx/x over the part of the set lying in
[1, X], divide by log X, and take the limit. If one assumes
not just RH but also that the ordinates gamma > 0 are linearly independent
over the field of rational numbers, then the functions e^{i*gamma*y} behave
like independent random variables, and the asymptotic distribution of the
function is the same as the distribution function of a sum of independent
random variables. For such sums one can estimate the likelihood of a
large deviation (see Montgomery & Odlyzko, Acta Arith. 49 (1988), 427--434).
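To make this concrete, here is a rough Monte Carlo sketch (entirely my
illustration: the 1/k amplitudes, the number of terms, and the sample size
are toy choices, not the actual 2/|rho_k| coefficients) of how the tail
probability of such a sum of independent bounded terms decays rapidly:

```python
import math
import random

random.seed(1)  # fixed seed so runs are reproducible

# Toy model of a sum of independent random variables: under the
# linear-independence assumption each zero contributes an independent
# random phase, so the sum behaves like  sum_k a_k * cos(theta_k)
# with theta_k uniform on [0, 2*pi).  The amplitudes a_k = 1/k below
# are an illustrative choice, NOT the actual 2/|rho_k| coefficients.
N = 100          # number of terms
SAMPLES = 20000  # Monte Carlo sample size

def sample():
    return sum(math.cos(random.uniform(0.0, 2 * math.pi)) / k
               for k in range(1, N + 1))

values = [sample() for _ in range(SAMPLES)]

for V in (0.5, 1.0, 1.5, 2.0):
    p = sum(v > V for v in values) / SAMPLES
    print(f"P(S > {V:.1f}) ~= {p:.4f}")
```

The sharp fall-off of these empirical tail probabilities is the qualitative
phenomenon behind the bounds on P(f > V) below.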
By applying such techniques to the present situation, I deduced
(Proc. Queen's Number Theory Conference, 1979) that
exp(-c_2*V^{1/2}*exp(sqrt(2*pi*V))) < P(f > V) < exp(-c_1*V^{1/2}*exp(sqrt(2*pi*V)))
when V is large. Here c_1 and c_2 are positive constants. The rate of decay
of this distribution function led me, in 1980, to formulate the following
conjecture concerning the maximum size of the error term in the prime number
theorem:
limsup (pi(x)-li(x))*x^{-1/2}*(log x)*(logloglog x)^{-2} = 1/(2*pi),
liminf (pi(x) - li(x))*x^{-1/2}*(log x)*(logloglog x)^{-2} = -1/(2*pi).
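For a sense of scale (my illustrative numbers, not part of the letter):
(logloglog x)^2 grows so slowly that the asymptotic constant 1/(2*pi) is
invisible at any computable x, as a quick evaluation shows:

```python
import math

# How slowly does (logloglog x)^2 grow?  Evaluate at x = 10^k, using
# logloglog(10^k) = log(log(k * log(10))).
for k in (6, 10, 100, 10**4, 10**6):
    lll = math.log(math.log(k * math.log(10)))
    print(f"x = 10^{k:<7}  logloglog x = {lll:.4f}  "
          f"(logloglog x)^2 = {lll * lll:.4f}")
```

Even at x = 10^(10^6) the factor (logloglog x)^2 is still in single digits,
so the conjectured extremes say nothing about x in any computable range.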
For more on this topic see the (unpublished) University of Michigan PhD thesis
of Reynolds Monach (1980), and also Rubinstein & Sarnak, Experiment. Math.
3 (1994), 173--197.
I hope this helps.
Best wishes,
Hugh Montgomery