Tuesday, November 25, 2014

Error term in the prime number theorem

We prove here an improved prime number theorem of the form
\[\begin{eqnarray}\pi(x)=Li(x)+O(x\exp(-c\log^{\frac{4}{7}}x(\log \log x)^{-\frac{2}{7}})),\quad Li(x):=\int_{2}^{x}\frac{dt}{\log t},\end{eqnarray}\]
for some $c>0$ (in particular, the error is $O(x\exp(-\log^{\frac{4}{7}-\varepsilon}x))$ for any $\varepsilon>0$), a result which is essentially due to Chudakov. The proof is based on bounding the growth of the Riemann zeta function in the critical strip near $\Re(s)=1$, and this in turn builds on bounds for the zeta sums
\[\begin{eqnarray}\sum_{M\leq n\leq N}n^{-it}.\end{eqnarray}\]
To bound these sums, we use a bound for exponential sums with slowly growing phase from the previous post (Theorem 5 (i)), whose proof was based on Vinogradov's mean value theorem. When $M$ is large enough, however, that theorem is no longer helpful, and we must follow a different approach based on Weyl differencing and van der Corput's inequality, which is quite useful in itself. That approach is postponed to the next post; here we concentrate on short zeta sums and on deducing the improved prime number theorem. We follow Chandrasekharan's book Arithmetical Functions.
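Before proceeding, the cancellation in zeta sums can be illustrated numerically. This is outside the argument, and the parameters below are ad hoc, chosen so that the phase derivative $\frac{t}{2\pi n}$ stays well away from integers (the regime of the derivative tests discussed in these posts):

```python
import cmath
import math

# Illustrative, ad hoc parameters: for M <= n <= 2M the phase derivative
# t/(2*pi*n) lies in [0.23, 0.48], well away from integers, so strong
# cancellation is expected.
M, t = 100_000, 300_000.0

# zeta sum: sum over M <= n <= 2M of n^{-it} = exp(-i*t*log n)
S = sum(cmath.exp(-1j * t * math.log(n)) for n in range(M, 2 * M + 1))
trivial = M + 1  # trivial bound: the number of terms

print(abs(S), trivial)
```

Here $|S|$ comes out far smaller than the trivial bound $M+1$.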

We remark that without relying on exponential sums, nothing significantly better than $\pi(x)=Li(x)+O(x\exp(-c\log^{\frac{1}{2}}x))$ has been proved; this was established by de la Vallée Poussin already in 1899. On the other hand, the best known form of the prime number theorem is due to Vinogradov and Korobov, and it states that
\[\begin{eqnarray}\pi(x)=Li(x)+O(x\exp(-c_0\log^{\frac{3}{5}}x(\log \log x)^{-\frac{1}{5}}))\end{eqnarray}\] for some $c_0>0$; this was proved already in 1958.
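For a concrete feel of the three error terms, one can compare the savings in the exponents numerically at a hypothetically large height; the snippet below uses the illustrative value $\log x=e^{100}$ and ignores the $\log\log x$ factors in the middle exponent, and confirms the expected ordering:

```python
import math

X = math.e ** 100      # X plays the role of log x, for an astronomically large x
lnX = math.log(X)      # so lnX = log log x = 100

# logarithms of the savings factors exp(...) in the three error terms
vallee_poussin = 0.5 * lnX                             # log of X^(1/2)
chudakov = (4 / 7) * lnX                               # log of X^(4/7)
vinogradov_korobov = 0.6 * lnX - 0.2 * math.log(lnX)   # log of X^(3/5) (log X)^(-1/5)

print(vallee_poussin, chudakov, vinogradov_korobov)
```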

Bounds for short zeta sums

We divide our consideration of zeta sums into two parts; the latter part is presented in the next post. Here we consider sums of length at most $t^{\frac{1}{5}}$, since Theorem 5 (i) of the previous post produces nontrivial results in that range (part (ii) of that theorem works in a longer range, but not when the sums have length close to $t^{\frac{1}{2}}$ or longer). When the zeta sums are very short, namely shorter than $\exp(A\log^{\frac{3}{4}}t(\log \log t)^\frac{1}{2})$, we do not obtain any nontrivial bounds; improving the range of nontrivial bounds would further improve the error term in the prime number theorem.

By applying Vinogradov's exponential sum inequality from the previous post in the case $F(x)=\frac{-t\log x}{2\pi}$, we obtain the following.

Theorem 1. When $A(\log^{\frac{3}{4}}t)(\log \log t)^{\frac{1}{2}}\leq \log M\leq \frac{1}{5}\log t$ and $N\in [M,2M]$, where $A$ is a sufficiently large constant, we have
\[\begin{eqnarray}\sum_{M\leq n\leq N}n^{-it}\ll M \exp\left(-\frac{B\log M}{\log^{\frac{1}{2}}t}\right),\end{eqnarray}\]
where $B=B(A)\to \infty$ as $A\to \infty$.

Proof. For $F(x)=\frac{-t\log x}{2\pi}$ we have
\[\begin{eqnarray}\frac{F^{(k+1)}(x)}{(k+1)!}=\frac{(-1)^{k+1}t}{2\pi (k+1)x^{k+1}}.\end{eqnarray}\]
On the interval $[M,N]$, with $N\leq 2M$, the function $F^{(k+1)}(x)$ varies by at most a factor of $2^{k+1}$, so the interval can be partitioned into $k+1$ subintervals, on each of which $F^{(k+1)}(x)$ varies by at most a factor of two. We choose
\[\begin{eqnarray}k=\left\lfloor\frac{\log t}{\log M}\right\rfloor+1,\, \log M\leq \frac{1}{5}\log t,\end{eqnarray}\]
so that $k\geq 6$. With this choice,
\[\begin{eqnarray}\frac{t}{2\pi(k+1)M^{k+1}}\leq \frac{t}{2\pi\left(\lfloor\frac{\log t}{\log M}\rfloor+1\right)Mt}\leq \frac{1}{2\pi M}\end{eqnarray}\]
for $M\leq t$, and moreover
\[\begin{eqnarray}\frac{t}{2\pi(k+1)(2M)^{k+1}}\geq \frac{t}{\pi(k+1)M^2t\cdot 2^{k+2}}\geq \frac{1}{M^4},\end{eqnarray}\]
using $M^{k+1}=M^2M^{k-1}\leq M^2t$; the last inequality holds provided
\[\begin{eqnarray}M^2\geq \pi\cdot\left(\frac{\log t}{\log M}+2\right)2^{\lfloor\frac{\log t}{\log M}\rfloor+3},\end{eqnarray}\]
which is true if we assume
\[\begin{eqnarray}\log M\geq 2\log^{\frac{1}{2}}t\end{eqnarray}\]
for large enough $t$. Hence
\[\begin{eqnarray}\frac{1}{M^4}\leq \lambda\leq \frac{1}{2M}\end{eqnarray}\]
for any
\[\begin{eqnarray}\lambda\in \left[\frac{t}{2\pi(k+1)N^{k+1}},\frac{t}{2\pi(k+1)M^{k+1}}\right].\end{eqnarray}\]
This means that the conditions of part $(i)$ of Theorem 5 from the previous post are satisfied, so if $[a,b]$ is any of the $k+1$ subintervals of $[M,N]$, we get
\[\begin{eqnarray}\sum_{a\leq n\leq b}n^{-it}\ll e^{c_1k^2\log k}M^{1-\frac{1}{100k^2\log k}},\quad 2\log^{\frac{1}{2}}t\leq \log M\leq \frac{1}{5}\log t,\end{eqnarray}\]
and the same estimate holds for the sum over $[M,N]$ if $c_1$ is replaced by $c_2=c_1+1,$ say. Using the definition of $k$, we find
\[\begin{eqnarray}M^{-\frac{1}{100k^2\log k}}\leq M^{-\frac{c_3}{\frac{\log^2 t}{\log^2 M}\log \log t}}=\exp\left(-\frac{c_3\log^3 M}{\log^2 t\log \log t}\right)\end{eqnarray}\]
for some $c_3>0$, so that
\[\begin{eqnarray}\sum_{M\leq n\leq N}n^{-it}\ll M\exp\left(c_2\frac{\log t \log \log t}{\log M}-\frac{c_3\log^3 M}{\log^2 t \log \log t}\right).\end{eqnarray}\]
This is nontrivial when
\[\begin{eqnarray}\log^4 M>A\log^3t (\log \log t)^2\end{eqnarray}\]
for a large enough constant $A>0$. In that situation, for some $B$, tending to infinity as $A$ does, we have
\[\begin{eqnarray}\sum_{M\leq n\leq N}n^{-it}\ll M\exp\left(-B\frac{\log M}{\log^{\frac{1}{2}}t}\right),\quad A\log^{\frac{3}{4}}t(\log \log t)^{\frac{1}{2}}\leq \log M\leq \frac{1}{5}\log t,\end{eqnarray}\]
which is the claimed inequality. ■
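The two displayed inequalities for $\frac{t}{2\pi(k+1)x^{k+1}}$ in the proof can be sanity-checked numerically at a concrete parameter point (here $\log t=100$ and $\log M=20$, so that $2\log^{\frac{1}{2}}t\leq\log M\leq\frac{1}{5}\log t$; all quantities are compared in logarithms to avoid overflow):

```python
import math

log_t, log_M = 100.0, 20.0            # i.e. t = e^100 and M = e^20 = t^(1/5)
k = math.floor(log_t / log_M) + 1     # the choice made in the proof; here k = 6

# upper bound: t/(2 pi (k+1) M^(k+1)) <= 1/(2 pi M), compared in logarithms
lhs_upper = log_t - math.log(2 * math.pi * (k + 1)) - (k + 1) * log_M
rhs_upper = -math.log(2 * math.pi) - log_M

# lower bound: t/(2 pi (k+1) (2M)^(k+1)) >= 1/M^4, compared in logarithms
lhs_lower = log_t - math.log(2 * math.pi * (k + 1)) - (k + 1) * (math.log(2) + log_M)
rhs_lower = -4 * log_M

print(k, lhs_upper <= rhs_upper, lhs_lower >= rhs_lower)
```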

It is perhaps not immediately clear how much is saved in this estimate compared to the trivial bound $M$. For $c_4\log t\leq \log M\leq \frac{1}{5}\log t$ we have
\[\begin{eqnarray}\sum_{M\leq n\leq N}n^{-it}\ll M\exp(-c_5\log^{\frac{1}{2}} M),\end{eqnarray}\]
which is already a significant saving (though methods based on Weyl sums would eventually give sharper results in this range). At the other end of the range, for $\log M=A\log^{\frac{3}{4}}t(\log \log t)^{\frac{1}{2}}$ the bound is essentially
\[\begin{eqnarray}\sum_{M\leq n\leq N}n^{-it}\ll M\exp(-B\log^{\frac{1}{3}}M).\end{eqnarray}\]
However, it was not only the amount of saving that was crucial in the theorem; equally crucial was to push the range of nontrivial cancellation as far as possible -- in this case down to $\log M\geq A(\log^{\frac{3}{4}}t)(\log \log t)^{\frac{1}{2}}$. We will see that this lower bound is reflected in the zero-free region $\sigma\geq 1-\frac{c}{\log^{\frac{3}{4}}t(\log \log t)^{\frac{1}{2}}}$ that we will obtain for $\zeta(s)$.

The estimate above immediately yields a bound for the Dirichlet polynomials $\sum_{M\leq n\leq N}n^{-s}$ via the following simple lemma.

Lemma 2. For $0<\sigma<1$ and $M\leq N$ we have
\[\begin{eqnarray}\left|\sum_{M\leq n\leq N}n^{-\sigma-it}\right|\leq M^{-\sigma}\max_{M\leq x\leq N}|D(M,x)|,\end{eqnarray}\]
where
\[\begin{eqnarray}D(a,b):=\sum_{a\leq n\leq b}n^{-it}.\end{eqnarray}\]
Proof. Partial summation gives
\[\begin{eqnarray}\left|\sum_{M\leq n\leq N}n^{-\sigma-it}\right|&=&\left|N^{-\sigma}D(M,N)+\sum_{M\leq n\leq N-1}(n^{-\sigma}-(n+1)^{-\sigma})D(M,n)\right|\\&\leq& \left(N^{-\sigma}+\sum_{M\leq n\leq N-1}(n^{-\sigma}-(n+1)^{-\sigma})\right)\max_{M\leq x\leq N}|D(M,x)|\\&=& M^{-\sigma} \max_{M\leq x\leq N}|D(M,x)|,\end{eqnarray}\]
proving the claim. ■
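The telescoping in partial summation in fact yields $\left|\sum_{M\leq n\leq N}n^{-\sigma-it}\right|\leq M^{-\sigma}\max_{M\leq x\leq N}|D(M,x)|$ with implied constant $1$, and this can be checked numerically (arbitrary illustrative parameters below):

```python
import cmath
import math

M, N, sigma, t = 50, 500, 0.8, 1234.5   # arbitrary illustrative parameters

# running values of D(M, x) = sum over M <= n <= x of n^{-it}
D, run = {}, 0
for n in range(M, N + 1):
    run += cmath.exp(-1j * t * math.log(n))
    D[n] = run

lhs = abs(sum(n ** -sigma * cmath.exp(-1j * t * math.log(n)) for n in range(M, N + 1)))
rhs = M ** -sigma * max(abs(D[x]) for x in range(M, N + 1))

print(lhs, rhs)
```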

Long zeta sums and connection to the $\zeta$ function

We will soon see that there is a close connection between the Riemann zeta function in the critical strip and the zeta sum $D(1,t^2)$. If we had a connection between the zeta function and $D(1,t^{\frac{1}{5}})$, we could just use the estimates already established for zeta sums. However, to prove such a connection, we would have to bound longer zeta sums anyway. Using part $(ii)$ of Theorem 5, we may estimate some zeta sums longer than $t^{\frac{1}{5}}$, but an obstacle is that Theorem 5 holds only for $k\geq 3$, that is, only for derivatives of fourth and higher order. Hence we also need analogues of the theorem for the first, second and third derivatives (one could also use Weyl sums to get good bounds for long zeta sums, but that approach would be rather lengthy and technical). The first, second and third derivative tests for exponential sums will be developed in the next post, using Weyl differencing and van der Corput's inequality. Here we only state a result on zeta sums that is relevant to bounding the growth of the Riemann zeta function and the error in the prime number theorem.

Theorem 3. When $N\leq 2M$, we have
\[\begin{eqnarray}\sum_{M\leq n\leq N}n^{-it}\ll M^{1-\frac{1}{5000}}\end{eqnarray}\]
for $t^{\frac{1}{5}}\leq M\leq 2t^2$, with an absolute implied constant.

Proof. This will be proved in the next post using the first, second and third derivative tests. ■

The exponent $1-\frac{1}{5000}$ is certainly not the best the method gives us, but the point is that any exponent smaller than $1$ is equally good for bounding the Riemann zeta function, as for shorter zeta sums our estimates give weaker results than in the theorem above. Note that the theorem above fails without some upper bound for $M$, such as $M\leq 2t^2$, because if $t\log M$ is a multiple of $2\pi$, then
 \[\begin{eqnarray}\left|\sum_{M\leq n\leq (1+\frac{1}{100t})M}n^{-it}\right|&\geq&\Re\left(\sum_{M\leq n\leq (1+\frac{1}{100t})M}e\left(-\frac{t\log n}{2\pi}\right)\right)\\&\geq& \sum_{M\leq n\leq (1+\frac{1}{100t})M} \frac{1}{2}\gg \frac{M}{t}.\end{eqnarray}\]
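This failure mode is easy to reproduce numerically: choosing $t$ so that $t\log M$ is an exact multiple of $2\pi$, the terms at the start of the interval all point in essentially the same direction (the parameters below are illustrative):

```python
import cmath
import math

M = 1_000_000
# choose t ~ 1000 with t*log(M) an exact multiple of 2*pi
k = round(1000 * math.log(M) / (2 * math.pi))
t = 2 * math.pi * k / math.log(M)

# sum over M <= n <= (1 + 1/(100 t)) M; every phase t*log(n) is within ~0.01 of 2*pi*k
top = math.floor(M * (1 + 1 / (100 * t)))
S = sum(cmath.exp(-1j * t * math.log(n)) for n in range(M, top + 1))
count = top - M + 1

print(S.real, count)
```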
It turns out, though, that no zeta sums longer than $2t^2$ are needed in the application to the Riemann zeta function. With the necessary bounds for the zeta sums available, we want to connect them to the zeta function in the critical strip, where the connection is perhaps not obvious from the definition. This is achieved in the following lemma.

Lemma 4. We have
\[\begin{eqnarray}\zeta(s)=\sum_{n\leq t^{\frac{1}{5}}}n^{-s}+O(1),\end{eqnarray}\]
uniformly in $\sigma \in [1-10^{-4},2]$.

Proof. We form a representation of the zeta function which is valid also in the critical strip and is related to the zeta sum $D(1,t^2)$. Using partial summation we see that
\[\begin{eqnarray}\zeta(s)-\sum_{n=1}^{N}n^{-s}&=&-N^{1-s}+s\int_{N}^{\infty}\frac{\lfloor x\rfloor}{x^{s+1}}dx\\&=&\frac{N^{1-s}}{s-1}-s\int_{N}^{\infty}\frac{\{x\}}{x^{s+1}}dx,\end{eqnarray}\]
and this formula is actually valid whenever $\sigma>0$, as the last integral converges then. In particular,
\[\begin{eqnarray}\zeta(s)-\sum_{n\leq t^2}n^{-s}\ll \frac{t^{2(1-\sigma)}}{t}+t\int_{t^2}^{\infty}\frac{1}{x^{\sigma+1}}dx\ll t^{1-2\sigma}\ll 1\end{eqnarray}\]
for $\frac{1}{2}\leq \sigma\leq 2.$ Now we shorten the sum approximating $\zeta(s)$, obtaining
\[\begin{eqnarray}\zeta(s)-\sum_{n\leq t^{\frac{1}{5}}}n^{-s}&\ll& 1+\left|\sum_{t^{\frac{1}{5}}\leq n\leq t^2}n^{-\sigma-it}\right|\\&\ll& 1+\sum_{2^k\leq 2t^{\frac{9}{5}}} \left|\sum_{2^k t^{\frac{1}{5}}\leq n\leq 2^{k+1}t^{\frac{1}{5}}}n^{-\sigma-it}\right|\\&\ll& 1+\sum_{2^k\leq 2t^{\frac{9}{5}}}(2^kt^{\frac{1}{5}})^{-\sigma} \max_{2^k t^{\frac{1}{5}}\leq x\leq 2^{k+1}t^{\frac{1}{5}}}|D(2^kt^{\frac{1}{5}},x)|\\&\ll& 1+\sum_{2^k\leq 2t^{\frac{9}{5}}}(2^kt^{\frac{1}{5}})^{1-\frac{1}{5000}-\sigma}\ll 1\end{eqnarray}\]
by Lemma 2 and Theorem 3, uniformly for $\sigma\in [1-10^{-4},2].$ ■
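The representation $\zeta(s)=\sum_{n\leq N}n^{-s}+\frac{N^{1-s}}{s-1}-s\int_{N}^{\infty}\frac{\{x\}}{x^{s+1}}dx$ used above can be checked numerically at a point where $\zeta$ is known exactly, say $s=2$ with $\zeta(2)=\frac{\pi^2}{6}$; the fractional-part integral is evaluated in closed form on each interval $[n,n+1]$ and truncated far out:

```python
import math

s, N = 2.0, 50   # truncation point N; at s = 2 we can compare with zeta(2) = pi^2/6

head = sum(n ** -s for n in range(1, N + 1)) + N ** (1 - s) / (s - 1)

def frac_integral(n, s):
    # exact value of the integral of (x - n) * x^(-s-1) over [n, n+1]
    return ((n + 1) ** (1 - s) - n ** (1 - s)) / (1 - s) \
        + (n / s) * ((n + 1) ** -s - n ** -s)

# s * integral over [N, infinity) of {x} x^(-s-1) dx, truncated far out
tail = s * sum(frac_integral(n, s) for n in range(N, 50_000))

zeta2 = head - tail
print(zeta2, math.pi ** 2 / 6)
```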

We can now prove a growth estimate for $\zeta(s)$ itself near the line $\sigma=1$.

Theorem 5. For $1-\frac{c_6}{\log^{\frac{1}{2}}t}\leq \sigma\leq 1$, where $c_6>0$ is appropriately chosen, and for $t$ large enough, it holds that
\[\begin{eqnarray}\zeta(s)\ll e^{c_7(\log^{\frac{1}{4}}t(\log \log t)^{\frac{1}{2}})}.\end{eqnarray}\]

Proof. Let $L=\exp(A(\log t)^{\frac{3}{4}}(\log \log t)^{\frac{1}{2}})$, where $A$ is large enough, as in Theorem 1. We employ the previous lemma by writing
\[\begin{eqnarray}\zeta(s)&=&\sum_{n\leq L}n^{-s}+\sum_{L\leq n\leq t^{\frac{1}{5}}}n^{-s}+O(1), \quad 1-10^{-4}\leq \sigma\leq 1,\end{eqnarray}\]
and impose an additional condition $1-\eta(t)\leq \sigma\leq 1$, where $\eta(t)$ is to be determined. For the first sum, we obtain
\[\begin{eqnarray}\sum_{n\leq L}n^{-s}&\ll& 1+\int_{1}^{L}\frac{dx}{x^{\sigma}}=1+\frac{L^{1-\sigma}-1}{1-\sigma}\ll \frac{L^{\eta(t)}}{\eta(t)},\quad (1)\end{eqnarray}\]
since $x\mapsto\frac{L^{x}-1}{x}$ is increasing and $1-\sigma\leq \eta(t)$.
The second sum can be partitioned into sums over dyadic intervals (the last one may be shorter than twice the previous one), and the choice of $L$ together with Theorem 1 permits us to write
\[\begin{eqnarray}\sum_{M\leq n\leq N}n^{-s}&\ll& \exp\left((1-\sigma)\log M-B\frac{\log M}{\log^{\frac{1}{2}}t}\right)\end{eqnarray}\]
for any $M,N\geq L$ with $N\leq 2M\leq 2t^{\frac{1}{5}}.$ If $\eta(t)=\frac{c_6}{\log^{\frac{1}{2}}t}$, where $c_6=\frac{B}{2}$, say, the estimate becomes
\[\begin{eqnarray}\sum_{M\leq n\leq N}n^{-s}&\ll& \exp\left(-c_8\frac{\log L}{\log^{\frac{1}{2}}t}\right)\ll 1,\end{eqnarray}\]
and the bound for the sum over $[L,t^{\frac{1}{5}}]$ is the same multiplied by $\log t$. Hence inequality $(1)$ with $\log(a+b)\leq \log a+\log b$ ($a,b\geq e$) gives
\[\begin{eqnarray}\log \zeta(s)&\ll& Ac_8\log^{\frac{1}{4}}t (\log \log t)^{\frac{1}{2}}+\log \log t\\&\ll& \log^{\frac{1}{4}}t (\log \log t)^{\frac{1}{2}},\end{eqnarray}\]
which was to be shown. ■

Bounds for $\zeta(s)$ imply a zero-free region

We start with a couple of standard estimates, which are proved here since the proofs are short. These estimates are used in many analytic proofs of the prime number theorem, and the third statement actually serves as a quantitative formulation of the non-vanishing of $\zeta(s)$ on the line $\sigma=1$.

Lemma 6. We have the following asymptotics and inequalities:
$(i)$ $\zeta(s)\sim\frac{1}{s-1},s\to 1$,
$(ii)$ $\frac{\zeta'(s)}{\zeta(s)}\sim\frac{1}{1-s},s\to 1$,
$(iii)$ $-3\Re(\frac{\zeta'(\sigma)}{\zeta(\sigma)})-4\Re\left(\frac{\zeta'(\sigma+it)}{\zeta(\sigma+it)}\right)-\Re\left(\frac{\zeta'(\sigma+2it)}{\zeta(\sigma+2it)}\right)\geq 0,\,\,\sigma>1$.

Proof. $(i)$ For $\sigma>1$, we have
\[\begin{eqnarray}\frac{1}{\sigma-1}=\int_{1}^{\infty}x^{-\sigma}dx\leq \sum_{n=1}^{\infty}n^{-\sigma}\leq 2+\int_{2}^{\infty}x^{-\sigma}dx=2+\frac{2^{1-\sigma}}{\sigma-1}.\end{eqnarray}\]
Hence $\zeta(\sigma)\sim \frac{1}{\sigma-1}$ as $\sigma\to 1+$, and by the meromorphicity of $\zeta$, the asymptotic $\zeta(s)\sim\frac{1}{s-1}$ holds more generally when $s\to 1$ in the complex plane.

$(ii)$ The previous asymptotic allows us to write $\zeta(s)=\frac{1}{s-1}+g(s)$ with $g$ analytic in a neighborhood of $1$. Therefore $\zeta'(s)=-\frac{1}{(s-1)^2}+g'(s)\sim -\frac{1}{(s-1)^2}$ as $s\to 1$. Now the result follows using part $(i)$.

$(iii)$ To prove this inequality, we use the miraculous identity $3+4\cos \alpha+\cos2\alpha=2(1+\cos \alpha)^2\geq 0$ and the Dirichlet series $-\frac{\zeta'(s)}{\zeta(s)}=\sum_{n=1}^{\infty}\Lambda(n)n^{-s}$ to discover that
\[\begin{eqnarray}&-&3\Re\left(\frac{\zeta'(\sigma)}{\zeta(\sigma)}\right)-4\Re\left(\frac{\zeta'(\sigma+it)}{\zeta(\sigma+it)}\right)-\Re\left(\frac{\zeta'(\sigma+2it)}{\zeta(\sigma+2it)}\right)\\&=&\sum_{n=1}^{\infty}\frac{\Lambda(n)}{n^{\sigma}}\left(3+4\cos (t\log n)+\cos(2t\log n)\right)\geq 0,\end{eqnarray}\]
which was to be shown. ■
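The identity $3+4\cos\alpha+\cos 2\alpha=2(1+\cos\alpha)^2$ behind part $(iii)$ is easy to verify numerically on a grid:

```python
import math

grid = [j * 0.01 for j in range(629)]   # covers [0, 2*pi]
vals = [3 + 4 * math.cos(a) + math.cos(2 * a) for a in grid]
max_dev = max(abs(v - 2 * (1 + math.cos(a)) ** 2) for v, a in zip(vals, grid))
min_val = min(vals)

print(max_dev, min_val)
```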

Part $(iii)$ of the above lemma makes precise the fact that $\frac{\zeta'(s)}{\zeta(s)}$ cannot grow too fast near the line $\sigma=1$ (a very weak version of this assertion is of course that $\frac{\zeta'(s)}{\zeta(s)}$ is finite on that line). The following lemma of Landau tells us essentially that if $\zeta$ had zeros too close to the line $\sigma=1$, the function $\frac{\zeta'(s)}{\zeta(s)}$ would grow too fast in comparison with part $(iii)$ of Lemma 6.

Lemma 7 (Landau). Let $f:B(s_0,r)\to \mathbb{C}$ be analytic, suppose that $|f(s)|< e^M|f(s_0)|$ on $B(s_0,r)$, and that $f$ has no zeros in the right half ($\Re(s)>\Re(s_0)$) of $B(s_0,r)$. Then
\[\begin{eqnarray}-\Re\left(\frac{f'(s_0)}{f(s_0)}\right)<\frac{4M}{r},\end{eqnarray}\]
and if $f$ additionally has a zero $\rho_0$ on the segment $(s_0-\frac{1}{2}r,s_0)$, then
\[\begin{eqnarray}-\Re\left(\frac{f'(s_0)}{f(s_0)}\right)<\frac{4M}{r}-\frac{1}{|s_0-\rho_0|}.\end{eqnarray}\]
Proof. The statement is scaling invariant, so without loss of generality $f(s_0)=1$. Moreover, we can assume $s_0=0$ by considering $f(s+s_0)$. Let
\[\begin{eqnarray}g(s)=\frac{f(s)}{\prod_{\rho}\left(1-\frac{s}{\rho}\right)},\end{eqnarray}\]
the product being over the zeros of $f$ inside $B(0,r)$, counted with multiplicities. Their number is finite, since it is well-known that the zeros of a nonconstant analytic function cannot accumulate at a point. Taking the logarithmic derivative, we see that
\[\begin{eqnarray}\frac{g'(0)}{g(0)}=\frac{f'(0)}{f(0)}+\sum_{\rho}\frac{1}{\rho}\end{eqnarray}\]
with the same condition on the sum as there was for the product. By assumption, we have $\Re(\frac{1}{\rho})\leq 0$ and trivially $\Re\left(\frac{1}{\rho}\right)\leq |\frac{1}{\rho}|$, so both claims follow by taking the real part if we show $|\frac{g'(0)}{g(0)}|<\frac{4M}{r}$. We know that $g$ has no zeros in $B(0,r)$, so basic complex analysis tells us that $g(s)=e^{G(s)}$ there, where $G$ is also analytic and $G(0)=0$. Our condition becomes $\Re(G(s))<M$. If $G(s)=H_1(s)+iH_2(s)$ with $H_1,H_2$ real, then Cauchy's integral formula applied to $M-G(s)$ on the circle $|s|=\frac{r}{2}$ gives
\[\begin{eqnarray}-G'(0)&=&\frac{1}{2\pi i}\int_{|s|=\frac{r}{2}}\frac{M-G(s)}{s^2}ds\\&=&\frac{1}{\pi r}\int_{0}^{2\pi}\left(M-G\left(\frac{r}{2}e^{i\theta}\right)\right)e^{-i\theta}d\theta\\&=&\frac{2}{\pi r}\int_{0}^{2\pi}\left(M-H_1\left(\frac{r}{2}e^{i\theta}\right)\right)e^{-i\theta}d\theta,\end{eqnarray}\]
due to
\[\begin{eqnarray}0=\int_{0}^{2\pi}\left(M-H_1\left(\frac{r}{2}e^{i\theta}\right)+iH_2\left(\frac{r}{2}e^{i\theta}\right)\right)e^{-i\theta}d\theta,\end{eqnarray}\]
which follows by taking the complex conjugate of Cauchy's theorem $\int_{|s|=\frac{r}{2}}(M-G(s))\,ds=0$ in parametrized form. Hence from $M-H_1\left(\frac{r}{2}e^{i\theta}\right)\geq 0$ we infer
\[\begin{eqnarray}|G'(0)|&\leq& \frac{2}{\pi r}\int_{0}^{2\pi}\left|M-H_1\left(\frac{r}{2}e^{i\theta}\right)\right|d\theta\\&=&\frac{2}{\pi r}\int_{0}^{2\pi}\left(M-H_1\left(\frac{r}{2}e^{i\theta}\right)\right)d\theta\\&=&\frac{4(M-H_1(0))}{r}\\&=& \frac{4M}{r},\end{eqnarray}\]
where the third line uses the mean value property of the harmonic function $H_1$.
We have $\frac{g'(0)}{g(0)}=G'(0)$, so the proof is complete. ■
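The key integral identity of the proof, $-G'(0)=\frac{2}{\pi r}\int_{0}^{2\pi}(M-H_1(\frac{r}{2}e^{i\theta}))e^{-i\theta}d\theta$, can be verified numerically for a sample analytic function; the equally spaced discrete mean below is exact for trigonometric polynomials, so the agreement is to machine precision:

```python
import cmath
import math

def G(s):
    # a sample analytic function with G(0) = 0 and G'(0) = 2
    return 2 * s + (1 + 1j) * s ** 2

M_bound, r, N = 5.0, 1.0, 64   # any M_bound works: its contribution integrates to 0

total = 0
for j in range(N):
    theta = 2 * math.pi * j / N
    s = (r / 2) * cmath.exp(1j * theta)
    total += (M_bound - G(s).real) * cmath.exp(-1j * theta) * (2 * math.pi / N)
value = 2 / (math.pi * r) * total   # should equal -G'(0) = -2

print(value)
```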

Applying the preceding lemma requires a bound for $\zeta(s)$, but we have already established one. Next we show that any reasonable bound for $\zeta(s)$ near $\sigma=1$ excludes zeros of the zeta function too close to that line.

Theorem 8 (Landau). Let $\varphi$ be an increasing and $\eta$ a decreasing function, both defined for $t\geq t_0$, such that
\[\begin{eqnarray}\zeta(s)\ll e^{\varphi(t)},\quad 1-\eta(t)\leq \sigma\leq 2,\,\,t\geq t_0.\end{eqnarray}\]
Further, let $0<\eta(t)\leq 1, 0\leq \varphi(t)\to \infty$ as $t\to \infty$, and $\frac{\varphi(t)}{\eta(t)}=o(e^{\varphi(t)})$. With these assumptions,
\[\begin{eqnarray}\zeta(s)\neq 0\quad \text{for}\,\, \sigma\geq 1-C\frac{\eta(2t+1)}{\varphi(2t+1)},\,\,t\geq t_1\end{eqnarray}\]
for some $C>0$.

Proof. Suppose for a contradiction that $\zeta(\beta+i\gamma)=0$ and that $\beta+i\gamma$ belongs to the claimed zero-free region with $\gamma>0$ (in particular, $\gamma$ is assumed to be large enough). Set $s_0=\sigma_0+i\gamma$, $s_1=\sigma_0+2i\gamma$ for some $\sigma_0\in (1,2)$ to be determined. We consider the ball $B(s_0,r)$, $r=\eta(2\gamma+1)$. By assumption, this ball lies in the region where the bound $\zeta(s)\ll e^{\varphi(2\gamma+1)}$ is available (for $\sigma>2$ the function $\zeta$ is in any case bounded), and its right half contains no zeros of $\zeta$, because $\Re(s)>1$ there. By the Euler product and Lemma 6, $|\zeta(s_0)|\geq \frac{\zeta(2\sigma_0)}{\zeta(\sigma_0)}\gg \sigma_0-1$, so for $\sigma_0-1\geq e^{-\varphi(2\gamma+1)}$ we obtain the bound
\[\begin{eqnarray}\left|\frac{\zeta(s)}{\zeta(s_0)}\right|\ll \frac{e^{\varphi(2\gamma+1)}}{\sigma_0-1}<e^{c_{10}\varphi(2\gamma+1)},\quad |s-s_0|\leq r.\end{eqnarray}\]
Hence from Lemma 7 it follows that
\[\begin{eqnarray}-\Re\left(\frac{\zeta'(s_0)}{\zeta(s_0)}\right)<\frac{c_{11}\varphi(2\gamma+1)}{\eta(2\gamma+1)}-\frac{1}{\sigma_0-\beta}\end{eqnarray}\]
if $\beta>\sigma_0-\frac{1}{2}r$. Also by Lemma 7, applied in the ball $B(s_1,r)$,
\[\begin{eqnarray}-\Re\left(\frac{\zeta'(s_1)}{\zeta(s_1)}\right)<\frac{c_{11}\varphi(2\gamma+1)}{\eta(2\gamma+1)}.\end{eqnarray}\]
Now the estimates of Lemma 6 give, for $\sigma_0>1$ close enough to $1$,
\[\begin{eqnarray}0&\leq& -3\frac{\zeta'(\sigma_0)}{\zeta(\sigma_0)}-4\Re\left(\frac{\zeta'(s_0)}{\zeta(s_0)}\right)-\Re\left(\frac{\zeta'(s_1)}{\zeta(s_1)}\right)\\&\leq&\frac{3\cdot 1.1}{\sigma_0-1}+\frac{5c_{11} \varphi(2\gamma+1)}{\eta(2\gamma+1)}-\frac{4}{\sigma_0-\beta},\end{eqnarray}\]
so that
\[\begin{eqnarray}1-\beta&\geq& \left(\frac{\frac{3.3}{4}}{\sigma_0-1}+\frac{5c_{11} \varphi(2\gamma+1)}{4\eta(2\gamma+1)}\right)^{-1}-(\sigma_0-1)\\&=&\frac{1-\frac{3.3}{4}-(\sigma_0-1)\cdot \frac{5c_{11}\varphi(2\gamma+1)}{4\eta(2\gamma+1)}}{\frac{3.3}{4(\sigma_0-1)}+\frac{5c_{11}\varphi(2\gamma+1)}{4\eta(2\gamma+1)}}.\end{eqnarray}\]
Taking $\sigma_0=1+\frac{c_{12}\eta(2\gamma+1)}{\varphi(2\gamma+1)}$ for a small enough $c_{12}>0$, the earlier requirement $\sigma_0-1\geq e^{-\varphi(2\gamma+1)}$ is fulfilled for large enough $\gamma$, since $\frac{\varphi(t)}{\eta(t)}=o(e^{\varphi(t)})$ by assumption. Now for large values of $\gamma$,
\[\begin{eqnarray}1-\beta&\geq& \frac{\frac{1}{100}}{\left(\frac{3.3}{4c_{12}}+\frac{5c_{11}}{4}\right)\frac{\varphi(2\gamma+1)}{\eta(2\gamma+1)}},\end{eqnarray}\]
so $\beta+i\gamma$ actually lies outside the region where it was supposed to lie, provided that the constant $C$ in the theorem is chosen small enough. This is a contradiction, so it remains to consider the case $\sigma_0-\beta\geq \frac{r}{2}$. But then $1-\beta\geq \frac{1}{2}\eta(2\gamma+1)-\frac{c_{12}\eta(2\gamma+1)}{\varphi(2\gamma+1)}\geq \frac{1}{4}\eta(2\gamma+1)$ for large $\gamma$ (as $\varphi(t)\to\infty$), which again contradicts $1-\beta\leq C\frac{\eta(2\gamma+1)}{\varphi(2\gamma+1)}$.■

We finally obtain the desired zero-free region for $\zeta(s)$.

Theorem 9. We have
\[\begin{eqnarray}\zeta(s)\neq 0\quad \text{for}\,\,\, \sigma\geq 1-\frac{c}{\log^{\frac{3}{4}}t(\log \log t)^{\frac{1}{2}}}\end{eqnarray}\]
for some $c>0$ and all large enough $t$.

Proof. Take $\eta(t)=\frac{c_6}{\log^{\frac{1}{2}}t}$ and $\varphi(t)=c_7\log^{\frac{1}{4}}t(\log \log t)^{\frac{1}{2}}$; the claim follows by combining Theorems 5 and 8. ■

We are now in a position to bound the error term in the prime number theorem. We consider Chebyshev's prime counting function
\[\begin{eqnarray}\psi(x):=\sum_{p^{\alpha}\leq x}\log p,\end{eqnarray}\]
(the sum is over all prime powers $p^{\alpha}$ up to $x$) instead of $\pi(x)$, since its behavior is much nicer than that of $\pi(x)$ from an analytic point of view. It is well-known that bounding $|\psi(x)-x|$ from above is essentially equivalent to bounding $|\pi(x)-Li(x)|$, but we state this as a lemma.

Lemma 10. Suppose $\psi(x)=x+O(E(x))$, where $E(x)\gg \sqrt{x}$. Then $\pi(x)=Li(x)+O\left(\frac{E(x)}{\log x}\right)$.

Proof. We define
\[\begin{eqnarray}\theta(x):=\sum_{p\leq x}\log p,\end{eqnarray}\]
and notice that
\[\begin{eqnarray}|\psi(x)-\theta(x)|&\leq& \sum_{p\leq \sqrt{x}}\log p+\sum_{p\leq x^{\frac{1}{3}}}\log p+...\\&\ll& \sqrt{x}+x^{\frac{1}{3}}\log x\ll \sqrt{x},\end{eqnarray}\]
so we also have $\theta(x)=x+O(E(x)).$ Now partial summation yields
\[\begin{eqnarray}\pi(x)&=&\sum_{p\leq x}(\log p)\frac{1}{\log p}\\&=&\frac{\theta(x)}{\log x}+\int_{2}^{x}\frac{\theta(t)}{t\log^2 t}dt\\&=&\frac{x}{\log x}+\int_{2}^{x}\frac{dt}{\log^2 t}+O\left(\frac{E(x)}{\log x}\right)\\&=&Li(x)+O\left(\frac{E(x)}{\log x}\right),\end{eqnarray}\]
where the last line follows by partial integration of $Li(x)$. ■
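The partial summation identity $\pi(x)=\frac{\theta(x)}{\log x}+\int_{2}^{x}\frac{\theta(t)}{t\log^2 t}dt$ underlying this computation can be verified exactly for small $x$, since $\theta$ is constant between consecutive primes and an antiderivative of $\frac{1}{t\log^2 t}$ is $-\frac{1}{\log t}$:

```python
import math

X = 10_000

# sieve of Eratosthenes up to X
is_prime = [True] * (X + 1)
is_prime[0] = is_prime[1] = False
for p in range(2, int(X ** 0.5) + 1):
    if is_prime[p]:
        for m in range(p * p, X + 1, p):
            is_prime[m] = False
primes = [p for p in range(2, X + 1) if is_prime[p]]

# pi(X) = theta(X)/log X + integral over [2, X] of theta(t)/(t log^2 t) dt;
# theta is constant between consecutive primes, so the integral telescopes exactly
theta, integral = 0.0, 0.0
for p, q in zip(primes, primes[1:] + [X]):
    theta += math.log(p)
    integral += theta * (1 / math.log(p) - 1 / math.log(q))
pi_reconstructed = theta / math.log(X) + integral

print(pi_reconstructed, len(primes))
```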

The lemma above implies that the following theorem finishes the proof of the error term for the prime number theorem.

Theorem 11. We have $\psi(x)=x+O(x\exp(-c'(\log x)^{\frac{4}{7}}(\log \log x)^{-\frac{2}{7}}))$ for some $c'>0$; in particular, $\psi(x)=x+O(x\exp(-(\log x)^{\frac{4}{7}-\varepsilon}))$ for any $\varepsilon>0$.

Proof. We use the well-known explicit formula
\[\begin{eqnarray}\psi(x)=x-\sum_{|\Im(\rho)|\leq T}\frac{x^{\rho}}{\rho}+O\left(\frac{x\log^2 x}{T}\right)\end{eqnarray}\]
for $1\leq T\leq x$ (for a proof, see for example Davenport's Multiplicative Number Theory), where $\rho=\beta+i\gamma$ runs over the zeros of $\zeta(s)$. If we define $\eta(t)=\frac{c}{(\log^{\frac{3}{4}}t)(\log \log t)^{\frac{1}{2}}}$, then $\Re(\rho)\leq 1-\eta(T)$ for every zero occurring in the sum, and therefore
\[\begin{eqnarray}|\psi(x)-x|&\ll& \sum_{|\gamma|\leq T}\frac{x^{1-\eta(T)}}{|\gamma|}+\frac{x\log^2 x}{T}\\&\ll& x^{1-\eta(T)}\log^2 T+\frac{x\log^2 x}{T}\\&\ll& \log^2 x\left(x^{1-\eta(T)}+\frac{x}{T}\right)\end{eqnarray}\]
by the standard estimate
\[\begin{eqnarray}\sum_{|\gamma|\leq T}\frac{1}{|\gamma|}\ll \log^2 T\end{eqnarray}\]
for the reciprocals of the zeros of the zeta function (which can also be found in Davenport's book). An optimal choice of $T$ is given by the equation $x^{1-\eta(T)}=\frac{x}{T}$, which reduces to $\eta(T)\log x=\log T$. Plugging in the definition of $\eta(T)$, we see that $T=\exp\left(c_{13}(\log x)^{\frac{4}{7}}(\log \log x)^{-\frac{2}{7}}\right)$ is essentially the optimal value, and then the error term is $\ll x\exp\left(-c_{14}(\log x)^{\frac{4}{7}}(\log \log x)^{-\frac{2}{7}}\right)$ (the factors $\log^2 x$ are absorbed by decreasing $c_{14}$ slightly), which is the claimed bound. ■
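As a closing sanity check, the balancing equation $x^{1-\eta(T)}=\frac{x}{T}$, i.e. $\eta(T)\log x=\log T$, can be solved numerically by bisection (with the illustrative values $c=1$ and $\log x=10^6$); the solution is indeed of rough size $(\log x)^{\frac{4}{7}}$:

```python
import math

c, log_x = 1.0, 1e6   # illustrative values; we work with log x directly

def eta(log_t):
    # eta(t) = c / (log^(3/4) t * (log log t)^(1/2))
    return c / (log_t ** 0.75 * math.log(log_t) ** 0.5)

# solve eta(T) * log x = log T for log T by bisection
lo, hi = 2.0, log_x
for _ in range(200):
    mid = (lo + hi) / 2
    if eta(mid) * log_x > mid:
        lo = mid
    else:
        hi = mid
log_T = (lo + hi) / 2

print(log_T, log_x ** (4 / 7))
```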