*The Theory of the Riemann Zeta Function* and the book

*Van der Corput's Method of Exponential Sums* by S. W. Graham and G. Kolesnik. The bounds depending on the derivatives enable us to estimate long zeta sums and hence complete the proof of the error term in the prime number theorem from the previous post. As a byproduct, we also get the Hardy-Littlewood bound

\[\begin{eqnarray}\zeta\left(\frac{1}{2}+it\right)\ll t^{\frac{1}{6}}\log t,\end{eqnarray}\]

for the zeta function on the critical line, which will be utilized in the next post in proving Ingham's theorem on primes in short intervals. Two more consequences of the Weyl differencing method, which will be applied to bounding exponential sums based on their derivatives, are the van der Corput inequality and a related equidistribution test. This test easily gives the uniform distribution of the fractional parts of polynomials, as we shall see.

**First, second and third derivative tests**
We want to estimate an exponential sum

\[\begin{eqnarray}\sum_{M\leq n\leq N}e(F(n))\end{eqnarray}\]

with $N\leq 2M$, given information on the derivatives of the phase $F$. It turns out that the best bounds are obtained when the length of the sum is comparable to $\lambda^{-1}$, where $\lambda$ is chosen in such a way that $F'$, $F''$ or $F^{(3)}$ is of order $\lambda$, in which case the upper bound is comparable to a negative power of $\lambda$. The first derivative test is used to prove the second derivative test, and the second derivative test is used for the third derivative test, but none of them makes the others obsolete, as they are useful in different ranges of $M$. We start with a test for the first derivative, due to Kusmin and Landau.

**Lemma (First derivative test).** Let $F$ be a continuously differentiable function on $[a,b]$ such that $F'$ is monotonic and $\|F'(x)\|\geq \lambda$ on the same interval. We have

\[\begin{eqnarray}\sum_{a\leq n\leq b}e(F(n))\ll \lambda^{-1},\end{eqnarray}\]

where the implied constant is absolute.

**Proof.** Changing the sign of $F$, if necessary, we can say that $F'$ is increasing. Moreover, considering the function $F(x)+mx$ for a suitable integer $m$, we may assume that $\lambda\leq F'(x)\leq 1-\lambda$. The key to the proof is the identity

\[\begin{eqnarray}e(F(n))=\frac{e(F(n))-e(F(n+1))}{1-e(g(n))}=A_n(e(F(n))-e(F(n+1))),\end{eqnarray}\]

where $g(x)=F(x+1)-F(x)$ and $A_n=\frac{1+i\cot(\pi g(n))}{2}$. By partial summation,

\[\begin{eqnarray}&&\sum_{a\leq n\leq b}e(F(n))\\&=&\sum_{a+1\leq n\leq b-1}e(F(n))(A_n-A_{n-1})+e(F(a+1))A_{a+1}+e(F(b))(1-A_{b-1}),\end{eqnarray}\]

so that

\[\begin{eqnarray}&&\left|\sum_{a+1\leq n\leq b-1}e(F(n))\right|\\&\leq& \sum_{a+1\leq n\leq b-1}\frac{1}{2}|\cot(\pi g(n))-\cot(\pi g(n-1))|+|A_{a+1}|+|1-A_{b-1}|.\end{eqnarray}\]

The function $g$ is increasing, because $g'(x)=F'(x+1)-F'(x)>0$, and also $g(n)=F'(\xi_n)\in [\lambda,1-\lambda]$ for some $\xi_n\in [a,b]$ by the mean value theorem. Hence $\cot(\pi g(n))\leq\cot(\pi g(n-1))$ and we can evaluate a telescopic sum:

\[\begin{eqnarray}&&\left|\sum_{a+1\leq n\leq b-1}e(F(n))\right|\\&\leq& \frac{\cot(\pi g(a+1))-\cot(\pi g(b-1))}{2}+1+|\cot(\pi g(a+1))|+|\cot(\pi g(b-1))|\ll \lambda^{-1},\end{eqnarray}\]

because $\cot(\pi x)\ll \frac{1}{\|x\|}$ and $1\leq \lambda^{-1}$. In fact, we could take for instance $5$ as the constant in the inequality. ■
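To see the lemma in action numerically, here is a small sketch (an illustration only, with a hypothetical linear phase $F(x)=\theta x$, $\theta=0.05$, so that $\|F'(x)\|=\lambda=0.05$ on every interval): the sum stays bounded by a constant times $\lambda^{-1}=20$ no matter how long it is.

```python
import cmath
import math

def e(x):
    # The standard additive character e(x) = exp(2*pi*i*x)
    return cmath.exp(2j * math.pi * x)

# Illustrative linear phase F(x) = theta*x with theta = 0.05, so that
# ||F'(x)|| = 0.05 = lambda everywhere; the first derivative test then
# predicts |sum| << 1/lambda = 20 uniformly in the length of the sum.
theta = 0.05
for length in (10**2, 10**4, 10**6):
    s = abs(sum(e(theta * n) for n in range(length)))
    print(length, s)
```

For a linear phase the sum is geometric, so the boundedness can also be checked by hand; the point of the lemma is that the same phenomenon persists for nonlinear phases with monotonic derivative.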

We can now proceed to the second derivative test.

**Theorem (Second derivative test).** Let $F$ be a real-valued, twice continuously differentiable function on an interval $I$ with $\lambda\leq |F''(x)|\leq\alpha \lambda$. Then

\[\begin{eqnarray}\sum_{n\in I}e(F(n))\ll \alpha|I|\lambda^{\frac{1}{2}}+\lambda^{-\frac{1}{2}},\end{eqnarray}\]

where the implied constant is absolute.

**Proof.** By assumption, $F''$ has constant sign, implying that $F'$ is monotonic. We use an optimal combination of the first derivative test with the trivial estimate. Fix a number $0<\delta<\frac{1}{2}$, which is to be optimized. For any interval $J_i\subset I$ such that $\|F'(x)\|\geq \delta$ for $x\in J_i$, we have

\[\begin{eqnarray}\left|\sum_{n\in J_i}e(F(n))\right|\leq c_1\delta^{-1},\end{eqnarray}\]

while for any other interval $J'_i\subset I$ we estimate

\[\begin{eqnarray}\left|\sum_{n\in J'_i}e(F(n))\right|\leq |J'_i|\leq \frac{2\delta}{\lambda},\end{eqnarray}\]

because if $J'_i=[a,b]$ is any such interval, then $2\delta\geq F'(b)-F'(a)\geq |J'_i|\lambda$ by the mean value theorem.

The whole interval $I$ can be partitioned into intervals $J_i$ and $J_i'$ in such a way that there are at most $\alpha |I| \lambda+2$ intervals $J_i$ and at most $\alpha|I|\lambda+1$ intervals $J_i'$. This is so because $|F'(x)-F'(y)|\leq \alpha |I|\lambda$ for any $x,y\in I$. We conclude

\[\begin{eqnarray}\left|\sum_{n\in I}e(F(n))\right|\leq \alpha |I| \lambda\left(c_1\delta^{-1}+\frac{2\delta}{\lambda}\right)+2c_1\delta^{-1}+\frac{2\delta}{\lambda}.\end{eqnarray}\]

The choice $\delta=\lambda^{\frac{1}{2}}$ is optimal up to a constant factor, and gives

\[\begin{eqnarray}\left|\sum_{n\in I}e(F(n))\right|\ll \alpha|I|\lambda^{\frac{1}{2}}+\lambda^{-\frac{1}{2}},\end{eqnarray}\]

so the theorem is proved. ■
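As a numerical sanity check (an illustration only, with the hypothetical quadratic phase $F(x)=\lambda x^2/2$, so that $F''\equiv\lambda$ and $\alpha=1$), the computed sum indeed stays below a small multiple of $|I|\lambda^{\frac{1}{2}}+\lambda^{-\frac{1}{2}}$:

```python
import cmath
import math

def e(x):
    # e(x) = exp(2*pi*i*x)
    return cmath.exp(2j * math.pi * x)

# Quadratic phase F(x) = lam*x^2/2 with F''(x) = lam exactly, so alpha = 1.
# The second derivative test predicts |sum| << N*sqrt(lam) + 1/sqrt(lam).
lam = 1e-4
N = 10**4
s = abs(sum(e(lam * n * n / 2) for n in range(N)))
bound = N * math.sqrt(lam) + 1 / math.sqrt(lam)
print(s, bound)
```

Here the length $N=10^4$ is comparable to $\lambda^{-1}$, which is the regime where the theorem is sharpest.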

In order to derive a third derivative test, we need more tools. It was already seen in proving Vinogradov's estimate for exponential sums with slowly growing phase, and in bounding Weyl sums in earlier posts, that it is often easier to bound an average of exponential sums involving the differenced function $F(x)-F(x-h)$ as the phase function, compared to a direct estimation of the exponential sum with phase $F$. This is so because the differenced phase should grow more slowly. The following inequality of van der Corput offers a general device for bounding exponential sums by their differenced versions. We remark that van der Corput also has many other lemmas on exponential sums and integrals, including his processes A and B for transforming sums into a different (hopefully easier) form. The following theorem is in fact van der Corput's process A.

**Theorem (Van der Corput's inequality).** Let $f$ be any complex-valued function supported on an interval $I$ of real numbers. Then for any parameter $0<H\leq |I|$, we have

\[\begin{eqnarray}\left|\sum_{n\in I}f(n)\right|^2\leq \frac{2|I|}{H}\sum_{|h|<H}\left(1-\frac{|h|}{H}\right)\left|\sum_{n\in I} f(n)\overline{f(n-h)}\right|.\end{eqnarray}\]

**Proof.** We assume first that $H$ is an integer. We have

\[\begin{eqnarray}H\sum_{n}f(n)=\sum_{k=1}^H \sum_{n}f(n+k)=\sum_n \sum_{k=1}^H f(n+k).\end{eqnarray}\]

The outer sum over $n$ in the rightmost expression has at most $|I|+H$ nonzero terms, so the Cauchy-Schwarz inequality shows that

\[\begin{eqnarray}\left|\sum_n f(n)\right|^2&\leq& \frac{|I|+H}{H^2}\sum_{n}\left|\sum_{k=1}^{H}f(n+k)\right|^2\\&=&\frac{|I|+H}{H^2}\sum_{n}\sum_{k=1}^{H}\sum_{\ell=1}^{H}f(n+k)\overline{f(n+\ell)}.\end{eqnarray}\]

We make the change of variables $m=n+k$, $h=\ell-k$, giving $|h|<H$, and observe that the last expression equals

\[\begin{eqnarray}&&\frac{|I|+H}{H^2}\sum_{m}\sum_{|h|<H}(H-|h|)f(m)\overline{f(m-h)}\\&=&\frac{|I|+H}{H}\sum_{|h|<H}\sum_{m}\left(1-\frac{|h|}{H}\right)f(m)\overline{f(m-h)},\end{eqnarray}\]

and now the statement follows from the assumption $H\leq |I|$ and the triangle inequality.

Now consider the case where $H$ is not an integer. Setting $H'=\lfloor H\rfloor+1$ and applying the inequality with the parameter $H'$, we obtain

\[\begin{eqnarray}\left|\sum_n f(n)\right|^2&\leq& \frac{2|I|}{H'}\sum_{|h|<H'}\left(1-\frac{|h|}{H'}\right)\left|\sum_n f(n)\overline{f(n-h)}\right|\\&\leq& \frac{2|I|}{H}\sum_{|h|<H}\left(1-\frac{|h|}{H}\right)\left|\sum_n f(n)\overline{f(n-h)}\right|\end{eqnarray}\]

because $H\leq H'\leq H+1$. ■
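Since the inequality holds for arbitrary complex-valued $f$, it is easy to check numerically; the following sketch verifies it for random unimodular values and several (integer, for simplicity) choices of $H$:

```python
import cmath
import math
import random

random.seed(1)
N = 200  # the interval I = {0, 1, ..., N-1}, so |I| = N
f = [cmath.exp(2j * math.pi * random.random()) for _ in range(N)]

def F(n):
    # f extended by zero outside its supporting interval
    return f[n] if 0 <= n < N else 0

lhs = abs(sum(f)) ** 2
for H in (1, 7, 50, 200):  # any 0 < H <= |I| is allowed
    rhs = (2 * N / H) * sum(
        (1 - abs(h) / H)
        * abs(sum(F(n) * F(n - h).conjugate() for n in range(N)))
        for h in range(-H + 1, H)
    )
    print(H, lhs <= rhs + 1e-9)
```

For $H=1$ the right-hand side reduces to $2|I|\sum_n |f(n)|^2$, which dominates $|\sum_n f(n)|^2$ by Cauchy-Schwarz alone; larger $H$ trades this slack for the averaged differenced sums.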

Van der Corput's inequality has many consequences, which we leave to the end of the post. At this point, we proceed to the third derivative test.

**Theorem (Third derivative test).** Let $F$ be a three times continuously differentiable real-valued function on an interval $I$. Suppose $\lambda\leq|F^{(3)}(x)|\leq \alpha \lambda$ for all $x\in I$ and some $\alpha$ and $\lambda$. Then

\[\begin{eqnarray}\sum_{n\in I}e(F(n))\ll |I|\lambda^{\frac{1}{6}}\alpha^{\frac{1}{2}}+|I|^{\frac{1}{2}}\lambda^{-\frac{1}{6}},\end{eqnarray}\]

the implied constant being absolute.

**Proof.** We may assume that $F^{(3)}(x)\geq 0$ by conjugating the sum if necessary. We utilize van der Corput's inequality to write

\[\begin{eqnarray}\left|\sum_{n\in I}e(F(n))\right|^2\ll \frac{|I|^2}{H}+\frac{|I|}{H}\sum_{0<h<H}\left(1-\frac{|h|}{H}\right)\left|\sum_n e(F(n)-F(n-h))\right|;\end{eqnarray}\]

the sum is only over positive values of $h$ because the terms corresponding to $h$ and $-h$ are equal. Set $g(x)=F(x)-F(x-h)$; by the mean value theorem we have $g''(x)=F''(x)-F''(x-h)=hF^{(3)}(\xi_x)\in [h\lambda,h\alpha\lambda]$ for some $\xi_x\in (x-h,x)$. Therefore the second derivative test gives

\[\begin{eqnarray}\left|\sum_{n\in I}e(F(n))\right|^2&\ll& \frac{|I|^2}{H}+\frac{|I|}{H}\sum_{0<h<H}\alpha (|I|(h\lambda)^{\frac{1}{2}}+(h\lambda)^{-\frac{1}{2}})\\&\ll& |I|^2H^{-1}+|I|^2H^{\frac{1}{2}}\alpha\lambda^{\frac{1}{2}}+|I|H^{-\frac{1}{2}}\lambda^{-\frac{1}{2}}.\end{eqnarray}\]

The first term is rather harmless, as it is always smaller than the third term, provided $\lambda\leq 1$ (in the case $\lambda>1$, the third derivative test loses to the trivial bound). Therefore we choose $H=\lambda^{\gamma}$, where $\gamma$ is chosen so that the last two terms have the same power of $\lambda$; this happens for $H=\lambda^{-\frac{1}{3}}$ (we assume for now that $\lambda^{-\frac{1}{3}}\leq |I|$). We deduce

\[\begin{eqnarray}\left|\sum_{n\in I}e(F(n))\right|^2&\ll& |I|\lambda^{-\frac{1}{3}}+\alpha|I|^2\lambda^{\frac{1}{3}}.\end{eqnarray}\]

Taking square roots, we get the desired bound if $\lambda^{-\frac{1}{3}}\leq |I|$. In the opposite case, the trivial estimate is stronger than the inequality we are proving, since then $|I|\ll |I|^{\frac{1}{2}}\lambda^{-\frac{1}{6}}$. ■
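Again a quick numerical illustration (a sketch only, with the hypothetical cubic phase $F(x)=\lambda x^3/6$, so that $F^{(3)}\equiv\lambda$ and $\alpha=1$), in the regime $\lambda^{-\frac{1}{3}}\leq |I|$ where the theorem is nontrivial:

```python
import cmath
import math

def e(x):
    # e(x) = exp(2*pi*i*x)
    return cmath.exp(2j * math.pi * x)

# Cubic phase F(x) = lam*x^3/6 with F'''(x) = lam exactly, so alpha = 1.
# The third derivative test predicts
#   |sum| << N*lam^(1/6) + sqrt(N)*lam^(-1/6).
lam = 1e-6
N = 10**3  # note lam^(-1/3) = 100 <= N, so the bound is nontrivial
s = abs(sum(e(lam * n**3 / 6) for n in range(N)))
bound = N * lam ** (1 / 6) + math.sqrt(N) * lam ** (-1 / 6)
print(s, bound)
```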

**Applications to the $\zeta$ function**
We are finally in a position to bound long zeta sums via the following theorem.

**Theorem.** The estimate

\[\begin{eqnarray}\sum_{M\leq n\leq 2M}n^{-it}\ll M^{1-10^{-4}}\end{eqnarray}\]

holds for any $M\in [t^{\frac{1}{5}},t^2]$, with an absolute implied constant. As was noted in the previous post, this implies

\[\begin{eqnarray}\zeta(s)=\sum_{n\leq t^{\frac{1}{5}}}n^{-s}+O(1),\end{eqnarray}\]

uniformly for $\sigma \in [1-10^{-4},2]$.

**Proof.** We consider various ranges of $M$ separately, as different derivatives have a suitable size in these intervals.

(i) $t^{\frac{1}{3}+\delta}\leq M\leq t^2$,

(ii) $t^{\frac{1}{3}-\delta}\leq M\leq t^{\frac{1}{3}+\delta}$,

(iii) $t^{\frac{1}{5}}\leq M\leq t^{\frac{1}{3}-\delta}$.

Here $\delta=\frac{1}{100}$.

(i) In this case, we apply the third derivative test, the function being $F(x)=-\frac{t\log x}{2\pi},$ of course. We have $F^{(3)}(x)=\frac{-t}{\pi x^3}$, so that the third derivative test produces the estimate

\[\begin{eqnarray}\sum_{M\leq n\leq N}n^{-it}\ll M\left(\frac{t}{M^3}\right)^{\frac{1}{6}}+M^{\frac{1}{2}}\left(\frac{t}{M^3}\right)^{-\frac{1}{6}}=M^{\frac{1}{2}}t^{\frac{1}{6}}+Mt^{-\frac{1}{6}}\ll M^{\frac{1}{2}+\frac{1}{2+6\delta}}.\end{eqnarray}\]

As a side note, we mentioned earlier that the third derivative test does not make the first and second derivative tests useless, and indeed, we would have obtained better estimates here in certain ranges with the other tests. However, we are satisfied with a saving of a very modest power of $t$. We have $\frac{1}{2}+\frac{1}{2+6\delta}<1-10^{-4}$, so in this case we are done.

(ii) In this range, we use the case $k=3$ of Vinogradov's theorem, which is Theorem 5 (ii) in this post. We have $\frac{F^{(4)}(x)}{4!}=\frac{t}{8\pi x^4}$. Divide the interval $[M,N]$ into at most $2^5$ parts of length $\gg N-M$ such that $F^{(4)}(x)$ varies by at most a factor of two on each of these intervals. It suffices to estimate the contribution of a single such interval $[a,b]\subset [M,N]$. Let $\lambda=\frac{t}{8\pi b^4}$ so that $\frac{F^{(4)}(x)}{4!}\in [\lambda,2\lambda]$ on $[a,b]$. We separate two subcases.

First let $\lambda^{-1}\leq b-a$. Then cover the interval $[a,b]$ with $\ll(N-M)\lambda$ intervals of length $\lambda^{-1}$. If $[c,d]$ is any of the covering intervals, the conditions of Vinogradov's theorem are satisfied (as the sum has length $\lambda^{-1}$), so

\[\begin{eqnarray}\sum_{c\leq n\leq d}n^{-it}\ll \lambda^{-\left(1-\frac{1}{100\cdot 3^2\log 3}\right)}.\end{eqnarray}\]

Now the sum over the interval $[a,b]$ is at most a constant times

\[\begin{eqnarray}(b-a)\lambda^{\frac{1}{100\cdot 3^2\cdot \log 3}}\ll M\cdot \left(\frac{t}{M^4}\right)^{\frac{1}{100\cdot 3^2\cdot \log 3}},\end{eqnarray}\]

because $[a,b]\subset [M,N]$ implies $b\asymp M$ and $\lambda\asymp \frac{t}{M^4}$. For $t^{\frac{1}{3}-\delta}\leq M$, this is

\[\begin{eqnarray}\ll M^{1-\frac{1}{2}\cdot\frac{1}{100\cdot 3^2\cdot \log 3}}< M^{1-10^{-4}}.\end{eqnarray}\]

Still in the case (ii), assume now that $\lambda^{-1}\geq b-a$ holds. Then if $\lambda^{-0.9}\leq b-a$, we may apply Theorem 5 (ii) to deduce

\[\begin{eqnarray}\sum_{a\leq n\leq b}n^{-it}\ll (b-a)^{1-\frac{1}{100\cdot 3^2\cdot\log 3}}\leq M^{1-10^{-4}}.\end{eqnarray}\]

Otherwise, we apply the trivial estimate to deduce

\[\begin{eqnarray}\sum_{a\leq n\leq b}n^{-it}\ll b-a\leq \lambda^{-0.9}\ll \left(\frac{M^4}{t}\right)^{0.9}\ll M^{1-10^{-4}}\end{eqnarray}\]

when $M\leq t^{\frac{1}{3}+\delta}$.

(iii) In the final case, we take $k=6$ in Vinogradov's theorem. Recall that in Theorem 5 (i) the condition is that the length of the sum is in the range $[\lambda^{-\frac{1}{4}}, \lambda^{-1}]$, which is milder than in Theorem 5 (ii). Again we partition the sum over $[M,N]$ into a bounded number of sums such that $|\frac{F^{(7)}(x)}{7!}|$ belongs to $[\lambda,2\lambda]$ on any such interval $[a,b]$, with $\lambda=\frac{t}{14\pi b^7}$. If $\lambda^{-1}\leq b-a$, the intervals $[a,b]$ can be covered with $\ll \lambda(N-M)$ intervals $[c,d]$ of length $\lambda^{-1}$. Therefore, similarly as before,

\[\begin{eqnarray}\sum_{a\leq n\leq b}n^{-it}\ll M\left(\frac{t}{M^7}\right)^{\frac{1}{100\cdot 6^2\log 6}}<M^{1-10^{-4}}\end{eqnarray}\]

for $t^{\frac{1}{5}}\leq M$.

We may hence assume $\lambda^{-1}\geq b-a,$ and if $\lambda^{-\frac{1}{4}}\geq b-a$ happens to hold, we get

\[\begin{eqnarray}\sum_{a\leq n\leq b}n^{-it}\ll \lambda^{-\frac{1}{4}}\ll \left(\frac{M^7}{t}\right)^{\frac{1}{4}}\ll M^{\frac{7-\frac{1}{\frac{1}{3}-\delta}}{4}}\ll M^{1-10^{-4}},\end{eqnarray}\]

since $M\leq t^{\frac{1}{3}-\delta}$. We are left with the case $b-a\in [\lambda^{-\frac{1}{4}},\lambda^{-1}]$, where Theorem 5 (i) applies:

\[\begin{eqnarray}\sum_{a\leq n\leq b}n^{-it}\ll (b-a)^{1-\frac{1}{100\cdot 6^2\cdot\log 6}}< M^{1-\frac{1}{6451}}<M^{1-10^{-4}}.\end{eqnarray}\]

This concludes the proof. ■

We present another application of the derivative tests, namely the Hardy-Littlewood bound for the Riemann zeta function on the critical line. The exponent $\frac{1}{6}$ in that bound is in fact very good, as it has been improved numerous times over the years, but only by about $0.012$ in total. The best exponent nowadays is Bourgain's $\frac{13}{84}\approx 0.1548$. What is expected to be true is the Lindelöf hypothesis, asserting that the exponent can be taken to be any positive number. One can show that the Lindelöf hypothesis would follow from the Riemann hypothesis, but the Riemann hypothesis gives even tighter bounds for the growth of $\zeta$, also outside the critical line.

**Theorem.** We have

\[\begin{eqnarray}\zeta\left(\frac{1}{2}+it\right)\ll t^{\frac{1}{6}}\log t.\end{eqnarray}\]

**Proof.** We know that

\[\begin{eqnarray}\zeta\left(\frac{1}{2}+it\right)=\sum_{n\leq t^2}n^{-\frac{1}{2}-it}+O(1).\end{eqnarray}\]

For $F(x)=-\frac{t\log x}{2\pi},$ we have $F^{(3)}(x)=\frac{-t}{\pi x^3}$, so that the third derivative test produces the estimate

\[\begin{eqnarray}\sum_{M\leq n\leq N}n^{-it}\ll M\left(\frac{t}{M^3}\right)^{\frac{1}{6}}+M^{\frac{1}{2}}\left(\frac{t}{M^3}\right)^{-\frac{1}{6}}=M^{\frac{1}{2}}t^{\frac{1}{6}}+Mt^{-\frac{1}{6}}\end{eqnarray}\]

for any $M,N$ such that $N\leq 2M$. Now partial summation (or more precisely Lemma 2 of this post) gives

\[\begin{eqnarray}\sum_{M\leq n\leq N}n^{-\frac{1}{2}-it}\ll t^{\frac{1}{6}}+M^{\frac{1}{2}}t^{-\frac{1}{6}}\ll t^{\frac{1}{6}}\end{eqnarray}\]

for $M\leq t^{\frac{2}{3}}.$ Furthermore, the second derivative test yields

\[\begin{eqnarray}\sum_{M\leq n\leq N}n^{-it}\ll t^{\frac{1}{2}}+Mt^{-\frac{1}{2}},\end{eqnarray}\]

so that partial summation gives

\[\begin{eqnarray}\sum_{M\leq n\leq N}n^{-\frac{1}{2}-it}\ll t^{\frac{1}{2}}M^{-\frac{1}{2}}+M^{\frac{1}{2}}t^{-\frac{1}{2}}\ll t^{\frac{1}{6}}\end{eqnarray}\]

for $t^{\frac{2}{3}}\leq M\leq t$. Finally, for $M\geq t$ the first derivative test gives

\[\begin{eqnarray}\sum_{M\leq n\leq N}n^{-it}\ll \frac{M}{t}\ll M^{\frac{1}{2}}\end{eqnarray}\]

for $M\leq t^2$. Applying partial summation once more, we see that these sums contribute $\ll 1$. By dividing the interval $[1,t^2]$ into dyadic blocks, we obtain

\[\begin{eqnarray}\zeta\left(\frac{1}{2}+it\right)\ll(\log t)t^{\frac{1}{6}},\end{eqnarray}\]

as wanted. ■
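For a rough numerical illustration of the theorem (not a proof of anything), one can evaluate the long Dirichlet polynomial $\sum_{n\leq t^2}n^{-\frac{1}{2}-it}$ directly for moderate $t$ and compare its size with $t^{\frac{1}{6}}\log t$:

```python
import cmath
import math

def zeta_sum(t):
    # The long zeta sum: sum_{n <= t^2} n^(-1/2 - it), which equals
    # zeta(1/2 + it) up to O(1) by the formula at the start of the proof.
    return sum(n ** -0.5 * cmath.exp(-1j * t * math.log(n))
               for n in range(1, int(t * t) + 1))

for t in (50.0, 100.0, 200.0):
    print(t, abs(zeta_sum(t)), t ** (1 / 6) * math.log(t))
```

Of course, such small values of $t$ say nothing about the asymptotic exponent; they merely show the sum staying far below the trivial bound $t^2$.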

**Equidistribution of polynomials modulo $1$**
We turn to some other corollaries of van der Corput's inequality. We see that it has immediate consequences in the theory of equidistribution.

**Theorem (Van der Corput's equidistribution test).** Let $(x_n)_{n=1}^{\infty}$ be a sequence of numbers on $[0,1]$. If the sequence $(x_{n+h}-x_n)_{n=1}^{\infty}$ is uniformly distributed on $[0,1]$ for every positive integer $h$, then $(x_n)_{n=1}^{\infty}$ is also uniformly distributed.

**Proof.** We are going to use Weyl's equidistribution theorem from this post. Fix $\varepsilon\in (0,1)$ and a positive integer $k$. Take $H=\varepsilon^{-1}$ and $f(n)=e(kx_n)\mathbf{1}_{[1,M]}(n)$ in van der Corput's inequality to see that

\[\begin{eqnarray}&&\left|\sum_{n\leq M}e(kx_n)\right|\\&\leq& \sqrt{2\left(\frac{M+\varepsilon^{-1}}{\varepsilon^{-1}}\right)}\left(\sum_{|h|<\varepsilon^{-1}}\left(1-\frac{|h|}{H}\right)\left|\sum_{h\leq n\leq M-h}e(k(x_n-x_{n-h}))\right|+\varepsilon^{-2}\right)^{\frac{1}{2}},\end{eqnarray}\]

since $f(n)\overline{f(n-h)}=e(k(x_n-x_{n-h}))$ for $h\leq n\leq M-h$ (but not for $M-h<n\leq M$). This is further dominated by

\[\begin{eqnarray}2\sqrt{2\left(\frac{M+\varepsilon^{-1}}{\varepsilon^{-1}}\right)}\left(M+\sum_{0<h<\varepsilon^{-1}}\left|\sum_{h\leq n\leq M-h}e(k(x_n-x_{n-h}))\right|+\varepsilon^{-2}\right)^{\frac{1}{2}},\end{eqnarray}\]

because the terms corresponding to $h$ and $-h$ are equal. As $\varepsilon$ and $k$ are fixed, we may choose $M$ to be so large that the sum of $e(k(x_n-x_{n-h}))$ is bounded by $M\varepsilon^2$ for each $h\in [1,\varepsilon^{-1}]$. Hence, for large enough $M$,

\[\begin{eqnarray}\left|\sum_{n\leq M}e(kx_n)\right|\leq 2\sqrt{2(M\varepsilon+1)}\sqrt{M+M\varepsilon+\varepsilon^{-2}}.\end{eqnarray}\]

Dividing both sides by $M$ and letting $\varepsilon\to 0$ finishes the proof in view of Weyl's equidistribution theorem. ■
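A small numerical sketch of the test in action, with the illustrative sequence $x_n=\{\sqrt{2}\,n^2\}$: each differenced sequence $x_{n+h}-x_n$ comes (modulo $1$) from the phase $\sqrt{2}(2hn+h^2)$, linear in $n$ with irrational slope, hence uniformly distributed, and correspondingly the normalized Weyl sums of $(x_n)$ are small:

```python
import cmath
import math

# Sequence x_n = {sqrt(2)*n^2}.  By the equidistribution test, the
# normalized Weyl sums (1/M) sum_{n<=M} e(k*x_n) should tend to 0
# for every fixed positive integer k.
alpha = math.sqrt(2)
M = 10**5
for k in (1, 2, 3):
    w = abs(sum(cmath.exp(2j * math.pi * k * alpha * n * n)
                for n in range(1, M + 1))) / M
    print(k, w)
```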

Using this test for equidistribution, we can easily prove Weyl's result on the uniform distribution of the fractional parts $\{P(n)\},n=1,2,...$, where $P$ is a polynomial.

**Theorem (Weyl).** Let $P$ be a nonconstant polynomial having at least one irrational coefficient which is not the constant term. Then the sequence $\{P(n)\}, n=1,2,...$ is uniformly distributed on $[0,1]$.

**Proof.** Let $d$ be the degree of the polynomial. We induct on $d$, the case $d=1$ being already proved in this post (just by summing a geometric series). Suppose the claim holds for all polynomials of degree at most $d-1$. If the leading coefficient of $P$ is irrational, then for every $h$ the difference $P(x+h)-P(x)$ is a polynomial of degree $d-1$ with irrational leading coefficient, so $\{P(n+h)-P(n)\}$ is equidistributed by the induction hypothesis, and the statement follows from van der Corput's equidistribution test. On the other hand, if $P(x)=\frac{a}{b}x^d+Q(x)$ for some rational number $\frac{a}{b}$ and a polynomial $Q$ of degree less than $d$, then

\[\begin{eqnarray}\sum_{n\leq N}e(kP(n))&=&\sum_{m=0}^{b-1}\sum_{n\leq N\atop n\equiv m \hspace{-0.1cm}\pmod b}e(k(Q(n)+\frac{a}{b}n^d))\\&=&\sum_{m=0}^{b-1}\sum_{n\leq \frac{N-m}{b}}e(k(Q(bn+m)+\frac{a}{b}(bn+m)^d)),\end{eqnarray}\]

as $n^d\equiv m^d\pmod b$. We apply the induction hypothesis and Weyl's criterion to the polynomial $Q(bn+m)+\frac{a}{b}(bn+m)^d$ (viewed as a polynomial in $n$), which has an irrational non-constant coefficient, and this completes the induction. ■
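Weyl's theorem is also easy to observe empirically. The sketch below counts the fractional parts $\{P(n)\}$ in ten equal bins for the illustrative polynomial $P(x)=\sqrt{3}\,x^3+\frac{x^2}{2}+\frac{1}{4}$; high-precision arithmetic is used because for large $n$ ordinary floating point loses the fractional part of $\sqrt{3}\,n^3$ entirely.

```python
from decimal import Decimal, getcontext

# Empirical check of Weyl's theorem for P(x) = sqrt(3)*x^3 + x^2/2 + 1/4.
# 50 significant digits keep the fractional part accurate for n up to ~20000.
getcontext().prec = 50
SQRT3 = Decimal(3).sqrt()

def frac_P(n):
    # Fractional part of P(n), computed in high precision
    return float((SQRT3 * n**3 + Decimal(n * n) / 2 + Decimal(1) / 4) % 1)

M = 20000
counts = [0] * 10
for n in range(1, M + 1):
    counts[min(9, int(10 * frac_P(n)))] += 1
print(counts)  # each bin should hold roughly M/10 = 2000 values
```

The irrational coefficient here sits at the top degree, so the first branch of the proof applies; a polynomial like $\frac{1}{2}x^3+\sqrt{3}\,x^2$ would instead exercise the second branch, with the same empirical outcome.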