Thursday, April 16, 2015

Schnirelmann density and Waring's problem

A central question in additive number theory is determining whether a given infinite set $A$ of positive integers is an asymptotic basis of finite order, meaning that every large enough integer is the sum of at most $C$ elements of $A$ for some constant $C$. Questions of this type cover, among others, the Goldbach conjectures, Waring's problem on representing numbers as sums of perfect $k$th powers, and Romanoff's theorem on writing an integer as the sum of $C$ primes and $C$ powers of two. A natural question is whether there is some relatively easily verifiable property of sets such that when this property holds, a set is an asymptotic basis. A natural candidate would be the positivity of the asymptotic density of the set, denoted by $d(A)$. This criterion could be used, but the asymptotic density fails to exist for some rather elementary sets, and even when it does exist, proving its existence can be hard for non-trivial sets. In 1930, L. G. Schnirelmann defined the right kind of density, called the Schnirelmann density and denoted by $\sigma(A)$, which always exists, and he showed that any set with positive Schnirelmann density is an asymptotic basis of finite order. He went on to prove that every integer greater than $1$ is the sum of at most $C$ primes for some $C$, and later Linnik used Schnirelmann's method to give a relatively simple solution to Waring's problem. It is also notable that Schnirelmann's and Linnik's proofs are quite elementary; the former uses nothing more complicated than Brun's sieve (see this post), and the latter nothing more complicated than basic estimates for Weyl sums (see this post). In this post, we prove Schnirelmann's and Linnik's theorems, as well as Romanoff's theorem. We follow the book An Introduction to Sieve Methods and their Applications by Cojocaru and Murty and the book Not Always Buried Deep: A Second Course in Elementary Number Theory by Pollack.

Schnirelmann density

The asymptotic density of a set $A$ of positive integers is defined by $d(A)=\lim_{n\to \infty} \frac{A(n)}{n}$ when the limit exists, where $A(n)=|A\cap [1,n]|$ and $|\cdot|$ denotes the size of a set. This is a very natural density indeed, but the following example shows that it does not exist even for some sets that are not very pathological.

Theorem 1. Let $A$ be the set of all positive integers whose binary expansion has even length. Then $d(A)$ does not exist.

Proof. Let $k$ be fixed. The numbers with binary expansion of length $2j$ are exactly those in $[2^{2j-1},2^{2j})$, and there are $2^{2j-1}$ of them. Hence up to $2^{2k}$ the set $A$ contains
\[\begin{eqnarray}\sum_{j=1}^{k}2^{2j-1}=\frac{2}{3}(4^k-1)=\left(\frac{2}{3}+o(1)\right)2^{2k}\end{eqnarray}\]
numbers, so $\frac{A(2^{2k})}{2^{2k}}=\frac{2}{3}+o(1)$. On the other hand, the numbers in $(2^{2k},2^{2k+1})$ have binary length $2k+1$ and hence lie outside $A$, so $A(2^{2k+1})=A(2^{2k})+O(1)$ and $\frac{A(2^{2k+1})}{2^{2k+1}}=\frac{1}{3}+o(1)$. Hence the asymptotic density does not exist. ■
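As a quick numerical sanity check of this oscillation (an illustration only; the function names below are ad hoc), one can compute $\frac{A(n)}{n}$ along the two subsequences in Python:

```python
# A = positive integers whose binary expansion has even length
def in_A(m):
    return m.bit_length() % 2 == 0

# A(n)/n, the counting density up to n
def density_up_to(n):
    return sum(in_A(m) for m in range(1, n + 1)) / n

# Along n = 2^(2k) the ratio tends to 2/3, along n = 2^(2k+1) to 1/3,
# so the limit defining d(A) cannot exist.
for k in range(3, 8):
    print(k, round(density_up_to(2 ** (2 * k)), 4),
          round(density_up_to(2 ** (2 * k + 1)), 4))
```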

The above example might not be the typical case of a set we want to study (after all, $A$ is trivially an asymptotic basis), but there is a more serious problem which requires the introduction of a new density. Namely, a typical set to consider could be the set $A=\mathbb{P}+\mathbb{P}$, where $\mathbb{P}$ is the set of primes and $B+C$ denotes the sumset of two sets, that is, the set of all sums $b+c$ with $b\in B$, $c\in C$. Using the methods presented in this post, we are only able to show that $A$ has positive lower asymptotic density (where the limit in the definition is replaced with $\liminf$), but not that the upper asymptotic density is the same. Both densities can in fact be shown to equal $\frac{1}{2}$, but this amounts to showing that almost all even numbers are sums of two primes and is more complicated than what we plan to do here (for that one could either use the large sieve or an approach similar to the proof of Vinogradov's three primes theorem).

Luckily, there is no need for considering the asymptotic density, and we can just use the density
\[\begin{eqnarray}\sigma(A)=\inf_{n\geq 1}\frac{A(n)}{n}\end{eqnarray}\]
introduced by Schnirelmann. Clearly $\sigma(A)\leq d(A)$ whenever the latter exists. It may be that $A$ does not contain $1$, in which case $\sigma(A)=0$, but if $A$ has a positive lower asymptotic density, then $A\cup\{1\}$ has positive Schnirelmann density, and if we show that $A\cup\{1\}$ is an asymptotic basis, it is usually easy to deduce that $A$ is one as well. One could alternatively replace the infimum in the definition of the Schnirelmann density by $\liminf$, and the theorems presented below would continue to hold, but using the infimum makes everything explicit; this way Schnirelmann was able to show that every integer is the sum of at most $800\,000$ primes. We will nevertheless leave these numerical calculations aside. The following theorem, based on elementary combinatorics, shows the importance of the Schnirelmann density.
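To make the definition concrete, here is a small Python sketch (an illustration, not used in any proof; the name `truncated_sigma` is ad hoc) that computes the truncated quantity $\min_{1\leq n\leq N}\frac{A(n)}{n}$. This always upper-bounds $\sigma(A)$, and equals it whenever the infimum is attained by some $n\leq N$:

```python
def truncated_sigma(A, N):
    """min of A(n)/n over 1 <= n <= N; an upper bound for the
    Schnirelmann density sigma(A), exact when the infimum is
    attained at some n <= N."""
    members = {a for a in A if 1 <= a <= N}
    count, best = 0, 1.0
    for n in range(1, N + 1):
        count += n in members
        best = min(best, count / n)
    return best

# The odd numbers: A(2)/2 = 1/2 is the true infimum, so sigma = 1/2.
print(truncated_sigma(range(1, 10**4, 2), 10**4))
# The squares: the true sigma is 0; the truncated value is only a
# small upper bound that tends to 0 as N grows.
print(truncated_sigma([m * m for m in range(1, 101)], 10**4))
```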

Theorem 2 (Schnirelmann). Let $A$ be a set of natural numbers with $0\in A$ and $\sigma(A)>0$. Then $A$ is an asymptotic basis of finite order. In addition, if $\sigma(A)>\frac{1}{2}$, then $A+A=\mathbb{N}$. Moreover,
\[\begin{eqnarray}\sigma(A+B)\geq \sigma(A)+\sigma(B)-\sigma(A)\sigma(B)\end{eqnarray}\]
for any sets $A$ and $B$ of natural numbers containing $0$.

Proof. We start by showing the inequality stated above. Let $n$ be a given positive integer and $A\cap [1,n]= \{a_1,...,a_k\}$ with $a_1<...<a_k$. Also let $a_0=0$. We have
\[\begin{eqnarray}(A+B)(n)\geq \sum_{i=0}^{k-1}|(A+B)\cap (a_i,a_{i+1})|+|(A+B)\cap (a_k,n]|+k.\end{eqnarray}\]
In addition,
\[\begin{eqnarray}|(A+B)\cap (a_i,a_{i+1})|\geq |\{a_i+b: b\in B,\ 0<b<a_{i+1}-a_i\}|=B(a_{i+1}-a_i-1).\end{eqnarray}\]
Substituting this and using the definition of the Schnirelmann density, we get
\[\begin{eqnarray}(A+B)(n)&\geq& \sum_{i=0}^{k-1}\sigma(B)(a_{i+1}-a_i-1)+\sigma(B)(n-a_k)+k\\&=&\sigma(B)n+(1-\sigma(B))k\\&\geq&\sigma(B)n+(1-\sigma(B))\sigma(A)n,\end{eqnarray}\]
and the inequality follows.

Next we show that if $\sigma(A)>0$, then $\sigma(mA)>\frac{1}{2}$ for some $m$, where $mA$ is the set of all sums of exactly $m$ elements of $A$. Indeed, if $\sigma(A)=\delta\leq \frac{1}{2}$, iterating the inequality gives $\sigma(2A)\geq 2\delta-\delta^2\geq \frac{3}{2}\delta$, and further if $\sigma(2^{a-1}A)\leq \frac{1}{2}$, then $\sigma(2^aA)\geq \frac{3}{2}\sigma(2^{a-1}A)$, which is eventually larger than $\frac{1}{2}$.

Finally, let $\sigma(A)>\frac{1}{2}$. Then for any positive integer $n$ we have $A(n)>\frac{n}{2}$, so the sets $A\cap[0,n]$ and $\{n-a:a\in A\cap[0,n]\}$ are subsets of $[0,n]$, each with more than $\frac{n}{2}+1$ elements (recall $0\in A$), and hence they intersect. This means that $n\in A+A$. We conclude that if $\sigma(A)>0$, there exists $m$ such that $mA=\mathbb{N}.$ ■
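The counting argument above can be sanity-checked numerically. The following Python sketch (an illustration with all densities truncated at a finite $N$, for which the same proof goes through verbatim; the helper names are ad hoc) verifies Schnirelmann's inequality on random sets containing $0$, and also checks that a truncated density exceeding $\frac{1}{2}$ forces $A+A\supseteq [1,N]$:

```python
import random

def truncated_sigma(S, N):
    # min over 1 <= n <= N of |S ∩ [1, n]| / n
    count, best = 0, 1.0
    for n in range(1, N + 1):
        count += n in S
        best = min(best, count / n)
    return best

def sumset(A, B, N):
    return {a + b for a in A for b in B if a + b <= N}

random.seed(1)
N = 300
for _ in range(100):
    A = {0} | {random.randrange(1, N + 1) for _ in range(80)}
    B = {0} | {random.randrange(1, N + 1) for _ in range(80)}
    sA, sB = truncated_sigma(A, N), truncated_sigma(B, N)
    sAB = truncated_sigma(sumset(A, B, N), N)
    # Schnirelmann's inequality (with densities truncated at N)
    assert sAB >= sA + sB - sA * sB - 1e-9

# truncated density > 1/2 implies A + A covers every n in [1, N]
A = {0, 1} | set(range(2, N + 1, 2))
assert truncated_sigma(A, N) > 0.5
assert sumset(A, A, N) >= set(range(1, N + 1))
print("all checks passed")
```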

There is also a stronger version of Schnirelmann's inequality for $\sigma(A+B)$, namely a theorem of Mann, stating elegantly that
\[\begin{eqnarray}\sigma(A+B)\geq \sigma(A)+\sigma(B)\end{eqnarray}\]
when $\sigma(A)+\sigma(B)\leq 1$ (in the opposite case it is easy to see that $A+B\supseteq\mathbb{Z}_+$). However, for our purposes it is enough to know that $\sigma(A)>0$ and $0\in A$ imply $mA=\mathbb{N}$ for some $m$.

The primes as an asymptotic basis

It may at first seem counterintuitive that a result on sets of positive density could prove something about the primes, which have density zero. If $\mathbb{P}^{*}=\mathbb{P}\cup \{1\}$, then we expect $\mathbb{P}^{*}+\mathbb{P}^{*}$ to have positive lower asymptotic density by Goldbach's strong conjecture. What is quite surprising is that the density can be shown to be positive by elementary means, while Goldbach's problem remains unsolved.

Theorem 3. The set $\mathbb{P}^{*}+\mathbb{P}^{*}$ has positive lower asymptotic density (and therefore $\mathbb{P}+\mathbb{P}$ has, too). Consequently, there exists $C$ such that every integer greater than $1$ is the sum of at most $C$ primes.

Note that for example proving the weak Goldbach conjecture for large enough numbers (which was done in this post) would imply the existence of such a $C$, but not the positivity of the lower asymptotic density of $\mathbb{P}+\mathbb{P}$. Of course, the weak Goldbach conjecture gives $C=4$ for large enough numbers, while the proof below would give a much larger constant.

It suffices to show the part of Theorem 3 stating that $\mathbb{P}^{*}+\mathbb{P}^{*}$ has positive lower asymptotic density. Indeed, then $(\mathbb{P}^{*}+\mathbb{P}^{*})\cup\{0,1\}$ has positive Schnirelmann density, so by Theorem 2 there exists $M$ such that every positive integer is the sum of at most $M$ primes and $M$ ones. If we have $n=p_1+...+p_j+k\cdot 1$ for primes $p_i$ with $j,k\leq M$ and $k\geq 2$, then we also have $n=p_1+...+p_j+a\cdot 2+b\cdot 3$ for some $0\leq a,b\leq M$, since every integer $k\geq 2$ can be written as $2a+3b$ with natural numbers $a,b$. On the other hand, if $k=1$, then $n-2=p_1'+...+p_{j'}'+k'\cdot 1$ for some primes $p_i'$ and $j',k'\leq M$, so $n=p_1'+...+p_{j'}'+(k'+2)\cdot 1$, and since $k'+2\geq 2$, the previous case shows that $n$ is again a sum of a bounded number of primes. Finally, if $k=0$, then $n$ is already a sum of primes. Now we can move on to proving Theorem 3.

Proof of Theorem 3. It suffices to show that $\mathbb{P}+\mathbb{P}$ has positive lower asymptotic density (with $\liminf$ in place of $\lim$). Let
\[\begin{eqnarray}r(n)=|\{(p,q)\in \mathbb{P}^2:n=p+q\}|.\end{eqnarray}\]
By Chebychev's estimate $\pi(x)\gg \frac{x}{\log x}$ (see this post),
\[\begin{eqnarray}\sum_{n\leq x}r(n)=|\{(p,q)\in \mathbb{P}^2:p+q\leq x\}|\gg \frac{x^2}{\log^2 x}.\end{eqnarray}\]
On the other hand, the Cauchy-Schwarz inequality gives
\[\begin{eqnarray}\left(\sum_{n\leq x}r(n)\right)^2\leq |(\mathbb{P}+\mathbb{P})\cap [1,x]|\sum_{n\leq x}r(n)^2.\end{eqnarray}\]
Hence it remains to show that
\[\begin{eqnarray}\sum_{n\leq x}r(n)^2\ll \frac{x^3}{\log^4 x}.\end{eqnarray}\]
For showing this we use the estimate for $r(n)$ from this post, based on Selberg's sieve (one can also use Brun's sieve, since they give the same result up to a constant factor). The estimate is
\[\begin{eqnarray}r(n)\ll \frac{n}{\log^2 n}\prod_{p\mid n}\left(1+\frac{1}{p}\right).\end{eqnarray}\]
Now we just need to show that
\[\begin{eqnarray}\sum_{n\leq x}\prod_{p\mid n}\left(1+\frac{1}{p}\right)^2\ll x.\end{eqnarray}\]
As $(1+\frac{1}{p})^2\leq (1+\frac{2}{p})(1+O(\frac{1}{p^2}))$, the sum above is
\[\begin{eqnarray}\ll \sum_{n\leq x}\prod_{p\mid n}\left(1+\frac{2}{p}\right).\end{eqnarray}\]
We have
\[\begin{eqnarray}\prod_{p\mid n}\left(1+\frac{2}{p}\right)=\sum_{d\mid n}\frac{\mu^2(d)2^{\nu(d)}}{d},\end{eqnarray}\]
where $\nu(\cdot)$ denotes the number of distinct prime factors and $\mu(\cdot)$ is the Möbius function, so
\[\begin{eqnarray}\sum_{n\leq x}\prod_{p\mid n}\left(1+\frac{2}{p}\right)=\sum_{d\leq x}\frac{\mu^2(d)2^{\nu(d)}}{d}\left\lfloor \frac{x}{d}\right\rfloor\ll x\sum_{d\leq x}\frac{2^{\nu(d)}}{d^2}\ll x,\end{eqnarray}\]
because $2^{\nu(d)}\leq \tau(d)\ll \sqrt{d}$. This completes the proof. ■
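The Cauchy-Schwarz step can be illustrated numerically (a sketch independent of the proof; the cutoff $x=2000$ and variable names are ad hoc): computing $r(n)$ up to a small $x$, the quantity $\left(\sum_{n\leq x}r(n)\right)^2/\left(x\sum_{n\leq x}r(n)^2\right)$ gives an explicit lower bound for the density of $\mathbb{P}+\mathbb{P}$ in $[1,x]$:

```python
def primes_up_to(x):
    # Sieve of Eratosthenes
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, x + 1, p)))
    return [p for p in range(2, x + 1) if sieve[p]]

x = 2000
P = primes_up_to(x)
r = [0] * (x + 1)                      # r(n) = #{(p,q) : p + q = n}
for p in P:
    for q in P:
        if p + q <= x:
            r[p + q] += 1

S1 = sum(r)                            # sum of r(n)
S2 = sum(v * v for v in r)             # sum of r(n)^2
rep = sum(1 for v in r if v > 0)       # |(P+P) ∩ [1, x]|

# Cauchy-Schwarz: S1^2 <= rep * S2, so rep/x >= S1^2/(x*S2)
assert S1 * S1 <= rep * S2
print("lower bound:", S1 * S1 / (x * S2), "  true density:", rep / x)
```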

Romanoff's theorem

Now that we have shown that the primes are an asymptotic basis of finite order, we turn to some other natural additive bases. The powers of a fixed integer $a>1$ are of course not an asymptotic basis of finite order (although any number is the sum of some number of powers of $a$, each power occurring at most $a-1$ times), but it is intriguing how these numbers fit together with the primes to form an asymptotic basis. Since there are around $\log x$ powers of $a$ up to $x$ and around $\frac{x}{\log x}$ primes up to $x$, the numbers $p+a^k$ are just numerous enough to form an asymptotic basis of finite order. Romanoff's theorem confirms this belief.

Theorem 4 (Romanoff). Let $a>1$ be an integer. Then the set $A=\{p+a^k:p\in \mathbb{P},k\in \mathbb{N}\}$ has positive lower asymptotic density. Consequently, there exists $C$, depending on $a$, such that every large enough positive integer is the sum of at most $C$ primes and at most $C$ powers of $a$.

Proof. We adopt the same strategy which we used to show that the primes are an asymptotic basis of finite order. We must use Schnirelmann's theorem in a slightly modified form, namely that if $\liminf_{n\to \infty}\frac{A(n)}{n}>0$, then $A$ is an asymptotic basis of finite order; otherwise we would only obtain that for each $n$ there exists $m\leq C$ such that $n$ is the sum of $m$ primes and $m$ powers of $a$. The proof of the modified Schnirelmann's theorem is almost verbatim the same.

Let $a$ be fixed and define
\[\begin{eqnarray}r(n)=|\{(k,p)\in \mathbb{N}\times \mathbb{P}:n=p+a^k\}|.\end{eqnarray}\]
Since there are $\gg \frac{x}{\log x}$ primes and $\gg \log x$ powers of $a$ in $[1,\frac{x}{2}]$, we have
\[\begin{eqnarray}\sum_{n\leq x}r(n)\gg \frac{x}{\log x}\cdot \log x=x,\end{eqnarray}\]
so it suffices to show that
\[\begin{eqnarray}\sum_{n\leq x}r(n)^2\ll x.\end{eqnarray}\]
We have
\[\begin{eqnarray}\sum_{n\leq x}r(n)^2&=&\sum_{n\leq x}|\{(k_1,k_2,p_1,p_2)\in \mathbb{N}^2\times \mathbb{P}^2:n=p_1+a^{k_1}=p_2+a^{k_2}\}|\\&\ll& \sum_{k_1,k_2\leq \log_a x\atop k_1>k_2}|\{(p_1,p_2)\in \mathbb{P}^2:p_1,p_2\leq x,\ p_1-p_2=a^{k_1}-a^{k_2}\}|+x,\end{eqnarray}\]
where the term $x$ accounts for the diagonal terms $k_1=k_2$ (which force $p_1=p_2$).
We again use the sieve estimate for the number of prime pairs $p_1,p_2\leq x$ with a fixed difference $p_1-p_2=m>0$, which is $\ll \frac{x}{\log^2 x}\prod_{p\mid m}\left(1+\frac{1}{p}\right)$. Writing $k=k_1-k_2$, each value of $k$ arises from at most $\log_a x$ pairs $(k_1,k_2)$, and the prime factors of $a^{k_1}-a^{k_2}=a^{k_2}(a^{k}-1)$ divide $a(a^k-1)$; the primes dividing $a$ contribute only a bounded factor, which we absorb into the implied constant. We obtain that the sum above is bounded by
\[\begin{eqnarray}&\ll& \frac{x\log_a x}{\log^2 x}\sum_{k\leq \log_a x}\prod_{p\mid a^k-1}\left(1+\frac{1}{p}\right)\\&\ll& \frac{x}{\log x}\sum_{k\leq \log_a x}\sum_{d\mid a^k-1}\frac{\mu^2(d)}{d}.\end{eqnarray}\]
The problem boils down to showing that the sum over $k$ above is $\ll \log x$. Denote by $\text{ord}_{d}(a)$ the smallest positive solution $y$ to $a^y\equiv 1\pmod d$; note that $d\mid a^k-1$ forces $(d,a)=1$ and $\text{ord}_{d}(a)\mid k$. Since the number of $k\leq \log_a x$ divisible by $\text{ord}_d(a)$ is at most $\frac{\log_a x}{\text{ord}_d(a)}$, the sum equals
\[\begin{eqnarray}\sum_{d\leq x}\frac{\mu^2(d)}{d}\sum_{k\leq \log_a x\atop \text{ord}_{d}(a)\mid k}1\leq \log_a x\sum_{d\leq x}\frac{\mu^2(d)}{d\cdot \text{ord}_{d}(a)},\end{eqnarray}\]
so it suffices to show that the sum over $d$ is $\ll 1$. Splitting dyadically according to the size of $\text{ord}_d(a)$ (which is at most $d\leq x$) and using the fact that $\text{ord}_d(a)=t$ implies $d\mid a^t-1$, we get
\[\begin{eqnarray}\sum_{d\leq x}\frac{\mu^2(d)}{d\cdot \text{ord}_{d}(a)}&=&\sum_{t\leq x}\frac{1}{t}\sum_{d\leq x\atop \text{ord}_{d}(a)=t}\frac{\mu^2(d)}{d}\\&\ll& \sum_{\ell \leq \log_2 x}\frac{1}{2^{\ell}}\sum_{2^{\ell}\leq \text{ord}_d(a)< 2^{\ell+1}}\frac{\mu^2(d)}{d}\\&\leq& \sum_{\ell \leq \log_2 x}\frac{1}{2^{\ell}}\sum_{d\mid \prod_{2^{\ell}\leq k<2^{\ell+1}}(a^k-1)}\frac{\mu^2(d)}{d}.\quad (1)\end{eqnarray}\]
To finish bounding the sum, we need an estimate for
\[\begin{eqnarray}\sum_{d\mid n}\frac{\mu^2(d)}{d}.\end{eqnarray}\]
We may assume without loss of generality that $n$ is square free. Then
\[\begin{eqnarray}&&\sum_{d\mid n}\frac{\mu^2(d)}{d}\leq 1+\sum_{p\mid n}\frac{1}{p}+\frac{1}{2!}\left(\sum_{p\mid n}\frac{1}{p}\right)^2+\frac{1}{3!}\left(\sum_{p\mid n}\frac{1}{p}\right)^3+...\\&=&\exp\left(\sum_{p\mid n}\frac{1}{p}\right),\end{eqnarray}\]
owing to the fact that when the $j$th power of the sum is multiplied out, each term $\frac{1}{p_{i_1}\cdots p_{i_j}}$ with distinct primes occurs exactly $j!$ times, and the remaining terms are nonnegative. To proceed with the estimate, write $n=p_1\cdots p_j$ with distinct primes $p_i$, so that $n\geq e^{cj\log j}$ for some constant $c>0$ by Chebychev's estimate, and hence $j\ll \frac{\log n}{\log \log n}$. Mertens' theorem (from this post), together with Chebychev's estimate (this time in the form $p_m\ll m\log m$ for the $m$th prime $p_m$), gives
\[\begin{eqnarray}\sum_{p\mid n}\frac{1}{p}\leq \sum_{m\leq j}\frac{1}{p_m}\leq \log \log (j\log j)+O(1),\end{eqnarray}\]
and since $j\ll \log n$, this yields
\[\begin{eqnarray}\sum_{p\mid n}\frac{1}{p}\leq (1+o(1))\log \log \log n.\end{eqnarray}\]
Finally, we see that
\[\begin{eqnarray}\sum_{d\mid n}\frac{\mu^2(d)}{d}\ll (\log \log n)^{1+o(1)}\ll (\log \log n)^{2},\end{eqnarray}\]
and hence $(1)$ is
\[\begin{eqnarray}&\ll& \sum_{\ell\leq \log_2 x}\frac{1}{2^{\ell}}(\log \log a^{4^{\ell}})^2\\&\ll& \sum_{\ell\leq \log_2 x}\frac{\ell^2}{2^{\ell}}\ll 1.\end{eqnarray}\]
The claim is now proved. ■
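The elementary inequality $\sum_{d\mid n}\frac{\mu^2(d)}{d}\leq \exp\left(\sum_{p\mid n}\frac{1}{p}\right)$ used above is easy to test numerically (a sanity check only; the helper names are ad hoc):

```python
from math import exp

def distinct_prime_factors(n):
    # trial division; fine for the small n used here
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def squarefree_divisor_sum(n):
    # sum of mu^2(d)/d over d | n equals prod over p | n of (1 + 1/p)
    s = 1.0
    for p in distinct_prime_factors(n):
        s *= 1 + 1 / p
    return s

for n in [2, 12, 30, 2310, 223092870]:
    lhs = squarefree_divisor_sum(n)
    rhs = exp(sum(1 / p for p in distinct_prime_factors(n)))
    assert lhs <= rhs * (1 + 1e-12)
    print(n, round(lhs, 4), round(rhs, 4))
```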

It is worth noticing how few properties of the numbers $a^k$ were used in the proof. We essentially used only the fact that $\text{ord}_{d}(a)\leq d$ in addition to the trivial factoring of $a^{k_1}-a^{k_2}$ and the fact that there are $\gg \log x$ powers of $a$ up to $x$.
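As an empirical illustration of Romanoff's theorem for $a=2$ (a sketch only; the cutoff $10^5$ and the asserted bounds are ad hoc, and $k=0$ is allowed since $k\in\mathbb{N}$), one can measure the proportion of integers up to $x$ of the form $p+2^k$:

```python
def primes_up_to(x):
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, x + 1, p)))
    return [p for p in range(2, x + 1) if sieve[p]]

x = 10**5
powers = []
w = 1
while w <= x:
    powers.append(w)
    w *= 2

representable = bytearray(x + 1)
for p in primes_up_to(x):
    for w in powers:
        if p + w <= x:
            representable[p + w] = 1

ratio = sum(representable) / x
# The proportion stays bounded away from 0 (Romanoff); on the other
# hand p + 2^k is odd for odd p and k >= 1, so the proportion also
# stays well below 1.
print("density of {p + 2^k} up to", x, ":", ratio)
```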

Waring's problem

We give one more application of the Schnirelmann density, namely an elementary proof of Waring's conjecture. As far back as 1770, Waring conjectured that for each integer $k\geq 2$ there exists $c(k)$ such that every positive integer is the sum of $c(k)$ numbers of the form $x^k,x\in \mathbb{N}$. The smallest such $c(k)$ is the Waring number, denoted by $g(k).$ Lagrange's famous four square theorem, from 1770, the same year as Waring's problem, established that $g(2)\leq 4$, and it is easy to see that no number of the form $4^a(8b+7)$, where $a$ and $b$ are natural numbers, is a sum of three squares. It follows that $g(2)=4$, with an elementary proof, but the other cases are considerably harder. In 1859, Liouville proved the finiteness of $g(4)$, or more precisely $g(4)\leq 53$, relying on an identity that reduces a sum of a large number of fourth powers to a sum of sums of squares. More precisely, he observed that
\[\begin{eqnarray}6(x_1^2+x_2^2+x_3^2+x_4^2)^2=\sum_{1\leq i<j\leq 4}\left((x_i+x_j)^4+(x_i-x_j)^4\right)\end{eqnarray}\]
and that the left-hand side represents an arbitrary number of the form $6n^2$, by the four square theorem. In 1909, Hilbert was able to settle Waring's conjecture for the first time, deriving extremely complicated identities that reduce a huge number of $k$th powers into sums of lower powers. A completely different, and much more influential, solution was given in 1920 by Hardy and Littlewood, who developed their circle method for tackling this problem (borrowing ideas from an earlier paper of Hardy and Ramanujan). The proof of Hardy and Littlewood was nevertheless still quite complicated, and relied on tedious estimates for complex integrals. Later solutions in the same spirit use Fourier analysis in place of complex analysis, but are still far from elementary. In 1943, Linnik was able to give a proof based only on very basic estimates for exponential sums, together with Schnirelmann's theorem. We follow a similar but simplified proof from Pollack's book.

Theorem 5. The Waring numbers $g(k)$ are finite.

For the proof, we define the counting function
\[\begin{eqnarray}r_t^k(n)=|\{(x_1,...,x_t)\in \mathbb{N}^t:n=x_1^k+...+x_t^k\}|.\end{eqnarray}\]
We have, counting only the tuples with each $x_i\leq \frac{x^{1/k}}{2t^{1/k}}$ (whose sum of $k$th powers is at most $x$),
\[\begin{eqnarray}\sum_{n\leq x}r_t^k(n)\gg \left(\frac{x^{\frac{1}{k}}}{2t^{\frac{1}{k}}}\right)^t\gg_{k,t} x^{\frac{t}{k}}.\end{eqnarray}\]
On the other hand, since
\[\begin{eqnarray}\sum_{n\leq x}r_t^k(n)\leq \max_{n\leq x} r_t^k(n)\cdot |\{n\leq x:r_t^k(n)>0\}|,\end{eqnarray}\]
it is enough to show that
\[\begin{eqnarray}r_t^k(n)\ll_{k} n^{\frac{t}{k}-1}\end{eqnarray}\]
for some $t$ depending on $k$. This is what we are going to show, in a bit stronger form.

Theorem 6 (Linnik). Let $g=2^{k+2}$. Then $r_g^{k}(n)\ll_k n^{\frac{g}{k}-1}$. Consequently, the numbers of the form $x_1^k+...+x_g^k$, $x_1,...,x_g\in \mathbb{N}$, have positive lower asymptotic density, and the Waring numbers $g(k)$ are finite.

The key idea in the proof is to use a simple version of the Hardy-Littlewood circle method, which reduces the problem of bounding $r_t^k(n)$ to finding cancellation in the integral over $[0,1]$ of an analogous exponential sum. We then estimate the contribution of the integral on the major arcs (near rationals with a small denominator) and the minor arcs (the complement of the major arcs), but, unlike Hardy and Littlewood, we are able to do it in a concise and simple way.

Set $N=\lfloor n^{\frac{1}{k}}\rfloor$ and $e(x)=e^{2\pi i x}$. We have the identity
\[\begin{eqnarray}r_t^k(n)&=&\int_{0}^{1}\left(\sum_{m\leq N}e(\alpha m^k)\right)^te(-\alpha n)d\alpha,\end{eqnarray}\]
because $\int_{0}^{1}e(\alpha m)d\alpha$ is $1$ for $m=0$ and $0$ for any other integer $m$, and when the $t$th power of the exponential sum is multiplied out, it becomes the sum of the terms $e(\alpha(x_1^k+...+x_t^k-n))$ (here $m$ runs over the natural numbers up to $N$, including $0$). The key observation in the circle method is that estimating such integrals is much easier than attacking the problem directly. By the triangle inequality we can simply bound
\[\begin{eqnarray}r_t^k(n)\leq \int_{0}^{1}\left|\sum_{m\leq N}e(\alpha m^k)\right|^t d\alpha,\end{eqnarray}\]
and the problem is reduced to studying the exponential sum occurring inside the integral.

Define
\[\begin{eqnarray}f(\alpha)=\sum_{m\leq N}e\left(\alpha m^k\right).\end{eqnarray}\]
Although the way we deal with $f(\alpha)$ is elementary, it resembles the proof of the Vinogradov three primes theorem, for instance. Indeed, we want to bound $f(\alpha)$ pointwise, and this is achieved separately for $\alpha$ belonging to a major arc and $\alpha$ belonging to a minor arc. In the end, it turns out that the bound is the same in both cases. In the major arc case, when $\alpha$ is close (in a Diophantine sense) to a rational $\frac{a}{q}$ with a small denominator, we can approximate $f(\alpha)$ with $f(\frac{a}{q})$ and decompose the sum defining $f(\frac{a}{q})$ into the terms coming from different residue classes modulo $q$. In the minor arc case, on the other hand, the pointwise bound we obtain is so good that we can just compare $f(\alpha)$ and $f(\frac{a}{q})$ in a trivial fashion, and then use the bound for $f(\frac{a}{q})$. The estimate that we need for $f(\alpha)$ is given by the following lemma.

Lemma 7. Let $\varepsilon>0$, $\alpha\in \mathbb{R}$, $k\geq 2$ integral, and $|\alpha-\frac{a}{q}|\leq \frac{1}{q^2}$ with $a$ and $q\leq N$ coprime positive integers. Then
\[\begin{eqnarray}\sum_{n\leq N}e\left(\alpha P(n)\right)\ll_{\varepsilon,k}N^{1+\varepsilon}q^{-2^{1-k}}\end{eqnarray}\]
for any monic polynomial $P(x)$ of degree $k$ with real coefficients.

This lemma follows immediately from the basic estimate for Weyl sums, proved in this post (see Theorem 7 there). Note that the coefficients of $P(x)$ have no influence here. This result allows us to bound the exponential sums $f(\alpha)$ in the following way.
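To get a feel for the cancellation that Lemma 7 expresses, here is a small numerical sketch for $k=2$ (an illustration only; the choice $N=1000$, the sample denominators $q$, and the constants in the comparison are arbitrary): at $\alpha=\frac{1}{q}$ the Weyl sum has size roughly $Nq^{-1/2}$, far below the trivial bound $N$:

```python
import cmath

def weyl_sum(alpha, N, k):
    # |sum_{1 <= n <= N} e(alpha * n^k)| with e(x) = exp(2 pi i x)
    return abs(sum(cmath.exp(2j * cmath.pi * alpha * n ** k)
                   for n in range(1, N + 1)))

N, k = 1000, 2
for q in [11, 101, 997]:
    s = weyl_sum(1 / q, N, k)
    # compare with the shape of the bound N * q^{-2^{1-k}} = N / sqrt(q)
    print(q, round(s, 1), round(N * q ** (-0.5), 1))
```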

Lemma 8. Let $\nu=\frac{1}{3}$, and define the intervals
\[\begin{eqnarray}I_j(q,a)=\left\{\alpha:\left|\alpha-\frac{a}{q}\right|\leq \frac{1}{qN^{k-\nu}},\,\, \frac{j}{N^k}\leq \left|\alpha-\frac{a}{q}\right|<\frac{j+1}{N^k}\right\}\end{eqnarray}\]
for
\[\begin{eqnarray}0\leq a\leq q\leq N^{k-\nu},\quad (a,q)=1,\quad j\geq 0.\end{eqnarray}\]
For $\alpha\in I_j(q,a)$, one has the bound
\[\begin{eqnarray}f(\alpha)\ll \frac{N}{(q+j)^{2^{-k}}}.\end{eqnarray}\]

The intervals $I_j(q,a)$ are what we call major and minor arcs, depending on the size of $q$. Before proving this lemma, we show that it implies the solution to Waring's problem. Putting $g=2^{k+2}$, the lemma gives
\[\begin{eqnarray}|f(\alpha)|^g\ll \frac{N^{g}}{(q+j)^4}.\end{eqnarray}\]
Since $I_j(q,a)$ has length at most $2N^{-k}$, we get
\[\begin{eqnarray}\int_{I_j(q,a)}|f(\alpha)|^g d\alpha\ll \frac{N^{g-k}}{(q+j)^4}\ll \frac{n^{\frac{g}{k}-1}}{(q+j)^4}.\end{eqnarray}\]
By Dirichlet's approximation theorem (which follows just from the pigeonhole principle), every real number in $[0,1]$ belongs to some $I_j(q,a)$. Hence, we conclude that
\[\begin{eqnarray}r_g^{k}(n)&\ll& \int_{0}^{1}|f(\alpha)|^{g}d\alpha\\&\ll& n^{\frac{g}{k}-1}\sum_{q\leq N^{k-\nu}}\sum_{0\leq a\leq q\atop (a,q)=1}\sum_{j\geq 0}\frac{1}{(q+j)^4}\\&\ll& n^{\frac{g}{k}-1}\sum_{q\leq N^{k-\nu}}\sum_{j\geq 0}\frac{1}{(q+j)^3}\\&\ll& n^{\frac{g}{k}-1}\sum_{q\leq N^{k-\nu}}\frac{1}{q^2}\ll n^{\frac{g}{k}-1},\end{eqnarray}\]
using the fact that $\sum_{j\geq 0}\frac{1}{(q+j)^3}\ll q^{-2}$. Now $g(k)\leq 2^{k+2}$ is proved under the validity of Lemma 8.
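The bound $r_t^k(n)\ll n^{\frac{t}{k}-1}$ can be checked numerically in the classical case $k=2$, $t=8$ (a sketch unrelated to Linnik's choice $g=2^{k+2}$; classically, by Jacobi's eight squares formula, $r$ is controlled by $\sigma_3(n)\ll n^3$ there, and the cutoff $x=2000$ is ad hoc):

```python
# Count r_t^k(n) = #{(x_1,...,x_t) in N^t : x_1^k + ... + x_t^k = n}
# by iterated convolution and check r_t^k(n) << n^{t/k - 1} for
# k = 2, t = 8 (sums of eight squares).
def rep_counts(x, k, t):
    base = [0] * (x + 1)
    m = 0
    while m ** k <= x:
        base[m ** k] += 1
        m += 1
    r = [1] + [0] * x          # t = 0: only n = 0 is represented
    for _ in range(t):
        new = [0] * (x + 1)
        for j, b in enumerate(base):
            if b:
                for n in range(j, x + 1):
                    new[n] += b * r[n - j]
        r = new
    return r

x, k, t = 2000, 2, 8
r = rep_counts(x, k, t)
ratio = max(r[n] / n ** (t // k - 1) for n in range(1, x + 1))
print("max of r(n)/n^3:", ratio)   # stays bounded as x grows
```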

Proof of Lemma 8. Let $\alpha\in I_j(q,a)$. In the minor arc case $q> N^{2\nu}$, we automatically have $j=0$ (since $\frac{1}{qN^{k-\nu}}<\frac{1}{N^k}$ for $q>N^{\nu}$), and also $|\alpha-\frac{a}{q}|\leq \frac{1}{qN^{k-\nu}}\leq\frac{1}{q^2}$. Therefore, choosing $\varepsilon$ small enough, the required estimate follows immediately from Lemma 7.

We are left with the major arc case $q\leq N^{2\nu}$. Let $\alpha \in I_j(q,a)$ and let
\[\begin{eqnarray}A=\frac{1}{q}\sum_{m=1}^{q}e\left(\frac{am^k}{q}\right)\end{eqnarray}\]
denote the average of $e\left(\frac{am^k}{q}\right)$ over a complete residue system modulo $q$. We have
\[\begin{eqnarray}f(\alpha)&=&\sum_{m\leq N}e\left(\frac{am^k}{q}\right)e\left(\left(\alpha-\frac{a}{q}\right)m^k\right)\\&=&1+S_1+AS_2\end{eqnarray}\]
where
\[\begin{eqnarray}S_1=\sum_{1\leq m\leq N}\left(e\left(\frac{am^k}{q}\right)-A\right)e\left(\left(\alpha-\frac{a}{q}\right)m^k\right)\end{eqnarray}\]
and
\[\begin{eqnarray}S_2=\sum_{1\leq m\leq N}e\left(\left(\alpha-\frac{a}{q}\right)m^k\right).\end{eqnarray}\]
We bound $S_1$ by partial summation, applied to the sequence $a_m=e\left(\frac{am^k}{q}\right)-A$ and the function $g(x)=e\left(\left(\alpha-\frac{a}{q}\right)x^k\right).$ The partial sums of $a_m$ are always bounded by $2q$, owing to the averaging factor $A$: the sum of $a_m$ over any complete block of $q$ consecutive values of $m$ vanishes, since $m^k\pmod q$ depends only on $m\pmod q$. The function $g(x)$ is bounded by $1$, and $|g'(x)|\ll |\alpha-\frac{a}{q}|\cdot kx^{k-1}$. Therefore, partial summation implies
\[\begin{eqnarray}|S_1|\ll q+q\cdot \int_{0}^{N}|g'(x)|dx\ll q+q\left|\alpha-\frac{a}{q}\right|N^k\ll q+N^{\nu}\ll N^{2\nu}\end{eqnarray}\]
by the assumption on $q$. For $S_2$, we use the Euler-Maclaurin summation formula in its very basic form
\[\begin{eqnarray}\sum_{1\leq m\leq N}h(m)=\int_{0}^{N}h(x)dx+\int_{0}^{N}\{x\}h'(x)dx,\end{eqnarray}\]
where $\{x\}$ is the fractional part. Applying it to $h(x)=e\left(\left(\alpha-\frac{a}{q}\right)x^k\right)$, and bounding the second integral by $\int_0^N |h'(x)|dx\ll |\alpha-\frac{a}{q}|N^k\ll \frac{N^{\nu}}{q}$, we find $S_2=F\left(\alpha-\frac{a}{q}\right)+O\left(\frac{N^{\nu}}{q}\right)$, where
\[\begin{eqnarray}F(u)=\int_{0}^N e(ux^k)dx.\end{eqnarray}\]
Collecting the different estimates, we end up with
\[\begin{eqnarray}f(\alpha)&=&A\cdot F\left(\alpha-\frac{a}{q}\right)+O\left(|A|\frac{N^{\nu}}{q}\right)+O(N^{2\nu})\\&\ll_k& q^{-2^{-k}}\left|F\left(\alpha-\frac{a}{q}\right)\right|+N^{2\nu}\end{eqnarray}\]
by Lemma 7. By changing the variable in the integral defining $F(\alpha-\frac{a}{q})$ and using the convergence of $\int_{0}^{\infty}e(x^k)dx$, coming from partial integration, we see for $j\geq 1$ that
\[\begin{eqnarray}f(\alpha)&\ll_k& q^{-2^{-k}}\left|\alpha-\frac{a}{q}\right|^{-\frac{1}{k}}+N^{2\nu}\ll q^{-2^{-k}}Nj^{-\frac{1}{k}}+N^{2\nu}\\&\ll& N(q+j)^{-2^{-k}},\end{eqnarray}\]
because $q,j\leq N^{2\nu}$. For $j=0$, we just use $|F(\alpha-\frac{a}{q})|\leq N$, and obtain the same bound as for $j\geq 1.$ This is exactly the wanted bound, so Linnik's theorem is proved. ■