Tuesday, August 19, 2014

Lacunary trigonometric series

In this post, we consider trigonometric series that do not necessarily arise from Fourier series. In particular, we consider lacunary trigonometric series
\[\begin{eqnarray}\sum_{k=0}^{\infty}(a_k\cos(2\pi n_k x)+b_k\sin(2\pi n_k x)),\quad\frac{n_{k+1}}{n_k}\geq \lambda >1\quad \text{for all}\quad k. \quad \quad (1)\end{eqnarray}\]
(It turns out that the sine and cosine form is more convenient than the complex exponential form in what follows.) In other words, the frequencies of the trigonometric functions always grow by at least a factor of $\lambda$. It turns out that for such series, one can formulate some interesting convergence and integrability results that are not generally valid for trigonometric series. For example, Zygmund showed in 1930 that a lacunary series converges on a set of positive measure if and only if the sum of squares of its coefficients converges, which in turn happens if and only if the lacunary series converges almost everywhere. This answers the question of convergence of such a series quite completely, while for a general trigonometric series the task of finding such a convergence criterion is hopeless. In the same paper, Zygmund proved that a lacunary series always converges in a dense set that even has the cardinality of the real numbers, unless the series is trivially divergent, that is, unless $a_k^2+b_k^2\not \to 0$.

Another intriguing property of lacunary trigonometric series is that their value distribution is asymptotically random: the value distribution of the suitably normalized partial sums approaches the normal distribution. This was shown by Salem and Zygmund in 1947. It is also interesting to ask whether a lacunary series is the Fourier series of some integrable function. Kolmogorov proved in 1924 that if this is the case, then this lacunary Fourier series converges almost everywhere, so for lacunary Fourier series there are no problems with convergence (for $L^2$-functions this of course follows from Carleson's theorem on pointwise almost everywhere convergence of Fourier series, but the proof for lacunary series is much simpler, though not trivial).
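The normal approximation is easy to observe numerically. Below is a quick sanity check in Python (a sketch with hypothetical data, not taken from the papers above): it samples the normalized partial sums of the lacunary series with $n_k=2^k$, $a_k=1$, $b_k=0$ on a grid and checks that their mean and variance match the standard Gaussian; a histogram of `vals` would likewise look like a bell curve.

```python
import math

# Hypothetical example: n_k = 2^k, a_k = 1, b_k = 0 (lacunarity lambda = 2).
N = 12                      # number of terms in the partial sum
M = 20000                   # grid size; all frequencies 2^k +- 2^l stay below M,
                            # so grid averages equal the true integrals over [0,1]
C_N = math.sqrt(N / 2)      # normalization sqrt((1/2) * sum of c_k^2)

# Sample S_N(x) / C_N at the grid points x = j / M.
vals = [sum(math.cos(2 * math.pi * 2 ** k * j / M) for k in range(1, N + 1)) / C_N
        for j in range(M)]

mean = sum(vals) / M
var = sum(v * v for v in vals) / M
print(mean, var)  # mean ~ 0 and variance ~ 1, as the Gaussian limit predicts
```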

It must also be mentioned that lacunary trigonometric series, whose frequencies grow too fast compared to the decay of the coefficients, are a good way to construct continuous nowhere differentiable functions, and indeed Weierstrass' example is a lacunary series.
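For concreteness, here is a small numerical experiment (a sketch with hypothetical parameters): for the Weierstrass-type lacunary series $W(x)=\sum_k 2^{-k}\cos(3^k\pi x)$, the difference quotients at the origin along $h=3^{-n}$ grow roughly like $1.5^n$, reflecting the absence of a derivative.

```python
import math

# Weierstrass-type lacunary series W(x) = sum_k a^k cos(b^k pi x),
# with a = 1/2, b = 3; since ab > 1, W is nowhere differentiable.
a, b, K = 0.5, 3, 60   # K terms: the tail beyond k = 60 is below double precision

def W(x):
    return sum(a ** k * math.cos(b ** k * math.pi * x) for k in range(K))

# Difference quotients at 0 along h = 3^{-n}: every term with k >= n contributes
# cos(odd multiple of pi) - 1 = -2, so |W(h) - W(0)| is about 4 * a^n and the
# quotient grows like 4 * (3a)^n = 4 * 1.5^n.
quotients = [abs(W(3.0 ** -n) - W(0)) * 3 ** n for n in range(1, 12)]
print(quotients)  # strictly increasing: no finite derivative at 0
```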

The study of trigonometric series is also closely related to complex analysis: the boundary values of an analytic function $f(z)=\sum_{n=0}^{\infty}a_nz^n$, defined in the unit disc, are at least formally given by the series $\sum_{n=0}^{\infty}a_ne^{in x}$, and conversely, a trigonometric series gives rise to an analytic function in the unit disc, whose boundary values are given by the trigonometric series, provided that the coefficients do not grow too fast.

The above results (which are just a tiny proportion of what is known) should motivate the study of (lacunary) trigonometric series quite well. We will prove some of the mentioned results, following closely the original papers mentioned above.


Lacunary Fourier series

We start with Kolmogorov's result, stating that any lacunary Fourier series converges pointwise almost everywhere (of course, the assumption that this is a Fourier series is crucial).

Theorem 1 (Kolmogorov). Let
\[\begin{eqnarray}\sum_{k=1}^{\infty}(a_k\cos(2\pi n_k x)+b_k\sin(2\pi n_k x)),\quad\frac{n_{k+1}}{n_k}\geq \lambda>1\end{eqnarray}\]
be a lacunary series that is the (trigonometric) Fourier series of some $L^1$-function $f$. Then this series converges pointwise almost everywhere to $f$.

Proof. Let $S_m=S_m(f,x)$ be the partial sums of the series, and let
\[\begin{eqnarray}\sigma_m=\frac{S_0+...+S_{m-1}}{m}.\end{eqnarray}\]
Then $\sigma_m$ converges pointwise almost everywhere to $f$. To see this, write $\sigma_m(x)=F_m*f(x)$, where $F_m$ are the Fejér kernels from an earlier post. Arguing as in that post, we find
\[\begin{eqnarray}|f(x)-f*F_m(x)|&\leq& \int_{-\frac{1}{2}}^{\frac{1}{2}}|f(x)-f(x-y)||F_m(y)|dy\\&=&\int_{-\delta}^{\delta}|f(x)-f(x-y)||F_m(y)|dy\\&+&\int_{\delta<|y|<\frac{1}{2}}|f(x)-f(x-y)||F_m(y)|dy.\end{eqnarray}\]
The Fejér kernel satisfies $0\leq F_m(y)\leq \min\left(m,\frac{c}{my^2}\right)$ for $|y|\leq \frac{1}{2}$ and some absolute constant $c$. Let $x$ be a Lebesgue point of $f$, that is, a point where $\Phi_x(\delta):=\int_{-\delta}^{\delta}|f(x)-f(x-y)|dy=o(\delta)$ as $\delta\to 0$; almost every point is a Lebesgue point by a lemma from real analysis that was mentioned in the earlier post. Let $\varepsilon>0$, and let $\delta$ be so small that $\Phi_x(t)\leq \varepsilon t$ for $0<t\leq 2\delta$. Using $F_m(y)\leq m$ for $|y|\leq \frac{1}{m}$ and $F_m(y)\leq \frac{c}{my^2}$ on the dyadic ranges $\frac{2^j}{m}\leq |y|\leq \frac{2^{j+1}}{m}$, we can bound the integral over $[-\delta,\delta]$ by
\[\begin{eqnarray}m\Phi_x\left(\frac{1}{m}\right)+\sum_{j\geq 0\atop 2^{j+1}\leq 2m\delta}\frac{cm}{4^j}\Phi_x\left(\frac{2^{j+1}}{m}\right)\leq \varepsilon+\sum_{j\geq 0}\frac{cm}{4^j}\cdot \frac{2^{j+1}\varepsilon}{m}=(1+4c)\varepsilon\end{eqnarray}\]
for $m\geq \frac{1}{\delta}$. The bound $F_m(y)\leq \frac{c}{m\delta^2}$ for $\delta<|y|<\frac{1}{2}$ shows that the second integral is at most
\[\begin{eqnarray}\int_{-\frac{1}{2}}^{\frac{1}{2}}|f(x)-f(x-y)|dy\cdot \frac{c}{m\delta^2},\end{eqnarray}\]
which tends to $0$ as $m\to\infty$ for fixed $\delta$. As $\varepsilon>0$ was arbitrary, we obtain the convergence $\sigma_m(x)\to f(x)$ at every Lebesgue point, so almost everywhere.
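The three properties of the Fejér kernel used above ($F_m\geq 0$, unit integral, and the decay $F_m(y)\leq \frac{c}{my^2}$) are easy to verify numerically. The following sketch uses the closed form $F_m(y)=\frac{1}{m}\left(\frac{\sin(\pi m y)}{\sin(\pi y)}\right)^2$ in the period-$1$ normalization of this post; the constant $c=\frac{1}{4}$ comes from the elementary bound $|\sin(\pi y)|\geq 2|y|$ on $[-\frac{1}{2},\frac{1}{2}]$.

```python
import math

def fejer(m, y):
    # Fejer kernel in the period-1 normalization: its integral over [-1/2, 1/2] is 1.
    s = math.sin(math.pi * y)
    if abs(s) < 1e-12:       # y an integer: the kernel attains its maximum m there
        return float(m)
    return (math.sin(math.pi * m * y) / s) ** 2 / m

m, G = 50, 4000
grid = [(j + 0.5) / G - 0.5 for j in range(G)]        # midpoints covering [-1/2, 1/2]

total = sum(fejer(m, y) for y in grid) / G            # quadrature for the integral
nonneg = all(fejer(m, y) >= 0 for y in grid)
decay = all(fejer(m, y) <= 1 / (4 * m * y * y) + 1e-9 for y in grid)
print(total, nonneg, decay)  # integral ~ 1, both pointwise bounds hold on the grid
```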

Now to finish the proof of the theorem, it suffices to show that $|S_{n_m}-\sigma_{n_m}|\to 0$ as $m\to\infty$; this is enough because the partial sums $S_t$ are constant for $n_k\leq t<n_{k+1}$, so the whole sequence $(S_t)$ converges whenever the subsequence $(S_{n_m})$ does. The $j$th lacunary term is contained in $S_t$ exactly when $t\geq n_j$, so it is missing from $n_j$ of the sums $S_0,...,S_{n_m-1}$, and its coefficient in $S_{n_m}-\sigma_{n_m}$ is therefore $\frac{n_j}{n_m}$. This gives
\[\begin{eqnarray}|S_{n_m}-\sigma_{n_m}|\leq \sum_{j=1}^{m} \frac{n_j}{n_m}(|a_{j}|+|b_{j}|).\end{eqnarray}\]
Choose $\varepsilon>0$. By the Riemann-Lebesgue lemma, there exists $M_{\varepsilon}$ such that $|a_{j}|+|b_{j}|<\varepsilon$ for $j\geq M_{\varepsilon}.$ Also $|a_{j}|+|b_{j}|\leq C$ for some constant $C$. Therefore,
\[\begin{eqnarray}|S_{n_m}-\sigma_{n_m}|\leq C\sum_{j=1}^{M_{\varepsilon}} \frac{n_j}{n_m}+\varepsilon\sum_{j=1}^{m}\frac{n_j}{n_m}.\end{eqnarray}\]
By our lacunarity assumption, $\frac{n_j}{n_m}\leq \lambda^{j-m}$, so the second sum is bounded by $\sum_{i=0}^{\infty}\lambda^{-i}=\frac{\lambda}{\lambda-1}$. We get for $|S_{n_m}-\sigma_{n_m}|$ an upper bound
\[\begin{eqnarray}\frac{CM_{\varepsilon}n_{M_{\varepsilon}}}{n_m}+\varepsilon\cdot\frac{\lambda}{\lambda-1}.\end{eqnarray}\]
When $m$ is large enough in terms of $\varepsilon$, this is further bounded by
\[\begin{eqnarray}2\varepsilon\cdot \frac{\lambda}{\lambda-1},\end{eqnarray}\]
and letting $\varepsilon\to 0$, we obtain the theorem. ■
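To see the mechanism of the proof in action, one can evaluate the key bound $\sum_{j\leq m}\frac{n_j}{n_m}(|a_j|+|b_j|)$ for a concrete hypothetical choice of data, say $n_j=2^j$ and $a_j=\frac{1}{j}$, $b_j=0$; the bound decays roughly like $\frac{1}{m}$, so the Cesàro means indeed control the partial sums.

```python
# Hypothetical data: n_j = 2^j, a_j = 1/j, b_j = 0 (coefficients tending to 0,
# as the Riemann-Lebesgue lemma forces for a Fourier series; lambda = 2).
def gap_bound(m):
    # sum_{j=1}^{m} (n_j / n_m) * (|a_j| + |b_j|), the bound on |S_{n_m} - sigma_{n_m}|
    return sum(2 ** j / 2 ** m * (1 / j) for j in range(1, m + 1))

bounds = [gap_bound(m) for m in (5, 10, 20, 40, 80)]
print(bounds)  # decreasing toward 0
```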

Convergence of lacunary trigonometric series

Next we formulate a result that in practice solves the question of pointwise convergence of lacunary trigonometric series.

Theorem 2 (Zygmund). The lacunary series $(1)$ converges on a set of positive measure if and only if
\[\begin{eqnarray}\sum_{k=1}^{\infty}(a_k^2+b_k^2)<\infty.\end{eqnarray}\]
In that case, the sum of the series $(1)$ is an $L^2$-function, so the series has a finite sum almost everywhere.

(Zygmund also proved the generalization that it suffices to assume that the sum of squares of the coefficients is Abel summable).

Proof. One direction of this result is very easy: if $\sum_{k=1}^{\infty}(a_k^2+b_k^2)$ converges, then, as we showed in an earlier post, the series $(1)$ is the Fourier series of an $L^2$-function, and so by Theorem 1 it converges almost everywhere.

Assume now that the series $(1)$ converges for $x\in E$, where $E\subset[0,1]$ has positive measure. Then there is some constant $C$ such that
\[\begin{eqnarray}\left|\sum_{k=1}^{N}(a_k\cos(2\pi n_kx)+b_k\sin(2\pi n_kx))\right|=\left|\sum_{k=1}^{N}c_k\cos(2\pi n_kx+\theta_k)\right|\leq C\end{eqnarray}\]
for all $N$ and for $x\in F$, where $F\subset E$ has positive measure. Indeed, $E$ is the union of the sets $E_C=\{x\in E:\sup_N |S_N(x)|\leq C\}$, $C=1,2,\dots$, since a convergent sequence of partial sums is bounded, and a countable union of null sets would be a null set. Here $c_k=\sqrt{a_k^2+b_k^2}$ and $\theta_k$ are the phase shifts that one needs when combining sines and cosines. Hence also
\[\begin{eqnarray}\left|\sum_{k=M}^{N}c_k\cos(2\pi n_kx+\theta_k)\right|\leq 2C\end{eqnarray}\]
for all $M\leq N$ and all $x\in F$. Integrating over $F$, we find
\[\begin{eqnarray}\int_F \left( \sum_{k=M}^{N}c_k\cos(2\pi n_kx+\theta_k)\right)^2dx\leq 4C^2\cdot m(F).\end{eqnarray}\]
We expand the square to get something that resembles $\sum_{k=M}^{N}c_k^2.$
\[\begin{eqnarray}&&\sum_{k=M}^N \int_F c_k^2\cos^2(2\pi n_kx+\theta_k)dx\nonumber\\&+&2\sum_{k,\ell=M\atop k>\ell}^N\int_F c_k c_{\ell}\cos(2\pi n_k x+\theta_k)\cos(2\pi n_{\ell}x+\theta_{\ell})dx\leq 4C^2\cdot m(F)\quad \quad (2).\end{eqnarray}\]
Let $\alpha_m=\int_{F}\cos(2\pi mx)dx,\beta_m=\int_{F}\sin(2\pi mx)dx$. Then
\[\begin{eqnarray}\sum_{k=M}^N \int_F c_k^2\cos^2(2\pi n_kx+\theta_k)dx&=&\sum_{k=M}^{N}\left(\frac{1}{2}c_k^2+\frac{1}{2}\int_F c_k^2 \cos(2(2\pi n_kx+\theta_k))dx\right)\\&=&\sum_{k=M}^{N}\frac{1}{2}c_k^2\left(1+\alpha_{2n_{k}}\cos(2\theta_k)-\beta_{2n_k}\sin(2\theta_k)\right)\\&\geq& \sum_{k=M}^{N}\frac{1}{2}c_k^2(1-|\alpha_{2n_k}|-|\beta_{2n_k}|).\end{eqnarray}\]
Since $\alpha_m$ and $\beta_m$ are the Fourier coefficients of the integrable function $1_F$, they converge to zero by the Riemann-Lebesgue lemma. Therefore, if $M\geq M_0$ for a suitable $M_0$, the first sum in $(2)$ is at least
\[\begin{eqnarray}\frac{1}{3}\sum_{k=M}^{N}c_k^2,\end{eqnarray}\]
and now it is enough to show that the second sum in $(2)$ is small compared to $\sum_{k=M}^{N}c_k^2$ to finish the proof. Using the product formula for cosines, we see that
\[\begin{eqnarray}&&2\sum_{k,\ell=M\atop k>\ell}^N\int_F c_k c_{\ell}\cos(2\pi n_k x+\theta_k)\cos(2\pi n_{\ell}x+\theta_{\ell})dx\\&=&\sum_{k,\ell=M\atop k>\ell}^N\int_F c_k c_{\ell}(\cos(2\pi (n_k+n_{\ell})x+(\theta_k+\theta_{\ell}))+\cos(2\pi (n_k-n_{\ell})x+(\theta_k-\theta_{\ell})))dx\\&=&\sum_{k,\ell=M\atop k>\ell}^N c_kc_{\ell}(\alpha_{n_k+n_{\ell}}\cos(\theta_k+\theta_{\ell})-\beta_{n_k+n_{\ell}}\sin(\theta_k+\theta_{\ell})\\&&\quad\quad\quad+\alpha_{n_k-n_{\ell}}\cos(\theta_k-\theta_{\ell})-\beta_{n_k-n_{\ell}}\sin(\theta_k-\theta_{\ell})).\end{eqnarray}\]
If we denote
\[\begin{eqnarray}b_{k,\ell}=\alpha_{n_k+n_{\ell}}\cos(\theta_k+\theta_{\ell})-\beta_{n_k+n_{\ell}}\sin(\theta_k+\theta_{\ell})+\alpha_{n_k-n_{\ell}}\cos(\theta_k-\theta_{\ell})-\beta_{n_k-n_{\ell}}\sin(\theta_k-\theta_{\ell}),\end{eqnarray}\]
then
\[\begin{eqnarray}b_{k,\ell}^2\leq 2(\alpha_{n_k+n_{\ell}}^2+\beta_{n_k+n_{\ell}}^2+\alpha_{n_k-n_{\ell}}^2+\beta_{n_k-n_{\ell}}^2)\quad \quad (3).\end{eqnarray}\]
By Cauchy-Schwarz, the second sum in $(2)$ is at most
\[\begin{eqnarray}\sum_{k,\ell=M\atop k>\ell}^N|c_kc_{\ell}b_{k,\ell}|&\leq& \sqrt{\sum_{k,\ell=M\atop k>\ell}^N c_k^2c_{\ell}^2 \sum_{k,\ell=M\atop k>\ell}^N b_{k,\ell}^2}\\&\leq& \sum_{k=M}^{N}c_k^2 \sqrt{\sum_{k,\ell=M\atop k>\ell}^N b_{k,\ell}^2}.\end{eqnarray}\]
If we show that
\[\begin{eqnarray}\sum_{k,\ell=M\atop k>\ell}^{\infty} b_{k,\ell}^2\quad \quad (4)\end{eqnarray}\]
converges, we are done: for large enough $M$, the sum $(4)$ is at most $\frac{1}{16}$, so the second sum in $(2)$ is at most $\frac{1}{4}\sum_{k=M}^{N}c_k^2$ in absolute value, and $(2)$ then yields $\left(\frac{1}{3}-\frac{1}{4}\right)\sum_{k=M}^N c_k^2\leq 4C^2\cdot m(F)$; hence the partial sums of $\sum_{k}c_k^2$ are bounded.

The sum $(4)$ can be split into four parts using $(3)$; we show that the part arising from $\alpha_{n_k+n_{\ell}}^2$ converges, and the other parts can be treated similarly. At this point lacunarity is important. We notice that the number of representations of an integer $m$ in the form $m=n_k+ n_{\ell}$ with $k>\ell$ is bounded by a constant $C(\lambda)$. Indeed, suppose $n_k+ n_{\ell}=n_r+ n_s$ with $k>\ell$, $r>s$ and $k>r$. Then
\[\begin{eqnarray}n_k- n_{r}\geq(\lambda^{k-r}-1)n_r>n_r\geq n_s> n_s-n_{\ell}\end{eqnarray}\]
as soon as $\lambda^{k-r}>2$, which holds if $k-r$ is large enough in terms of $\lambda$; this contradicts the identity $n_k-n_r=n_s-n_{\ell}$. Hence $k-r$ is bounded, so for a given $m$ the largest index $k$ can take only boundedly many values, and then $\ell$ is uniquely determined by $n_{\ell}=m-n_k$.
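This representation bound is easy to test by brute force. The sketch below builds a hypothetical lacunary sequence with $\lambda=\frac{3}{2}$ and counts how many times each integer occurs as $n_k+n_{\ell}$; for this $\lambda$ no integer can occur more than twice, since the interval $(\frac{m}{2},m)$ contains at most two terms of the sequence.

```python
import math
from collections import Counter

# Hypothetical lacunary sequence with ratio lambda = 3/2.
n = [2]
while len(n) < 40:
    n.append(math.ceil(1.5 * n[-1]))

# Count representations of each integer as n_k + n_l with k > l.
reps = Counter(n[k] + n[l] for k in range(len(n)) for l in range(k))
print(max(reps.values()))  # bounded by a constant C(lambda); here at most 2
```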

Thus, it suffices to show that
\[\begin{eqnarray}\sum_{m=1}^{\infty} (\alpha_m^2+\beta_m^2)\end{eqnarray}\]
converges, and this is true by Bessel's inequality, since $\alpha_m$ and $\beta_m$ are the Fourier coefficients of an $L^2$-function, namely $1_F$. Moreover, the frequencies $n_k+n_{\ell}$ and $n_k-n_{\ell}\geq (1-\lambda^{-1})n_k$ occurring in $(4)$ tend to infinity as $M\to\infty$, so the tail sums indeed tend to $0$. The proof is now finished. ■

Limiting distribution of lacunary series

Next we prove the result that the value distribution of the normalized partial sums of a lacunary series converges to the normal distribution. We will in fact see that the limiting value distribution is always Gaussian, unless there is a trivial reason why it could not be (see (ii) below).

Theorem 3 (Salem--Zygmund). Let
\[\begin{eqnarray}S_N(x)=\sum_{k=1}^{N}(a_k\cos(n_k x)+b_k\sin(n_k x)),\quad \frac{n_{k+1}}{n_k}\geq \lambda>1\end{eqnarray}\]
be the partial sums of a lacunary trigonometric series, and let
\[\begin{eqnarray}C_N=\sqrt{\frac{1}{2}\sum_{k=1}^{N}c_k^2},\end{eqnarray}\]
where $c_k^2=a_k^2+b_k^2$.

(i) If $\frac{c_n}{C_n}\to 0$ and $C_n\to \infty$ as $n\to \infty$, then for any measurable $E\subset [0,2\pi]$ of positive measure we have
\[\begin{eqnarray}\frac{1}{m(E)}m\left(\left\{x\in E:\frac{S_n(x)}{C_n}\leq t\right\}\right)\xrightarrow{n\to \infty}\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{t} e^{-\frac{y^2}{2}}dy,\end{eqnarray}\]
where $m(\cdot)$ denotes the Lebesgue measure.
(ii) If $\frac{S_N(x)}{C_N}$ is bounded on a set of positive measure and its value distribution converges to some distribution function $F(t)$ at the continuity points of $F$, where either $F(t)<1$ for all $t$ or $F(t)>0$ for all $t$, then $F$ is the distribution function of the standard normal distribution.

Proof. We first show that (i) implies (ii). It suffices to show that in the situation of (ii) we have $\frac{c_N}{C_N}\to 0$. Assume this is not the case; then $\frac{c_N}{C_N}\geq \sqrt{2}\varepsilon$ for some $\varepsilon>0$ and infinitely many $N$. Since
\[\begin{eqnarray}\frac{S_N(x)}{C_N}=\frac{S_{N-1}(x)}{C_{N-1}}\cdot \frac{C_{N-1}}{C_N}+\frac{a_N\cos(n_Nx)+b_N\sin(n_Nx)}{C_N},\end{eqnarray}\]
and since $\frac{C_{N-1}^2}{C_N^2}=1-\frac{c_N^2}{C_N^2}\leq 1-2\varepsilon^2$ and $\frac{|a_N\cos(n_Nx)+b_N\sin(n_Nx)|}{C_N}\leq \frac{c_N}{C_N}\leq \sqrt{2}$, we see that for such $N$ and for $t\geq 0$, the inequality $\frac{S_{N-1}(x)}{C_{N-1}}\leq t$ implies $\frac{S_N(x)}{C_N}\leq t\sqrt{1-\varepsilon^2}+\sqrt{2}$.

Hence the assumption about convergence in distribution gives $F(t)\leq F(t\sqrt{1-\varepsilon^2}+\sqrt{2})$ at continuity points $t$ of $F$. Since $F$ is monotonic, it has only countably many discontinuity points, so this inequality holds for arbitrarily large values of $t$. On the other hand, $t\sqrt{1-\varepsilon^2}+\sqrt{2}<t$ for large $t$, so monotonicity gives the reverse inequality, and thus $F(t)=F(t\sqrt{1-\varepsilon^2}+\sqrt{2})$ at all large continuity points $t$. Iterating this identity, $F$ is constant on the large continuity points, and the constant must be $\lim_{t\to\infty}F(t)=1$; hence $F$ attains the value $1$, contradicting the assumption $F(t)<1$ for all $t$. (In the case $F(t)>0$ for all $t$, one argues in the same way as $t\to-\infty$.)

We now turn to (i). We make the additional assumption $\lambda\geq 3$, since this allows us to skip some technical computations and cut the length of the proof roughly in half (one can find the general case in the paper of Salem and Zygmund). For simplicity, we also assume that $S_N(x)$ is a cosine series; the general case is similar. We write $A_N$ instead of $C_N$ in what follows.

By Lévy's continuity theorem, the distribution function $F_N(t,E)$ of $\frac{S_N(x)}{A_N}$ (with $x$ restricted to $E$) converges to the normal distribution if and only if its characteristic function converges pointwise to $e^{-\frac{\xi^2}{2}}$, which is the characteristic function of the standard Gaussian. Using the substitution $t=\frac{S_N(x)}{A_N}$, the characteristic function of $F_N(t,E)$ is equal to
\[\begin{eqnarray}\int_{-\infty}^{\infty}e^{it\xi}dF_N(t,E)&=&m(E)^{-1}\int_{E}e^{i\frac{S_N(x)}{A_N}\xi}dx\\&=&m(E)^{-1}\int_{E}\exp\left(iA_N^{-1}\sum_{k=1}^N a_k\cos(n_k x)\xi\right)dx\\&=&m(E)^{-1}\int_{E}\prod_{k=1}^N \exp\left(iA_N^{-1} a_k\cos(n_k x)\xi\right)dx.\end{eqnarray}\]
We assume $|\xi|\leq M$ for some $M$, where $M$ is arbitrary. The estimate $e^z=(1+z)e^{\frac{1}{2}z^2+o(z^2)}$ uniformly as $z\to 0$ implies that the previous integral is
\[\begin{eqnarray}e^{o(1)}m(E)^{-1}\int_{E}\prod_{k=1}^{N}\left(1+i\xi A_N^{-1}a_k\cos(n_kx)\right)\exp\left(-\frac{1}{2}A_N^{-2}a_k^2\cos^2(n_kx)\xi^2\right)dx\quad (5);\end{eqnarray}\]
the error $e^{o(1)}$ comes from the fact that
\[\begin{eqnarray}\prod_{k=1}^N \exp(o(A_N^{-2}a_k^2\cos^2(n_k x)\xi^2) )=\exp\left(o\left(M^2\cdot\sum_{k=1}^{N}\frac{a_k^2}{A_N^2}\cos^2(n_k x) \right) \right)=e^{o(1)}.\end{eqnarray}\]
(we were allowed to move the $o$ sign outside since the estimate $e^z=(1+z)e^{\frac{1}{2}z^2+o(z^2)}$ was uniform).
Our task is to estimate the terms of $(5)$. The second product in $(5)$ is
\[\begin{eqnarray}&&\prod_{k=1}^{N}\exp\left(-\frac{1}{2}A_N^{-2}a_k^2\cos^2(n_kx)\xi^2\right)\\&=&\prod_{k=1}^{N}\exp\left(-\frac{1}{4}A_N^{-2}a_k^2 \xi^2\right)\cdot \exp\left(-\frac{1}{4}A_N^{-2}a_k^2\cos(2n_k x)\xi^2\right)\\&=&e^{-\frac{\xi^2}{2}}\exp\left(\sum_{k=1}^{N}-\frac{1}{4}A_N^{-2}a_k^2\cos(2n_k x)\xi^2\right)\end{eqnarray}\]
since $\sum_{k=1}^{N}\frac{a_k^2}{A_N^2}=2.$ Let
\[\begin{eqnarray}\Sigma_N(x)=\sum_{k=1}^{N}-\frac{1}{4}A_N^{-2}a_k^2\cos(2n_k x).\end{eqnarray}\]
Then, by Parseval's formula,
\[\begin{eqnarray}\varepsilon_N^2:=\int_0^{2\pi}\Sigma_N(x)^2dx=\frac{\pi}{16}\sum_{k=1}^N \frac{a_k^4}{A_N^4}\xrightarrow{N\to\infty}0;\end{eqnarray}\]
indeed, given $\varepsilon>0$, the assumption $\frac{c_k}{C_k}\to 0$ gives $a_k^2\leq \varepsilon A_k^2\leq \varepsilon A_N^2$ for $K_{\varepsilon}\leq k\leq N$, so that $\sum_{k=1}^{N}a_k^4\leq C_{\varepsilon}+\varepsilon A_N^2\sum_{k=1}^{N}a_k^2=C_{\varepsilon}+2\varepsilon A_N^4$, and $A_N\to \infty$. Setting $\delta_N=\varepsilon_N^{2/3}$, so that $\delta_N\to 0$, Chebyshev's inequality gives
\[\begin{eqnarray}m(\{x\in E:|\Sigma_N(x)|\geq \delta_N\})\leq \delta_N^{-2}\int_E \Sigma_N(x)^2dx\leq \varepsilon_N^{2/3}=o(1).\end{eqnarray}\]
Moreover, $|\Sigma_N(x)|\leq \frac{1}{2}$ for all $x$. Hence the integral $\int_{E}\prod_{k=1}^{N}\left(1+i\xi A_N^{-1}a_k\cos(n_kx)\right)\exp(\Sigma_N(x)\xi^2)dx$ appearing in $(5)$ differs from $\int_{E}\prod_{k=1}^{N}\left(1+i\xi A_N^{-1}a_k\cos(n_kx)\right)dx$ by at most
\[\begin{eqnarray}&&\int_{\{x\in E:\,|\Sigma_N(x)|<\delta_N\}}\prod_{k=1}^{N}\left|1+i\xi A_N^{-1}a_k\cos(n_kx)\right|\left(e^{\delta_N\xi^2}-1\right)dx\\&+&\int_{\{x\in E:\,|\Sigma_N(x)|\geq \delta_N\}}\prod_{k=1}^{N}\left|1+i\xi A_N^{-1}a_k\cos(n_kx)\right|\left(e^{\frac{\xi^2}{2}}+1\right)dx.\end{eqnarray}\]

Because of $|1+ix|\leq e^{x^2}$, we have
\[\begin{eqnarray}\prod_{k=1}^{N}|1+i\xi A_N^{-1}a_k\cos(n_kx)|\leq \prod_{k=1}^{N}\exp(A_N^{-2}a_k^2 \xi^2)\leq e^{2M^2},\end{eqnarray}\]
so the first integral above is at most $e^{2M^2}(e^{\delta_N M^2}-1)=o(1)$, and the second is $o(1)$ as we are integrating a bounded function over a set of measure $o(1)$.

Therefore, we are left with proving
\[\begin{eqnarray}\int_{E}\prod_{k=1}^{N}\left(1+i\xi A_N^{-1}a_k\cos(n_k x)\right)dx\xrightarrow{N\to \infty} m(E).\end{eqnarray}\]
Multiplying out, we may write
\[\begin{eqnarray}\prod_{k=1}^{N}\left(1+i\xi A_N^{-1}a_k\cos(n_k x)\right)=1+\sum_{k=1}^{\infty}\alpha_k(N)\cos(kx),\end{eqnarray}\]
where the $\alpha_k(N)$ are the Fourier cosine coefficients of the function on the left-hand side (they depend on $\xi$, and only finitely many of them are nonzero; the constant term is $1$, since by the argument below no nonempty signed sum $\pm n_{k_1}\pm\dots\pm n_{k_s}$ vanishes). Integration over $E$ gives
\[\begin{eqnarray}\int_E \prod_{k=1}^{N}\left(1+i\xi A_N^{-1}a_k\cos(n_k x)\right)dx=m(E)+\sum_{k=1}^{\infty}\alpha_k(N)\eta_k \quad \quad (6),\end{eqnarray}\]
where $\eta_k=\int_E \cos(kx)dx.$ We will show that $\alpha_k(N)=o(1)$ as $N\to\infty$ for each fixed $k$. Expanding the product, $\cos(n_{k_1}x)\cdots\cos(n_{k_s}x)$ is a linear combination of cosines with frequencies $|\pm n_{k_1}\pm\dots\pm n_{k_s}|$, so the coefficient of $\cos(kx)$ is
\[\begin{eqnarray}O\left(\sum_{k_1<...<k_s\atop k=\pm n_{k_1}\pm...\pm n_{k_s}} |\gamma_{k_1}\cdots\gamma_{k_s}| \right)\quad \quad (7),\end{eqnarray}\]
where $\gamma_k=i\xi A_N^{-1}a_k$. Now we exploit lacunarity. The additional assumption $\lambda\geq 3$ guarantees that no number has two representations of the form $\pm n_{k_1}\pm...\pm n_{k_s}$. Indeed, two representations would imply
\[\begin{eqnarray}n_{k_s}+a_{s-1}n_{k_{s-1}}+...+a_1n_{k_1}=0\end{eqnarray}\]
for some $k_1<k_2<...<k_s$ and $a_i\in \{-2,-1,0,1,2\}.$ However,
\[\begin{eqnarray}&&n_{k_s}+a_{s-1}n_{k_{s-1}}+...+a_1n_{k_1}\\&>& (1-2\lambda^{-1}-2\lambda^{-2}-...)n_{k_s}\\&=&\frac{\lambda-3}{\lambda-1}n_{k_s}\geq 0\end{eqnarray}\]
if $\lambda\geq 3$, which is a contradiction.
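The uniqueness of signed representations can also be confirmed by brute force in the borderline case $\lambda=3$ (a quick sketch with the hypothetical choice $n_k=3^k$; the claim is then the uniqueness of balanced ternary representations):

```python
from itertools import product

# For n_k = 3^k, check that all signed sums sum_k eps_k * n_k with
# eps_k in {-1, 0, 1} are distinct (uniqueness of balanced ternary).
n = [3 ** k for k in range(10)]
sums = [sum(e * nk for e, nk in zip(eps, n)) for eps in product((-1, 0, 1), repeat=10)]
print(len(sums), len(set(sums)))  # 59049 59049: every signed sum is distinct
```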

Thus the sum in $(7)$ contains at most one term. That term is $o(1)$ as $N\to\infty$ for any fixed $k$: we have $\left|\frac{a_j}{A_j}\right|<\varepsilon$ for $j>K_{\varepsilon}$, hence $\left|\frac{a_j}{A_N}\right|<\varepsilon$ for $j>K_{\varepsilon}$, and also $\left|\frac{a_j}{A_N}\right|<\varepsilon$ for $j\leq K_{\varepsilon}$ once $N$ is large enough, since $A_N\to \infty$ as $N\to \infty.$ This proves $\alpha_k(N)=o(1)$, so the difference of $(6)$ from $m(E)$ is bounded by
\[\begin{eqnarray}o(1)+\sum_{k=L}^{\infty}|\alpha_k(N)||\eta_k|\end{eqnarray}\]
for any fixed $L$. By Cauchy-Schwarz, the sum above is at most
\[\begin{eqnarray}\left(\sum_{k=L}^{\infty}|\alpha_k(N)|^2\right)^{\frac{1}{2}}\left(\sum_{k=L}^{\infty}|\eta_k|^2\right)^{\frac{1}{2}}.\end{eqnarray}\]
Since $1_E$ is square integrable, the latter sum can be made arbitrarily small by choosing $L$ large enough. The numbers $\alpha_k(N)$ are the Fourier coefficients of the function $f_N(x)=\prod_{k=1}^{N}\left(1+i\xi A_N^{-1}a_k\cos(n_k x)\right)$. We know that the functions $f_N$ are uniformly bounded, so their $L^2$ norms are uniformly bounded, implying that $\sum_{k=L}^{\infty}|\alpha_k(N)|^2$ is uniformly bounded. In conclusion, the difference of $(6)$ from $m(E)$ is $o(1)$, and this is what we wanted to show. ■