Monday, July 28, 2014

Divergence of Fourier series

In this post, we discuss divergence results for Fourier series; this previous post was about convergence results. We showed earlier that quite general functions, such as Hölder continuous functions, have pointwise convergent Fourier series. It is natural to ask how far one can push these results so that they remain true. In particular, the cases of arbitrary continuous and integrable functions are very natural to consider, and it turns out that both of these spaces admit counterexamples. Already in 1873, du Bois-Reymond constructed a continuous function whose Fourier series diverges at a point. The $L^1$-case is even more pathological, as Kolmogorov was able to produce in 1923 an $L^1$-function whose Fourier series diverges pointwise almost everywhere. In 1966, Kahane and Katznelson gave a strengthening of du Bois-Reymond's result: for any set $E\subset(0,1)$ of measure zero, there exists a continuous function whose Fourier series diverges on $E$. This nicely complements Carleson's result mentioned in the earlier post, since continuous periodic functions, being $L^2$-functions, have pointwise almost everywhere convergent Fourier series.

We are going to prove the first two of the divergence results mentioned above. Both du Bois-Reymond's and Kolmogorov's proofs are good examples of a general method for constructing counterexamples in analysis: take a sequence of functions, each of which is ''closer'' to being a counterexample than the previous ones, and use these functions to form a series that is pathological enough to give the desired counterexample. This is, for example, also the method one usually uses to construct a nowhere differentiable continuous function. However, for du Bois-Reymond's result, we will apply the Banach-Steinhaus theorem, as this approach is perhaps clearer and emphasizes another important method for showing the existence of counterexamples: suppose there is no counterexample in the function space under consideration, and show that the absence of bad functions gives the space properties that are too good to be true. Another application of that principle is the proof that there exists a sequence $(a_n)_{n=-\infty}^{\infty}$ converging to zero that is not the sequence of Fourier coefficients of any $L^1$-function (by the Riemann-Lebesgue lemma, this decay condition on $(a_n)$ is necessary). A drawback of such proofs is that they give no explicit example, but in practice the constructive proofs are in many cases not very explicit either.

Sunday, July 27, 2014

Cyclotomic polynomials

In this post, we consider cyclotomic polynomials, which are a special class of polynomials with various connections to number theory, algebra, and even the constructibility of regular polygons using ruler and compass. After deriving the basic properties of cyclotomic polynomials, we use them to give an elementary proof of the infinitude of primes $p\equiv 1 \pmod n$, as well as to determine which regular polygons are constructible. In a later post, we will also see Zsigmondy's theorem on the prime divisors of $a^n-b^n$ as an application of the properties of these polynomials. We assume mostly just basic properties of complex numbers, so the post is also suitable for olympiad training.

The cyclotomic polynomials arise from a very simple equation: the equation $z^n=1$, whose complex roots, the $n$th roots of unity, are given by $1,e^{\frac{2\pi i}{n}},...,e^{\frac{2\pi i(n-1)}{n}}$, where $e^{ix}=\cos x +i\sin x$. The number $n$ will always be a positive integer. The roots form a regular $n$-gon in the complex plane. A root $\zeta$ of the equation $z^n=1$ is called a primitive $n$th root of unity if $\zeta^k\neq 1$ for $0<k<n$, or equivalently if the set $\{1,\zeta,...,\zeta^{n-1}\}$ contains all the roots of $z^n=1$. Then we define the $n$th cyclotomic polynomial as
\[\begin{eqnarray}\Phi_n(x):=\prod_{\zeta\in S_n}(x-\zeta),\end{eqnarray}\]
where $S_n$ is the set of primitive $n$th roots of unity. Clearly the primitive roots are $e^{\frac{2\pi i k}{n}},$ where $(k,n)=1$, so there are $\varphi(n)$ of them. Hence $\Phi_n(x)$ is a monic polynomial of degree $\varphi(n)$. If $\zeta$ is any $n$th root of unity, then $\zeta$ lies in a unique $S_d$, where $d\leq n$ is the smallest positive integer with $\zeta^d=1$. We have $d\mid n$, since if $n=kd+r$ with $0<r<d$, then $\zeta^r=\zeta^{n-kd}=1$, contradicting the minimality of $d$. This gives the important identity
\[\begin{eqnarray}x^n-1=\prod_{d\mid n}\Phi_d(x),\end{eqnarray}\]
since both sides are monic polynomials and the roots of the right-hand side are exactly the elements of the sets $S_d$ for $d\mid n$ (there are no multiple roots, as each root belongs to a unique $S_d$). This formula allows us, for example, to calculate cyclotomic polynomials explicitly. Obviously $\Phi_1(x)=x-1$, and then
\[\begin{eqnarray}\Phi_n(x)=\frac{x^n-1}{\prod'_{d\mid n} \Phi_d(x)},\end{eqnarray}\]
where $'$ denotes that the term $d=n$ is omitted. Hence each cyclotomic polynomial is determined by the previous ones.
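As a concrete aside (this sketch is my addition, not part of the original text), the recursion is easy to carry out on a computer: represent polynomials as integer coefficient lists and divide $x^n-1$ by $\Phi_d$ for every proper divisor $d$ of $n$ using exact long division. The function names are of course just illustrative.

```python
from functools import lru_cache

def poly_div_exact(num, den):
    """Long division of integer polynomials (coefficients listed from the
    constant term upwards), assuming den is monic and the division is exact."""
    num = list(num)
    quot = [0] * (len(num) - len(den) + 1)
    for i in reversed(range(len(quot))):
        quot[i] = num[i + len(den) - 1]  # works since den is monic
        for j, c in enumerate(den):
            num[i + j] -= quot[i] * c
    return quot

@lru_cache(maxsize=None)
def cyclotomic(n):
    """nth cyclotomic polynomial, via x^n - 1 = prod_{d | n} Phi_d(x)."""
    poly = [-1] + [0] * (n - 1) + [1]  # x^n - 1
    for d in range(1, n):
        if n % d == 0:  # divide out Phi_d for each proper divisor d of n
            poly = poly_div_exact(poly, cyclotomic(d))
    return tuple(poly)

print(cyclotomic(6))   # (1, -1, 1): Phi_6(x) = x^2 - x + 1
print(cyclotomic(12))  # (1, 0, -1, 0, 1): Phi_12(x) = x^4 - x^2 + 1
```

In particular, the following holds.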

Wednesday, July 16, 2014

Convergence of Fourier series

In this post, we discuss some non-trivial convergence results for Fourier series. Convergence questions for Fourier series have historically been an important motivation for developing entire branches of analysis, such as Lebesgue integration theory and functional analysis. Fourier himself did not address convergence issues when considering Fourier series in 1807, but Dirichlet and Riemann already made significant progress around 1830-1860. Another early result was Dini's test ($\sim$ 1880), which for example implies pointwise convergence of the Fourier series of any Hölder continuous function. A bit later, Fejér ($\sim$ 1900) showed that the averages of the Fourier partial sums of a continuous function converge uniformly to the function. Shortly after that, in 1907, Riesz and Fischer proved an elegant theorem about the $L^2$ convergence of Fourier series.

We are going to prove the theorems mentioned above. This post deals only with positive results on convergence; negative results will be considered in a later post. One of course has to mention the marvelous results by Carleson from 1966, that the Fourier series of an $L^2$ function converges pointwise almost everywhere, and by Hunt from 1968, that the same holds for $L^p$ functions for $1<p<\infty$. The proofs are notoriously difficult, and we will not address them.

Friday, July 4, 2014

Fourier series and sum identities

We derive the basic properties of Fourier series in this post and apply them to prove some elegant sum and product identities, namely a formula for the values of the zeta function at even integers and the product formula for the sine function. As mentioned in the introductory post, we want to represent periodic functions (we may assume that the period is $1$ by scaling) that are integrable over the period in the form
\[\begin{eqnarray}f(x)\stackrel{?}{=}\sum_{n=-\infty}^{\infty}\hat{f}(n)e^{2\pi i n x}, \quad x \in \mathbb{R},\end{eqnarray}\]
where the Fourier coefficients for $n\in \mathbb{Z}$ are given by
\[\begin{eqnarray}\hat{f}(n)=\int_{0}^1 f(x)e^{-2\pi i n x}dx, \quad n \in \mathbb{Z},\quad f\in L^1([0,1])\end{eqnarray}\]
(by $f\in L^1([0,1]),$ we mean that the restriction of $f$ to $[0,1]$ is Lebesgue integrable).
First of all, if $f$ is a trigonometric polynomial, say $f(x)=\sum_{n=-N}^Na_ne^{2\pi i n x}$, then
\[\begin{eqnarray}\int_ {0}^1 f(x)e^{-2\pi i m x}dx=\sum_{n=-N}^N\int_{0}^1 a_n e^{2\pi i n x}e^{-2\pi i m x}dx=a_m,\end{eqnarray}\]
and more generally, whenever integration and summation can be interchanged, the coefficients $\hat{f}(n)$ are the only possible coefficients for representing $f$ (pointwise) as a trigonometric series.
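As a quick numerical sanity check (my addition, not in the original post), one can approximate $\hat{f}(n)$ by a Riemann sum for a small trigonometric polynomial and verify that the coefficients are recovered; the step count `M` below is an arbitrary choice.

```python
import cmath

def fourier_coefficient(f, n, M=4096):
    """Approximate f_hat(n) = int_0^1 f(x) e^{-2 pi i n x} dx by a uniform Riemann sum
    (in fact exact, up to rounding, for trigonometric polynomials of low degree)."""
    return sum(f(k / M) * cmath.exp(-2j * cmath.pi * n * k / M) for k in range(M)) / M

# a trigonometric polynomial with coefficients a_{-1} = 2, a_0 = -1, a_3 = 0.5
f = lambda x: (2 * cmath.exp(-2j * cmath.pi * x) - 1
               + 0.5 * cmath.exp(2j * cmath.pi * 3 * x))

for n in (-1, 0, 1, 3):
    print(n, fourier_coefficient(f, n))  # approximately 2, -1, 0, 0.5
```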

Some natural questions about Fourier series arise:
(1) When is a Fourier series convergent, and in what sense:
(1a) pointwise,
(1b) absolutely,
(1c) uniformly,
(1d) on average (in the Cesàro sense),
(1e) in $L^p$ norms,
(1f) almost everywhere?
(2) How does smoothness or integrability of $f$ affect the size of the Fourier coefficients $\hat{f}(n)$?
(3) Do the Fourier coefficients $\hat{f}(n)$ characterize $f$ almost everywhere?
(4) Given a convergent trigonometric series, is it the Fourier series of some function?
(5) Does the Fourier series always converge to $f$ if it converges in the first place?
These have been important questions in Fourier analysis for centuries. Nowadays, quite general answers to these questions are known; especially between the 1850s and the 1960s, a huge amount of progress was made. We will discuss these questions in the following posts.

A very short introduction to Fourier analysis

In later posts, we consider properties of Fourier series (for instance, convergence and divergence results), as well as lacunary trigonometric series and applications of Fourier series to sum identities, combinatorics and number theory. After that, we discuss the Fourier transform in $\mathbb{R}^d$ and phenomena related to it (uncertainty principles, $L^2$ theory, tempered distributions), and applications to the central limit theorem and PDE.

In this post, we say a bit about Fourier analysis in general. In classical Fourier analysis, we have a function $f:G\to \mathbb{C}$, where $G$ is usually the integers modulo $N$, that is $\mathbb{Z}_N$, or the real numbers $\pmod 1$, that is $\mathbb{R}/ \mathbb{Z}$, or the whole Euclidean space $\mathbb{R}^d$ ($G$ can also be the integers $\mathbb{Z}$, but that situation is just dual to $G=\mathbb{R}/\mathbb{Z}$). More generally, in harmonic analysis, $G$ could be any locally compact abelian group, and then the analogue of an exponential function would be a homomorphism from $G$ to the unit circle in $\mathbb{C}$. We want to represent the function $f:G\to \mathbb{C}$ as a sum or integral (depending on the domain) of combinations of trigonometric functions. It turns out that a general function, with some mild regularity properties, has such a representation, so many ''linear'' problems concerning arbitrary functions can be reduced to corresponding problems for trigonometric functions (a ''linear'' problem could for instance be a linear PDE, such as the heat equation, which was Joseph Fourier's original motivation, or a problem about arithmetic progressions). We use the trigonometric exponentials $e^{2\pi in x}$ instead of sines and cosines, since otherwise we would have to do everything separately for both of them; besides, cosines and sines can be recovered from trigonometric exponentials by taking real and imaginary parts. The representations we are looking for are
$\begin{eqnarray}f(n)&=&\sum_{k\in \mathbb{Z}_N}\hat{f}(k)e^{\frac{2\pi i k n}{N}},\quad\quad\,\,\,\, \,f:\mathbb{Z}_N\to \mathbb{C}\\f(x)&=&\sum_{n=-\infty}^{\infty}\hat{f}(n)e^{2\pi i n x},\quad \quad f:\mathbb{R}/\mathbb{Z}\to \mathbb{C}\\f(x)&=&\int_{\mathbb{R}^d}\hat{f}(\xi)e^{2\pi i x\cdot\xi}d\xi,\quad \quad f:\mathbb{R}^d\to \mathbb{C},\end{eqnarray}$
where $x\cdot \xi$ is the inner product of two vectors from $\mathbb{R}^d$.
The function $\hat{f}$ is a certain coefficient function, but not just any function: it is called the Fourier transform of $f$ in the continuous case and the sequence of Fourier coefficients of $f$ in the discrete case. From the orthogonality relation
$\begin{eqnarray}\int_{0}^{1}e^{2\pi i (m-n)x}dx=\begin{cases} 0, \quad m \neq n\\ 1,\quad m=n,\end{cases}\end{eqnarray}$
(the name comes from the fact that this relation says that the pairwise inner products of distinct functions $e^{2\pi in x}$ are $0$), or from its analogue for sums, it follows that if we multiply both sides of the three equations above by suitable exponentials, integrate (or sum) both sides, and formally change the order of integration (or summation), we arrive at
$\begin{eqnarray}\hat{f}(n)&=&\frac{1}{N}\sum_{k\in \mathbb{Z}_N}f(k)e^{-\frac{2\pi i k n}{N}},\quad \quad n \in \mathbb{Z}_N\\\hat{f}(n)&=&\int_{0}^1 f(x)e^{-2\pi i n x}dx,\quad\,\,\,\,\,\, n \in \mathbb{Z}\\ \hat{f}(\xi)&=&\int_{\mathbb{R}^d}f(x)e^{-2\pi i x \cdot\xi}dx,\quad \,\,\,\xi \in \mathbb{R}^d\end{eqnarray}$
(in the first case, there is no problem in changing the order of summation). These formulas are taken as the definition of the Fourier coefficients of $f$ (or of the Fourier transform of $f$ in the third case). Notice that these three formulas closely resemble the three representation formulas given above, so the Fourier transform is essentially its own dual operation, which is of course an important fact.
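To see the finite case $G=\mathbb{Z}_N$ in action (this snippet is my illustration, not part of the original text), the following Python code computes the coefficients $\hat{f}(n)$ by the first formula above and then reconstructs $f$ from them by the corresponding representation formula:

```python
import cmath

def dft_coefficients(f_values):
    """Fourier coefficients on Z_N: f_hat(n) = (1/N) sum_k f(k) e^{-2 pi i k n / N}."""
    N = len(f_values)
    return [sum(f_values[k] * cmath.exp(-2j * cmath.pi * k * n / N)
                for k in range(N)) / N
            for n in range(N)]

def reconstruct(f_hat):
    """Representation formula on Z_N: f(k) = sum_n f_hat(n) e^{2 pi i k n / N}."""
    N = len(f_hat)
    return [sum(f_hat[n] * cmath.exp(2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

f_values = [1, 2, 3, 4, 0, -1]  # an arbitrary function f: Z_6 -> C
errors = [abs(u - v) for u, v in zip(f_values, reconstruct(dft_coefficients(f_values)))]
print(max(errors))  # of the order 1e-15: the representation recovers f exactly
```

Next we say a few words about the properties of these transforms.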

Wednesday, July 2, 2014

Primitive roots and their applications

In this post we prove the existence of primitive roots modulo powers of odd primes, using elementary number theory, and present numerous problems in which primitive roots can be applied. This post is suitable, for instance, for olympiad training.

Introduction


When doing modular arithmetic modulo $n$, whose prime factorization is $p_1^{\alpha_1}...p_k^{\alpha_k}$, it is sensible to consider each of the moduli $p_i^{\alpha_i}$ separately. Indeed, the Chinese remainder theorem guarantees that if we solve a congruence equation modulo each of these prime powers, we can recover the original solution $\pmod n$. This is not only computationally much more efficient, but also shows that any polynomial congruence problem in $\mathbb{Z}_n$ (the integers $\pmod n$) reduces to understanding the structure of $\mathbb{Z}_{p^{\alpha}}$ for prime powers (we will encounter this fact many times).
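To make the recombination step concrete (a sketch of my own, not from the original post), here is the Chinese remainder theorem as an algorithm: solutions modulo pairwise coprime moduli are merged one modulus at a time.

```python
def crt(residues, moduli):
    """Given x = r_i (mod m_i) for pairwise coprime m_i, return x mod (m_1 * ... * m_k)."""
    x, m = 0, 1
    for r, mi in zip(residues, moduli):
        # adjust x by a multiple of m so that the new congruence x = r (mod mi) holds;
        # pow(m, -1, mi) is the inverse of m modulo mi (Python 3.8+)
        t = ((r - x) * pow(m, -1, mi)) % mi
        x += m * t
        m *= mi
    return x % m

# x = 2 (mod 8), x = 1 (mod 9), x = 3 (mod 5) has the unique solution 298 (mod 360)
print(crt([2, 1, 3], [8, 9, 5]))  # 298
```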

The basic operations are easy to perform in $\mathbb{Z}_{p^{\alpha}}$. Addition, subtraction and multiplication are no harder than in the ordinary integers, and division is also quite easy in $\mathbb{Z}_{p^{\alpha}}$, since the congruence $ax\equiv b \pmod{p^{\alpha}}$ has a unique solution, denoted by $x\equiv \frac{b}{a} \pmod{p^{\alpha}}$, if and only if $a$ and $p$ are coprime. This solution can be computed efficiently with Euclid's algorithm. It is therefore natural to ask for an efficient way to take roots modulo a prime power, or to solve exponential congruences.
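In code (my sketch, not from the original post), the division reduces to a modular inverse obtained from the extended Euclidean algorithm; Python's built-in `pow(a, -1, m)` computes the same inverse, but the explicit version shows the mechanism.

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_div(b, a, m):
    """Solve a*x = b (mod m); requires gcd(a, m) = 1 for a unique solution."""
    g, inv, _ = extended_gcd(a % m, m)
    assert g == 1, "a must be coprime to the modulus"
    return b * inv % m

x = mod_div(3, 7, 27)   # solves 7*x = 3 (mod 27)
print(x, 7 * x % 27)    # 12 3
```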

The answer lies in exploiting the existence of a primitive root $g$ modulo $p^{\alpha}$, which is an integer whose powers $g^k$, $0\leq k<\varphi(p^{\alpha})$, are pairwise noncongruent modulo $p^{\alpha}$ ($\varphi(\cdot)$ is Euler's totient function; for $p=2$, primitive roots exist only modulo $2$ and $4$). By Euler's theorem, $g^{\varphi(q)}\equiv 1 \pmod q$ whenever $(g,q)=1$, so a primitive root is an integer whose powers repeat themselves as rarely as possible. Clearly every residue class coprime to $p$ is given by a power of $g$, so if $x\equiv g^{y}\pmod{p^{\alpha}},$ the equation $x^k\equiv a \pmod{p^{\alpha}}$ becomes $g^{ky}\equiv a \pmod{p^{\alpha}}$. In order to solve this new equation, we just need to determine which power of $g$ is congruent to $a$ (the answer is unique modulo $\varphi(p^{\alpha})$), and then perform a division, which is easy. Similarly, if we write $a\equiv g^m \pmod{p^{\alpha}}$, the equation $a^x \equiv b \pmod{p^{\alpha}}$ reduces to $g^{mx}\equiv b \pmod{p^{\alpha}}$, which can be solved in the same way as the former congruence. Hence, primitive roots work in much the same way as logarithms do in $\mathbb{R}$, but they are much more interesting, since they are by no means trivial despite their usefulness.
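The following sketch (my illustration, not from the original post) carries this out for a small modulus: it finds a primitive root by brute force, tabulates discrete logarithms, and uses the table to solve $x^k\equiv a\pmod{p^{\alpha}}$. All function names are ad hoc.

```python
from math import gcd

def phi(m):
    """Euler's totient function, by direct count."""
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

def primitive_root(m):
    """Brute-force search: g is a primitive root iff its powers hit all phi(m) units."""
    target = phi(m)
    for g in range(2, m):
        if gcd(g, m) == 1 and len({pow(g, e, m) for e in range(target)}) == target:
            return g

def solve_power_congruence(k, a, m):
    """Solve x^k = a (mod m), gcd(a, m) = 1, by reducing to k*y = log_g(a) (mod phi(m))."""
    g, t = primitive_root(m), phi(m)
    log = {pow(g, e, m): e for e in range(t)}  # discrete logarithm table g^e -> e
    n = log[a % m]                             # a = g^n
    for y in range(t):                         # solve k*y = n (mod t)
        if k * y % t == n:
            return pow(g, y, m)
    return None                                # no solution exists

print(solve_power_congruence(3, 10, 27))  # 4, since 4^3 = 64 = 10 (mod 27)
```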

To appreciate primitive roots, it is good to know that before calculators, people computed using logarithm tables. This is because taking logarithms reduces all the basic operations to easier ones, so one only needs a table of the values of the logarithm. However, in modular arithmetic, calculators have not replaced discrete logarithms, which are based on primitive roots, since the numbers one computes with can be enormous (for example, one may want to compute $2^n\pmod m$, where $n$ is some number greater than one million, say), while the results are of reasonable size, namely smaller than the modulus. The fact that computing discrete logarithms is hard is the basis of many cryptosystems that are used for secure data transfer.
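As a toy illustration of that last remark (my addition; not in the original post), here is the Diffie-Hellman key exchange built on exactly this primitive-root arithmetic. The parameters are far too small for real use, where the modulus is enormous and the exponents are secret random numbers.

```python
# toy Diffie-Hellman key exchange (illustrative parameters only)
p, g = 2579, 2                 # a prime and a primitive root modulo p

a_secret, b_secret = 765, 853  # private exponents, chosen at random in practice
A = pow(g, a_secret, p)        # Alice publishes g^a mod p
B = pow(g, b_secret, p)        # Bob publishes g^b mod p

# both parties arrive at the shared key g^(a*b) mod p without ever transmitting it;
# an eavesdropper seeing only A and B would have to compute a discrete logarithm
shared_alice = pow(B, a_secret, p)
shared_bob = pow(A, b_secret, p)
assert shared_alice == shared_bob
print(shared_alice)
```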