In this post, we say a bit about Fourier analysis in general. In classical Fourier analysis, we have a function $f:G\to \mathbb{C}$, where $G$ is usually the integers modulo $N$, that is $\mathbb{Z}_N$, or the real numbers $\pmod 1$, that is $\mathbb{R}/ \mathbb{Z}$, or the whole Euclidean space $\mathbb{R}^d$ ($G$ can also be the integers $\mathbb{Z}$, but that situation is just dual to $G=\mathbb{R}/\mathbb{Z}$). More generally, in harmonic analysis, $G$ could be any locally compact abelian group, and then the analogue of an exponential function is a continuous homomorphism from $G$ to the unit circle in $\mathbb{C}$. We want to represent the function $f:G\to \mathbb{C}$ as a sum or integral (depending on the domain) of trigonometric functions. It turns out that a general function, under mild regularity assumptions, has such a representation, so many ''linear'' problems concerning arbitrary functions can be reduced to the corresponding problems for trigonometric functions (a ''linear'' problem could for instance be a linear PDE, such as the heat equation, which was Joseph Fourier's original motivation, or a problem about arithmetic progressions). We use the trigonometric exponentials $e^{2\pi in x}$ instead of sines and cosines, since otherwise we would have to treat the two separately; besides, cosines and sines can be recovered from trigonometric exponentials by taking real and imaginary parts. The representations we are looking for are

$\begin{eqnarray}f(n)&=&\sum_{k\in \mathbb{Z}_N}\hat{f}(k)e^{\frac{2\pi i k n}{N}},\quad\quad\,\,\,\,\, \,f:\mathbb{Z}_N\to \mathbb{C}\\f(x)&=&\sum_{n=-\infty}^{\infty}\hat{f}(n)e^{2\pi i n x},\quad \quad f:\mathbb{R}/\mathbb{Z}\to \mathbb{C}\\f(x)&=&\int_{\mathbb{R}^d}\hat{f}(\xi)e^{2\pi i x\cdot\xi}d\xi,\quad \quad f:\mathbb{R}^d\to \mathbb{C},\end{eqnarray}$

where $x\cdot \xi$ is the inner product of two vectors from $\mathbb{R}^d$.
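The first representation formula can be checked numerically. The following sketch (my own illustration, not from the post) verifies it on $\mathbb{Z}_N$ for a random $f$; the formula for $\hat{f}$ appears further below, so here we simply take $\hat{f}$ from NumPy's FFT, whose convention matches ours after dividing by $N$.

```python
import numpy as np

# Check f(n) = sum_k fhat(k) e^{2 pi i k n / N} on Z_N for a random f.
# numpy.fft.fft uses the convention sum_k f(k) e^{-2 pi i k n / N}, so
# dividing by N gives the Fourier coefficients in the post's normalization.
N = 8
rng = np.random.default_rng(0)
f = rng.standard_normal(N)

fhat = np.fft.fft(f) / N  # Fourier coefficients fhat(k)
k = np.arange(N)
# Evaluate the representation sum at each n in Z_N
f_rec = np.array([np.sum(fhat * np.exp(2j * np.pi * k * n / N)) for n in range(N)])

assert np.allclose(f_rec, f)
```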

The functions $\hat{f}$ are coefficient functions, but not arbitrary ones: $\hat{f}$ is called the Fourier transform of $f$ in the continuous case, and its values are called the Fourier coefficients of $f$ in the discrete cases. From the orthogonality relation

$\begin{eqnarray}\int_{0}^{1}e^{2\pi i (m-n)x}dx=\begin{cases} 0, \quad m \neq n\\ 1,\quad m=n,\end{cases}\end{eqnarray}$

(the name comes from the fact that this relation says that the pairwise inner products of distinct functions $e^{2\pi in x}$ are $0$), or from its analogue for sums, it follows, by multiplying both sides of the three equations above by an exponential, integrating (or summing) both sides, and formally changing the order of integration (or summation), that

$\begin{eqnarray}\hat{f}(n)&=&\frac{1}{N}\sum_{k\in \mathbb{Z}_N}f(k)e^{-\frac{2\pi i k n}{N}},\quad \quad n \in \mathbb{Z}_N\\\hat{f}(n)&=&\int_{0}^1 f(x)e^{-2\pi i n x}dx,\quad\,\,\,\,\,\, n \in \mathbb{Z}\\ \hat{f}(\xi)&=&\int_{\mathbb{R}^d}f(x)e^{-2\pi i x \cdot\xi}dx,\quad \,\,\,\xi \in \mathbb{R}^d\end{eqnarray}$

(in the first case, there is no problem in changing the order of summation). These formulas are taken as the definition of the Fourier coefficients of $f$ (or of the Fourier transform of $f$ in the third case). Notice that the three formulas closely resemble the three representation formulas given above, so the Fourier transform is essentially its own dual operation; this is of course an important fact. Next we say a few words about the properties of these transforms.
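As a concrete illustration of the middle formula (my own sketch, not from the post), we can compute Fourier coefficients of a $1$-periodic function by numerical integration over $[0,1]$. For $f(x)=\cos(2\pi x)=\frac{1}{2}(e^{2\pi i x}+e^{-2\pi i x})$ we expect $\hat{f}(1)=\hat{f}(-1)=\frac{1}{2}$ and all other coefficients $0$.

```python
import numpy as np

def fourier_coeff(f, n, K=2000):
    """Approximate fhat(n) = integral over [0,1] of f(x) e^{-2 pi i n x} dx
    by a left-endpoint Riemann sum with K points (exact, up to floating
    error, when f is a trigonometric polynomial of degree < K - |n|)."""
    x = np.arange(K) / K
    return np.mean(f(x) * np.exp(-2j * np.pi * n * x))

f = lambda x: np.cos(2 * np.pi * x)
assert abs(fourier_coeff(f, 1) - 0.5) < 1e-12
assert abs(fourier_coeff(f, -1) - 0.5) < 1e-12
assert abs(fourier_coeff(f, 3)) < 1e-12  # all other coefficients vanish
```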

In Fourier analysis, one wants to understand the properties of these transforms and their relations to the properties of the original function, and in particular determine general conditions under which the representation of the function by its Fourier transform or coefficients is valid. Moreover, many operations and properties of functions become nicer on the ''Fourier side''; below is a table for the case of $\mathbb{R}^d$. The table shows that the Fourier transform exchanges various operations for other, often simpler, operations, which is what makes it so useful in applications.

| Property or operation for $f$ | Corresponding property or operation for $\hat{f}$ |
|---|---|
| integrability | convergence to zero |
| square integrability | square integrability |
| polynomial decay | differentiability |
| faster than polynomial decay | infinite differentiability |
| faster than exponential decay | analyticity |
| differentiation | multiplication by a polynomial |
| convolution with a function $g$ | multiplication by $\hat{g}$ |
| scaling by a factor $a$ | scaling by a factor $\frac{1}{a}$ |
| translation | multiplication by an exponential |
| evenness | realness |
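The ''differentiation $\leftrightarrow$ multiplication by a polynomial'' row can be seen concretely in the periodic setting: for a $1$-periodic trigonometric polynomial, the Fourier coefficients of $f'$ are $2\pi i n$ times those of $f$. The following sketch (my own illustration, using the series conventions above) checks this numerically.

```python
import numpy as np

# Recover f' from f by multiplying the Fourier coefficients by 2*pi*i*n.
N = 32
x = np.arange(N) / N
f = np.sin(2 * np.pi * x) + 0.5 * np.cos(4 * np.pi * x)
df_exact = 2 * np.pi * np.cos(2 * np.pi * x) - 2 * np.pi * np.sin(4 * np.pi * x)

fhat = np.fft.fft(f) / N        # Fourier coefficients fhat(n)
n = np.fft.fftfreq(N, d=1 / N)  # the integer frequencies n, in FFT order
# Invert the series with coefficients 2*pi*i*n*fhat(n)
df = np.real(np.fft.ifft(2j * np.pi * n * fhat) * N)

assert np.allclose(df, df_exact)
```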

There are also many other properties, and we return to these later. Many of these implications also work in the other direction, so for example differentiability of $f$ corresponds to decay of $\hat{f}$. The table above already shows the usefulness of the transform in problems related to PDEs (differential equations become polynomial equations) and to sums of independent random variables, among others (the distribution of a sum of independent random variables is the convolution of the individual distributions). In addition, the scaling property can be seen as a very weak version of the uncertainty principles, such as the Heisenberg uncertainty principle, which state that $f$ and $\hat{f}$ cannot both be very ''localized''. One more important remark about the table is that for $L^2$ functions the Fourier transform (and Fourier series) behaves ''perfectly'': there are no convergence issues, as long as we measure convergence in the $L^2$ norm (we return to this topic later).
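The convolution row of the table also has an exact finite analogue, which is easy to verify (my own sketch; the table is stated for $\mathbb{R}^d$, but the same identity holds for cyclic convolution on $\mathbb{Z}_N$).

```python
import numpy as np

# On Z_N: the (unnormalized) DFT of the cyclic convolution
# (f*g)(n) = sum_k f(k) g(n - k mod N) is the pointwise product
# of the DFTs of f and g.
N = 16
rng = np.random.default_rng(2)
f = rng.standard_normal(N)
g = rng.standard_normal(N)

k = np.arange(N)
conv = np.array([np.sum(f * g[(n - k) % N]) for n in range(N)])

assert np.allclose(np.fft.fft(conv), np.fft.fft(f) * np.fft.fft(g))
```

With the post's normalization $\hat{f} = \mathrm{fft}(f)/N$, the identity reads $\widehat{f*g} = N\,\hat{f}\,\hat{g}$; the extra factor of $N$ is purely a matter of which convention carries the $1/N$.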