Tuesday, December 30, 2014

Uncertainty principles in Fourier analysis

Uncertainty principles in Fourier analysis are formalizations of the following principle: A function $f$ and its Fourier transform $\hat{f}$ cannot both be sharply "localized". This of course allows many interpretations, and by interpreting localization in various ways, we get theorems (usually inequalities) about the simultaneous behavior of $f$ and $\hat{f}$. We will formalize and prove the following uncertainty principles.
  • $|f|^2$ and $|\hat{f}|^2$ cannot both have small variance (Heisenberg uncertainty principle)
  • Two self-adjoint operators cannot both have small norms compared to the norm of their commutator (Heisenberg uncertainty principle for operators)
  • $f$ and $\hat{f}$ cannot both decay faster than Gaussian functions (Hardy's uncertainty principle)
  • $f$ and $\hat{f}$ cannot both be supported on sets of finite measure (Benedicks' inequality)
  • The entropies of the densities $|f|^2$ and $|\hat{f}|^2$ cannot both be negative (Hirschman's inequality)
The first hint that such principles might hold is that the Fourier transform "spreads out" a function the more it is concentrated: for $t>0$ we have $\mathcal{F}(f(tx))(\xi)=\frac{1}{t}\hat{f}\left(\frac{\xi}{t}\right)$. We now proceed to formalize and prove the principles.
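As a quick illustration of this scaling heuristic (using the normalization $\hat{f}(\xi)=\int_{\mathbb{R}}f(x)e^{-2\pi i x\xi}\,dx$; the convention adopted below may differ by constants), take the Gaussian $f(x)=e^{-\pi x^2}$, which satisfies $\hat{f}=f$. For $t>0$ the scaling relation gives
$$\mathcal{F}\left(e^{-\pi t^2 x^2}\right)(\xi)=\frac{1}{t}e^{-\pi \xi^2/t^2},$$
so as $t\to\infty$ the function $e^{-\pi t^2 x^2}$ concentrates near the origin while its transform spreads out and flattens. The two "widths" are of orders $t^{-1}$ and $t$, so their product stays bounded from below, which is exactly the kind of tradeoff the principles above quantify.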

Monday, December 29, 2014

The large sieve and the Bombieri-Vinogradov inequality

In this post, we will derive the large sieve, which is one of the most classical sieves. The large sieve is quite different from combinatorial sieves such as the sieve of Eratosthenes, the Brun sieve and the Selberg sieve (discussed in these posts: 1, 2, 3, 4), in the sense that instead of combinatorial arguments, the large sieve stems from analytic inequalities that are not far removed from harmonic analysis. Another important difference is that, as the name indicates, the large sieve is most useful when we are sifting a large proportion of the residue classes modulo primes, unlike the combinatorial sieves, in which usually only a bounded number of residue classes is sifted. The large sieve can, and will, be formulated as asserting square root cancellation in quadratic means of exponential sums whose phases run over the rationals with denominator at most $z$, but we will also present an arithmetic version concerning the number of residue classes a set can occupy modulo each prime, given the size of the set (which is what combinatorial sieves are also about). One more formulation involves bilinear sums of Dirichlet characters, and we use it, together with Vaughan's identity (see this post) and the Pólya-Vinogradov inequality (see this post), to prove the Bombieri-Vinogradov theorem. Finally, we apply the Bombieri-Vinogradov theorem to the solution of the Titchmarsh divisor problem, which asks for the average number of divisors of $p+a$ for a given $a$, where $p$ runs through the primes up to $x$. We follow the book An Introduction to Sieve Methods and Their Applications by Cojocaru and Murty.
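To fix ideas, here is the analytic shape of the two main results in one common normalization (the precise constants and ranges proved in the post may differ slightly). The large sieve inequality states that for arbitrary complex numbers $a_n$, with $e(x):=e^{2\pi i x}$,
$$\sum_{q\le Q}\ \sum_{\substack{1\le a\le q\\(a,q)=1}}\left|\sum_{M<n\le M+N}a_n e\!\left(\frac{an}{q}\right)\right|^2\le (N+Q^2)\sum_{M<n\le M+N}|a_n|^2,$$
and the Bombieri-Vinogradov theorem states that for every $A>0$ there exists $B=B(A)>0$ such that, writing $\psi(x;q,a)=\sum_{n\le x,\, n\equiv a\,(q)}\Lambda(n)$,
$$\sum_{q\le x^{1/2}(\log x)^{-B}}\max_{(a,q)=1}\left|\psi(x;q,a)-\frac{x}{\varphi(q)}\right|\ll\frac{x}{(\log x)^{A}}.$$
In other words, on average over moduli $q$ up to nearly $x^{1/2}$, the primes are as well distributed in arithmetic progressions as the generalized Riemann hypothesis would predict for each individual modulus.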

Sunday, December 21, 2014

Ingham's Theorem on primes in short intervals

In this post, we prove a theorem of Ingham, which states that there is always a prime in the short interval $[x,x+x^{\frac{5}{8}+\varepsilon}]$ when $x\geq M_{\varepsilon}$. More precisely, the theorem even gives an asymptotic formula for the number of such primes. It may seem a priori rather surprising that we are able to establish the correct asymptotic for $\pi(x+x^{\theta})-\pi(x)$ for some numbers $\theta<1$ without being able to improve the error term in the prime number theorem to $x^{1-\varepsilon}$ for any $\varepsilon>0$. A crucial ingredient in the proof is relating the number of primes in short intervals to bounds for the number of zeros of the Riemann $\zeta$ function with real part at least $\sigma$ and imaginary part bounded by $T$. Even though we cannot rule out zeros with real part greater than $1-\varepsilon$ for any fixed $\varepsilon>0$, we still manage to prove that the number of zeros with real part at least $\sigma$ and imaginary part at most $T$ is $O\!\left(T^{A(1-\sigma)}\log^B T\right)$ for some constants $A$ and $B$, so that zeros far from the critical line must be quite rare. The other crucial ingredient in the proof is showing that bounds for $\zeta(\frac{1}{2}+it)$ lead to bounds for the number of zeros away from the critical line, via the argument principle among other things. In particular, we will see that the truth of the Lindelöf hypothesis $\zeta(\frac{1}{2}+it)\ll t^{\varepsilon}$ for all $\varepsilon>0$ would imply that the intervals $[x,x+x^{\frac{1}{2}+\varepsilon}]$ contain primes for all $x\geq M_{\varepsilon}$. Of course, the truth of the Riemann hypothesis would imply that the shorter interval $[x,x+C\sqrt{x}\log^2 x]$ contains a prime for all large enough $x$, and Cramér's conjecture asserts that even intervals as short as $[x,x+K\log^2 x]$ contain primes for all $x$ when $K$ is large enough. The best unconditional result is due to Baker, Harman and Pintz, and says that the interval $[x,x+x^{0.525}]$ contains a prime for all large enough $x$.
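To make the shape of the result concrete, Ingham's theorem gives, for any fixed $\theta>\frac{5}{8}$,
$$\pi(x+x^{\theta})-\pi(x)\sim\frac{x^{\theta}}{\log x}\qquad (x\to\infty).$$
More generally (in the form usually quoted; the precise version is derived in the post), a bound $\zeta(\frac{1}{2}+it)\ll t^{c+\varepsilon}$ yields this asymptotic for every $\theta>\frac{1+4c}{2+4c}$: the Weyl-Hardy-Littlewood bound $c=\frac{1}{6}$ gives the exponent $\frac{5}{8}$, while the Lindelöf hypothesis (with $c$ arbitrarily small) gives every exponent $\theta>\frac{1}{2}$, matching the statements above.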