Showing posts with label zeta function. Show all posts
Sunday, December 21, 2014
Ingham's Theorem on primes in short intervals
In this post, we prove a theorem of Ingham, which states that there is always a prime in the short interval $[x,x+x^{\frac{5}{8}+\varepsilon}]$ when $x\geq M_{\varepsilon}$. More precisely, the theorem even gives an asymptotic formula for the number of such primes. It may seem a priori rather surprising that we can establish the correct asymptotic for $\pi(x+x^{\theta})-\pi(x)$ for some $\theta<1$ without being able to improve the error term in the prime number theorem to $x^{1-\varepsilon}$ for any $\varepsilon>0$. A crucial ingredient in the proof is relating the number of primes in short intervals to bounds for the number of zeros of the Riemann $\zeta$ function with real part at least $\sigma$ and imaginary part bounded by $T$. Even though we cannot rule out zeros with $\sigma<1-\varepsilon$ for any $\varepsilon>0$, we can still show that the number of zeros with real part at least $\sigma$ is bounded by $T^{A(1-\sigma)}\log^B T$ for some constants $A$ and $B$, so that zeros far from the critical line must be quite rare. The other crucial ingredient in the proof is showing that bounds for $\zeta(\frac{1}{2}+it)$ lead, via the argument principle among other things, to bounds for the number of zeros off the critical line. In particular, we will see that the truth of the Lindelöf hypothesis $\zeta(\frac{1}{2}+it)\ll t^{\varepsilon}$ for all $\varepsilon>0$ would imply that the interval $[x,x+x^{\frac{1}{2}+\varepsilon}]$ contains a prime for all $x\geq M_{\varepsilon}$. Of course, the truth of the Riemann hypothesis would imply that the shorter interval $[x,x+C\sqrt{x}\log^2 x]$ contains a prime for all large enough $x$, and Cramér's conjecture asserts that even intervals as short as $[x,x+K\log^2 x]$ contain primes for all large $x$ when $K$ is large enough. The best unconditional result is due to Baker, Harman and Pintz, and says that the interval $[x,x+x^{0.525}]$ contains a prime for all large enough $x$.
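As a quick numerical sanity check (no part of the proof, and with helper names of our own choosing), one can sieve a short interval $[x,x+x^{5/8}]$ and compare the prime count with the main term $x^{\theta}/\log x$ of Ingham's asymptotic:

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes; returns a bytearray with sieve[k] = 1 iff k is prime."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = bytearray(len(sieve[p*p::p]))
    return sieve

def primes_in_short_interval(x, theta=0.625):
    """Count primes in [x, x + x^theta] and compare with the main term
    x^theta / log x predicted by Ingham's theorem (theta = 5/8 here)."""
    h = int(x**theta)
    sieve = primes_up_to(x + h)
    return sum(sieve[x:x + h + 1]), h / math.log(x)

for x in (10**5, 10**6):
    count, main_term = primes_in_short_interval(x)
    print(x, count, round(main_term, 1))
```

For such small $x$ the ratio of the two numbers is already close to $1$, although the theorem of course only speaks about $x\geq M_{\varepsilon}$.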
Sunday, November 30, 2014
Van der Corput's inequality and related bounds
In this post, we prove several bounds for rather general exponential sums, depending on the growth of the derivatives of their phase functions. We already proved Vinogradov's bound for such sums in an earlier post, under the assumption that some derivative of order $k\geq 4$ is suitably small. Here we prove similar but better bounds when the first, second or third derivative of the phase function is suitably small. We follow Titchmarsh's book The Theory of the Riemann Zeta-Function and the book Van der Corput's Method of Exponential Sums by S. W. Graham and G. Kolesnik. The bounds depending on the derivatives enable us to estimate long zeta sums and hence complete the proof of the error term in the prime number theorem from the previous post. As a byproduct, we also obtain the Hardy-Littlewood bound
\[\begin{eqnarray}\zeta\left(\frac{1}{2}+it\right)\ll t^{\frac{1}{6}}\log t,\end{eqnarray}\]
for the zeta function on the critical line, which will be utilized in the next post in proving Ingham's theorem on primes in short intervals. Two more consequences of the Weyl differencing method, which will be applied to bounding exponential sums based on their derivatives, are the van der Corput inequality and a related equidistribution test. This test easily gives the uniform distribution of the fractional parts of polynomials, as we shall see.
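The equidistribution test mentioned above rests on Weyl's criterion: the fractional parts $\{\alpha n^2\}$ are uniformly distributed modulo $1$ if and only if the normalized exponential sums below tend to $0$ for every fixed $h\neq 0$. A small numerical sketch (function name ours) illustrates the cancellation for $\alpha=\sqrt{2}$:

```python
import cmath, math

def weyl_average(alpha, N, h=1):
    """Normalized Weyl sum (1/N) * sum_{n<=N} e(h*alpha*n^2), with e(x) = e^{2 pi i x}.
    Weyl's criterion: {alpha n^2} equidistributes mod 1 iff this tends to 0
    for every fixed integer h != 0."""
    s = sum(cmath.exp(2j * math.pi * h * alpha * n * n) for n in range(1, N + 1))
    return abs(s) / N

alpha = math.sqrt(2)  # irrational, so {alpha n^2} should equidistribute
for N in (100, 1000, 10000):
    print(N, weyl_average(alpha, N))
```

The averages shrink roughly like $N^{-1/2}$ here, while for a rational $\alpha$ with $h\alpha n^2$ always an integer the average would stay at $1$.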
Tuesday, November 25, 2014
Error term in the prime number theorem
We prove here an improved prime number theorem of the form
\[\begin{eqnarray}\pi(x)=Li(x)+O(x\exp(-c\log^{\frac{4}{7}}x)),\quad Li(x):=\int_{2}^{x}\frac{dt}{\log t},\end{eqnarray}\]
for all $c>0$, a result which is essentially due to Chudakov. The proof is based on bounding the growth of the Riemann zeta function in the critical strip near $\Re(s)=1$, and achieving this in turn builds on bounds for the zeta sums
\[\begin{eqnarray}\sum_{M\leq n\leq N}n^{-it}.\end{eqnarray}\]
To bound these sums, we use a bound for exponential sums with slowly growing phase from an earlier post (Theorem 5 (i)). The proof of that bound was based on Vinogradov's mean value theorem. When $M$ is large enough, however, that theorem is no longer helpful, and we must follow a different approach based on Weyl differencing and van der Corput's inequality, which is quite useful in itself. That approach will be postponed to the next post, and here we concentrate on short zeta sums and on deducing the stronger prime number theorem. We follow Chandrasekharan's book Arithmetical Functions.
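The cancellation that the exponential-sum machinery quantifies is easy to observe numerically: writing $n^{-it}=e^{-it\log n}$, the modulus of a zeta sum is far smaller than the trivial bound given by the number of terms. A small sketch (function name ours):

```python
import cmath, math

def zeta_sum(M, N, t):
    """The zeta sum sum_{M <= n <= N} n^{-it}, using n^{-it} = e^{-i t log n}."""
    return sum(cmath.exp(-1j * t * math.log(n)) for n in range(M, N + 1))

# The trivial bound is N - M + 1 (each term has modulus 1); the point of the
# van der Corput / Vinogradov estimates is the massive cancellation below it.
t = 1000.0
for M, N in ((10, 100), (100, 1000), (1000, 10000)):
    print(M, N, abs(zeta_sum(M, N, t)), N - M + 1)
```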
We remark that without relying on exponential sums, nothing significantly better than $\pi(x)=Li(x)+O(x\exp(-c\log^{\frac{1}{2}}x))$ has been proved; this was established by de la Vallée Poussin already in 1899. On the other hand, the best known form of the prime number theorem is due to Vinogradov and Korobov, and it states that
\[\begin{eqnarray}\pi(x)=Li(x)+O(x\exp(-c_0\log^{\frac{3}{5}}x(\log \log x)^{-\frac{1}{5}}))\end{eqnarray}\] for some $c_0>0$; this was proved nearly 60 years ago in 1958.
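The closeness of $\pi(x)$ and $Li(x)$ asserted by these error terms can be seen numerically even for modest $x$; the following sketch (helper names ours) computes $\pi(x)$ with a sieve and $Li(x)$ with the midpoint rule:

```python
import math

def prime_count(n):
    """pi(n) via the sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = bytearray(len(sieve[p*p::p]))
    return sum(sieve)

def li(x, steps=10**5):
    """Li(x) = int_2^x dt / log t, approximated with the composite midpoint rule."""
    h = (x - 2) / steps
    return h * sum(1 / math.log(2 + (k + 0.5) * h) for k in range(steps))

for x in (10**4, 10**5, 10**6):
    print(x, prime_count(x), round(li(x), 1))
```

Already at $x=10^6$ the two quantities agree to within about $130$, far better than the trivial comparison $x/\log x$ would suggest.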
Monday, September 15, 2014
Elementary estimates for prime sums
In this post, we prove several results about prime sums that are not much weaker than what follows from the prime number theorem, but whose proofs are quite simple and use elementary methods (even the prime number theorem itself has an elementary proof, due to Selberg and Erdős, but that proof is not simple). One of the results we show is Chebyshev's assertion that $\frac{\pi(x)\log x}{x}$ is bounded from below and above by positive constants (that are not very far from each other), and as a corollary we get Bertrand's postulate, which states that the interval $[n,2n]$ always contains a prime number. We also prove Mertens' three theorems about sums and products related to primes. These results are nearly as good as what the prime number theorem gives. We will need these results in later posts about sieve theory.
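Both Chebyshev's bounds and Mertens' second theorem are easy to check numerically; the sketch below (helper names ours) computes the Chebyshev ratio $\pi(x)\log x/x$ and the quantity $\sum_{p\leq x}1/p-\log\log x$, which Mertens' theorem says converges to the Meissel-Mertens constant $M=0.2614\ldots$:

```python
import math

def primes_up_to(n):
    """List of primes up to n via the sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = bytearray(len(sieve[p*p::p]))
    return [p for p in range(2, n + 1) if sieve[p]]

x = 10**6
primes = primes_up_to(x)

# Chebyshev: pi(x) log x / x lies between positive constants (and -> 1 by the PNT).
print(len(primes) * math.log(x) / x)

# Mertens' second theorem: sum_{p <= x} 1/p = log log x + M + o(1).
print(sum(1.0 / p for p in primes) - math.log(math.log(x)))
```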
Friday, July 4, 2014
Fourier series and sum identities
We derive the basic properties of Fourier series in this post and apply them to prove some elegant sum and product identities, namely a formula for the value of the zeta function at even integers and the product formula for the sine function. As mentioned in the introductory post, we want to represent periodic functions (we may assume that the period is $1$ by scaling) that are integrable over the period in the form
\[\begin{eqnarray}f(x)\stackrel{?}{=}\sum_{n=-\infty}^{\infty}\hat{f}(n)e^{2\pi i n x}, \quad x \in \mathbb{R},\end{eqnarray}\]
where the Fourier coefficients for $n\in \mathbb{Z}$ are given by
\[\begin{eqnarray}\hat{f}(n)=\int_{0}^1 f(x)e^{-2\pi i n x }dx, \quad n \in \mathbb{Z},\quad f\in L^1([0,1])\end{eqnarray}\]
(by $f\in L^1([0,1]),$ we mean that the restriction of $f$ to $[0,1]$ is Lebesgue integrable).
First of all, if $f$ is a trigonometric polynomial, say $f(x)=\sum_{n=-N}^Na_ne^{2\pi i n x}$, then
\[\begin{eqnarray}\int_{0}^1 f(x)e^{-2\pi i m x}dx=\sum_{n=-N}^N\int_{0}^1 a_n e^{2\pi i n x}e^{-2\pi i m x}dx=a_m,\end{eqnarray}\]
and more generally whenever integration and summation can be interchanged, the coefficients $\hat{f}(n)$ offer the only possible way to represent $f$ (pointwise) as a trigonometric sum.
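This recovery of the coefficients rests on the orthogonality of the exponentials $e^{2\pi inx}$ on $[0,1]$, which can be checked numerically; for a trigonometric polynomial an equispaced Riemann sum with more points than the degree span even recovers the coefficients exactly (up to rounding). A small sketch with a made-up example polynomial:

```python
import cmath

def fourier_coefficient(f, n, steps=4096):
    """Approximate hat f(n) = int_0^1 f(x) e^{-2 pi i n x} dx by an
    equispaced Riemann sum; exact for trig polynomials of degree < steps/2."""
    return sum(f(k / steps) * cmath.exp(-2j * cmath.pi * n * k / steps)
               for k in range(steps)) / steps

# A trigonometric polynomial with known coefficients a_{-1} = 0.5, a_0 = 1, a_2 = -2i.
coeffs = {-1: 0.5, 0: 1.0, 2: -2j}
f = lambda x: sum(a * cmath.exp(2j * cmath.pi * n * x) for n, a in coeffs.items())

for n in range(-3, 4):
    print(n, fourier_coefficient(f, n))
```

The computed $\hat{f}(n)$ match the prescribed $a_n$ and vanish for all other $n$, in line with the uniqueness statement above.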
Some natural questions about Fourier series arise:
(1) When is a Fourier series convergent, and in what sense:
(1a) pointwise,
(1b) absolutely,
(1c) uniformly,
(1d) on average (in Cesàro sense),
(1e) in $L^p$ norms,
(1f) almost everywhere?
(2) How does smoothness or integrability of $f$ affect the size of the Fourier coefficients $\hat{f}(n)$?
(3) Do the Fourier coefficients $\hat{f}(n)$ characterize $f$ almost everywhere?
(4) Given a convergent trigonometric series, is it the Fourier series of some function?
(5) Does the Fourier series always converge to $f$ if it converges in the first place?
These have been important questions in Fourier analysis for over two centuries. Nowadays, quite general answers to them are known; especially between the 1850s and the 1960s, a huge amount of progress was made. We will discuss these questions in the following posts.
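As a foretaste of the promised formula for $\zeta$ at even integers, Parseval's identity applied to $f(x)=x$ on $[0,1]$ (where $\hat{f}(0)=\frac{1}{2}$ and $|\hat{f}(n)|^2=\frac{1}{4\pi^2n^2}$ for $n\neq 0$) gives $\frac{1}{3}=\frac{1}{4}+\frac{\zeta(2)}{2\pi^2}$, i.e. $\zeta(2)=\frac{\pi^2}{6}$. A numerical check of both identities:

```python
import math

# Parseval for f(x) = x on [0,1]: int_0^1 f(x)^2 dx = sum_n |hat f(n)|^2, i.e.
#   1/3 = 1/4 + 2 * sum_{n>=1} 1/(4 pi^2 n^2),
# which rearranges to zeta(2) = pi^2 / 6.
N = 10**6
zeta2 = sum(1.0 / (n * n) for n in range(1, N + 1))  # truncated zeta(2)
print(zeta2, math.pi**2 / 6)
print(1 / 3, 1 / 4 + zeta2 / (2 * math.pi**2))
```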