We return to the study of the Riemann zeta function $\zeta$, focusing now on the task of upper bounding the size of this function within the critical strip; as seen in Exercise 43 of Notes 2, such upper bounds can lead to zero-free regions for $\zeta$, which in turn lead to improved estimates for the error term in the prime number theorem.

in this region. In particular, if and then we had . Using the functional equation and the Hadamard three lines lemma, we can improve this to ; see Supplement 3.

Now we seek better upper bounds on $\zeta$. We will reduce the problem to that of bounding certain exponential sums, in the spirit of Exercise 33 of Supplement 3:

Proposition 1 Let with and . Then

where .
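As a quick numerical illustration of the cancellation being exploited here (a sketch only, assuming the standard model in which the relevant dyadic sums take the shape $\sum_{N < n \leq 2N} n^{-it}$, with all parameter values below chosen ad hoc):

```python
import cmath
import math

# Hypothetical illustration: the dyadic sums relevant to Proposition 1 are
# modelled here by sum_{N < n <= 2N} n^{-it} = sum e(-(t/(2*pi)) * log n).
# The trivial bound (triangle inequality) is N; oscillation of the phase
# makes the actual size far smaller.

def dyadic_sum(N, t):
    """|sum_{N < n <= 2N} n^{-it}| for real t."""
    return abs(sum(cmath.exp(-1j * t * math.log(n)) for n in range(N + 1, 2 * N + 1)))

N, t = 10**4, 10**5
S = dyadic_sum(N, t)
print(S, "vs trivial bound", N)
assert S < N / 5  # substantial cancellation at these parameters
```

At these (hypothetical) parameter values the sum comes out far closer to $\sqrt{N}$ than to the trivial bound $N$.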

Proof: We fix a smooth function with for and for , and allow implied constants to depend on . Let with . From Exercise 33 of Supplement 3, we have

for some sufficiently large absolute constant . By dyadic decomposition, we thus have

We can absorb the first term in the second using the case of the supremum. Writing , where

it thus suffices to show that

for each . But from the fundamental theorem of calculus, the left-hand side can be written as

and the claim then follows from the triangle inequality and a routine calculation.

We are thus interested in getting good bounds on the sum . More generally, we consider normalised exponential sums of the form

where is an interval of length at most for some , and is a smooth function. We will assume smoothness estimates of the form

for some , all , and all , where is the -fold derivative of ; in the case , of interest for the Riemann zeta function, we easily verify that these estimates hold with . (One can consider exponential sums under more general hypotheses than (3), but the hypotheses here are adequate for our needs.) We do not bound the zeroth derivative of directly, but it would not be natural to do so in any event, since the magnitude of the sum (2) is unaffected if one adds an arbitrary constant to .
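To make the verification concrete, here is a sketch for the zeta case, under the assumption (suggested by the identity $n^{-it} = e(-\frac{t}{2\pi} \log n)$) that the phase on a dyadic block $(N, 2N]$ is $F(x) = -\frac{t}{2\pi} \log x$; the derivatives then have a closed form, giving bounds of the shape (3) with $T$ comparable to $t$:

```python
import math

# Hypothetical sketch: for the zeta application the phase on a dyadic block
# (N, 2N] is F(x) = -(t/(2*pi)) * log x, since n^{-it} = e(-(t/(2*pi)) log n).
# Its derivatives are F^{(j)}(x) = (-1)^j * (t/(2*pi)) * (j-1)! / x**j for
# j >= 1, so on (N, 2N] one has |F^{(j)}(x)| <= (j-1)! * (t/(2*pi)) / N**j,
# i.e. bounds of the shape (3) with T comparable to t.

def F_deriv(j, x, t):
    """j-th derivative of F(x) = -(t/(2*pi)) * log(x), for j >= 1."""
    return (-1) ** j * (t / (2 * math.pi)) * math.factorial(j - 1) / x ** j

t, N = 1e6, 1e3
for j in range(1, 6):
    worst = max(abs(F_deriv(j, x, t)) for x in [N, 1.5 * N, 2 * N])
    bound = math.factorial(j - 1) * (t / (2 * math.pi)) / N ** j
    assert worst <= bound * (1 + 1e-12)
print("derivative bounds verified for j = 1..5")
```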

The triangle inequality gives the trivial bound of $1$ for the normalised sum (2), and we will seek to obtain significant improvements to this bound. Pseudorandomness heuristics predict a bound of for (2) for any if ; this assertion (a special case of the exponent pair hypothesis) would have many consequences (for instance, inserting it into Proposition 1 soon yields the Lindelöf hypothesis), but is unfortunately quite far from resolution with known methods. However, we can obtain weaker gains of the form when and depends on . We present two such results here, which perform well for small and large values of respectively:

Theorem 2 Let , let be an interval of length at most , and let be a smooth function obeying (3) for all and .

(i) (van der Corput estimate) For any natural number , one has

(ii) (Vinogradov estimate) If is a natural number and , then

for some absolute constant .

The factor of can be removed by a more careful argument, but we will not need to do so here as we are willing to lose powers of . The estimate (6) is superior to (5) when for large, since (after optimising in ) (5) gives a gain of the form over the trivial bound, while (6) gives . We have not attempted to obtain completely optimal estimates here, settling for a relatively simple presentation that still gives good bounds on , and there are a wide variety of additional exponential sum estimates beyond the ones given here; see Chapter 8 of Iwaniec-Kowalski, or Chapters 3-4 of Montgomery, for further discussion.

We now briefly discuss the strategies of proof of Theorem 2. Both parts of the theorem proceed by treating like a polynomial of degree roughly ; in the case of (ii), this is done explicitly via Taylor expansion, whereas for (i) it is only at the level of analogy. Both parts of the theorem then try to “linearise” the phase to make it a linear function of the summands (actually in part (ii), it is necessary to introduce an additional variable and make the phase a bilinear function of the summands). The van der Corput estimate achieves this linearisation by squaring the exponential sum about times, which is why the gain is only exponentially small in . The Vinogradov estimate achieves linearisation by raising the exponential sum to a significantly smaller power – on the order of – by using Hölder’s inequality in combination with the fact that the discrete curve becomes roughly equidistributed in the box after taking the sumset of about copies of this curve. This latter fact has a precise formulation, known as the Vinogradov mean value theorem, and its proof is the most difficult part of the argument, relying on a “$p$-adic” version of this equidistribution to reduce the claim at a given scale to a smaller scale with , and then proceeding by induction.
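The “squaring” step behind the van der Corput estimate can be made concrete in a toy setting. The sketch below (with a hypothetical quadratic phase) checks one application of the Cauchy-Schwarz/Weyl differencing inequality $|\sum_{n=0}^{N-1} e(F(n))|^2 \leq N + 2\sum_{h=1}^{N-1} |\sum_{n=0}^{N-1-h} e(F(n+h)-F(n))|$, which trades the phase $F$ for its finite differences and thus lowers the “degree” by one:

```python
import cmath

# Toy illustration (not the proof itself) of one Weyl differencing step:
# expanding the square and applying the triangle inequality gives
#   |sum_{n=0}^{N-1} e(F(n))|^2 <= N + 2 * sum_{h=1}^{N-1} |sum_n e(F(n+h) - F(n))|.
# For a quadratic phase F(n) = alpha*n^2 the differenced phase
# 2*alpha*h*n + alpha*h^2 is linear in n, hence a geometric series.

def e(x):
    return cmath.exp(2j * cmath.pi * x)

def weyl_difference_bound(F, N):
    """Right-hand side of the differenced inequality above."""
    total = N
    for h in range(1, N):
        total += 2 * abs(sum(e(F(n + h) - F(n)) for n in range(N - h)))
    return total

alpha, N = 0.123, 200
F = lambda n: alpha * n * n
lhs = abs(sum(e(F(n)) for n in range(N))) ** 2
rhs = weyl_difference_bound(F, N)
assert lhs <= rhs + 1e-6   # the inequality is an exact consequence of Cauchy-Schwarz
print(lhs, rhs)
```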

One can combine Theorem 2 with Proposition 1 to obtain various bounds on the Riemann zeta function:

Exercise 3 (Subconvexity bound)

(i) Show that for all . (Hint: use the case of the van der Corput estimate.)

(ii) For any , show that as .
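One can also test the subconvexity phenomenon numerically. The sketch below evaluates $\zeta(\frac{1}{2}+it)$ by a standard Euler-Maclaurin truncation (parameters chosen ad hoc, adequate only for moderate $t$) and compares it against the shape $t^{1/6} \log t$ that the van der Corput method produces:

```python
import math

# A sketch of the subconvexity phenomenon: |zeta(1/2+it)| grows far more
# slowly than t, consistent with a small power of t such as t^(1/6) times
# logarithms.  The evaluation uses a simple Euler-Maclaurin truncation.

def zeta_em(s, M=400):
    """Euler-Maclaurin approximation to zeta(s) for Re(s) > 0, reasonable
    when M is somewhat larger than |Im(s)|."""
    total = sum(n ** (-s) for n in range(1, M))
    total += M ** (1 - s) / (s - 1)      # tail of the Dirichlet series
    total += 0.5 * M ** (-s)             # Euler-Maclaurin boundary term
    total += s * M ** (-s - 1) / 12      # first Bernoulli correction
    return total

# sanity check against zeta(2) = pi^2 / 6
assert abs(zeta_em(2 + 0j) - math.pi ** 2 / 6) < 1e-6

for t in [50, 100, 200]:
    z = abs(zeta_em(complex(0.5, t)))
    print(t, round(z, 3), round(t ** (1 / 6) * math.log(t), 3))
```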

Exercise 4 Let be such that , and let .

(i) (Littlewood bound) Use the van der Corput estimate to show that whenever .

(ii) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that whenever .

As noted in Exercise 43 of Notes 2, the Vinogradov-Korobov bound leads to the zero-free region , which in turn leads to the prime number theorem with error term

for . If one uses the weaker Littlewood bound instead, one obtains the narrower zero-free region

(which is only slightly wider than the classical zero-free region) and an error term

in the prime number theorem.
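The relative strength of the three error terms discussed above can be sketched numerically (with all implied constants hypothetically set to $1$; only the shapes of the savings are taken from the text):

```python
import math

# Rough comparison (all constants hypothetically 1) of the savings exp(-E(x))
# in the three error terms:
#   classical:          E(x) = (log x)^(1/2)
#   Littlewood:         E(x) = (log x * log log x)^(1/2)
#   Vinogradov-Korobov: E(x) = (log x)^(3/5) / (log log x)^(1/5)
# The Vinogradov-Korobov exponent 3/5 > 1/2 dominates asymptotically, though
# with unit constants it only overtakes the Littlewood saving for
# astronomically large x; the classical saving is smallest in this range.

def savings(log10x):
    L = log10x * math.log(10)          # log x
    LL = math.log(L)                   # log log x
    return {
        "classical": L ** 0.5,
        "littlewood": (L * LL) ** 0.5,
        "vinogradov-korobov": L ** 0.6 / LL ** 0.2,
    }

for log10x in [10, 100, 1000, 10**6]:
    s = savings(log10x)
    print(f"x = 10^{log10x}:", {k: round(v, 1) for k, v in s.items()})
    assert s["classical"] < min(s["littlewood"], s["vinogradov-korobov"])
```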

Exercise 5 (Vinogradov-Korobov in arithmetic progressions) Let be a non-principal character of modulus .

(i) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that whenever and

(Hint: use the Vinogradov estimate and a change of variables to control for various intervals of length at most and residue classes , in the regime (say). For , do not try to capture any cancellation and just use the triangle inequality instead.)

(ii) Obtain a zero-free region

for , for some (effective) absolute constant .

(iii) Obtain the prime number theorem in arithmetic progressions with error term

Theorem 1 (Furstenberg-Sarkozy theorem) Let $\delta > 0$, and suppose that $N$ is sufficiently large depending on $\delta$. Then every subset of $\{1,\dots,N\}$ of density at least $\delta$ contains a pair $n, n+r^2$ for some natural numbers $n, r$ with $r$ non-zero.
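Before turning to the proof, here is a small empirical sketch consistent with the theorem (a greedy construction, not claimed to be extremal): one builds a subset of $\{1,\dots,N\}$ with no two elements differing by a nonzero square and watches how its density behaves as $N$ grows; the theorem asserts that the density of any such set must tend to zero.

```python
# Greedily build a subset of {1,...,N} containing no pair n, n + r^2 with
# r nonzero, and report its density; by Theorem 1 the density of any
# square-difference-free set must go to zero as N grows.

def greedy_square_difference_free(N):
    squares = [r * r for r in range(1, int(N ** 0.5) + 1)]
    taken = set()
    for n in range(1, N + 1):
        if all(n - q not in taken for q in squares):
            taken.add(n)
    return taken

for N in [100, 1000, 10000]:
    A = greedy_square_difference_free(N)
    squares = [r * r for r in range(1, int(N ** 0.5) + 1)]
    # verify the defining property: no pair a, a + r^2 with both in A
    assert all(a + q not in A for a in A for q in squares)
    print(N, len(A), round(len(A) / N, 3))
```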

This theorem is of course similar in spirit to results such as Roth’s theorem or Szemerédi’s theorem, in which the pattern $n, n+r^2$ is replaced by $n, n+r, n+2r$ or $n, n+r, \ldots, n+(k-1)r$ for some fixed $k$ respectively. There are by now many proofs of this theorem (see this recent paper of Lyall for a survey), but most proofs involve some form of Fourier analysis (or spectral theory). This may be compared with the standard proof of Roth’s theorem, which combines some Fourier analysis with what is now known as the density increment argument.

A few years ago, Ben Green, Tamar Ziegler, and I observed that it is possible to prove the Furstenberg-Sarkozy theorem using only the Cauchy-Schwarz inequality (or van der Corput lemma) and the density increment argument, removing all invocations of Fourier analysis and instead relying on Cauchy-Schwarz to linearise the quadratic shift . As such, this theorem can be considered even more elementary than Roth’s theorem (and its proof can be viewed as a toy model for the proof of Roth’s theorem). We ended up not doing too much with this observation, so we decided to share it here.

The first step is to use the density increment argument that goes back to Roth. For any $0 < \delta \leq 1$, let $P(\delta)$ denote the assertion that, for $N$ sufficiently large, all subsets of $\{1,\dots,N\}$ of density at least $\delta$ contain a pair $n, n+r^2$ with $r$ non-zero. Note that $P(\delta)$ is vacuously true for $\delta > 1$. We will show that for any $0 < \delta \leq 1$, one has the implication

for some absolute constant $c > 0$. This implies that $P(\delta)$ is true for any $\delta > 0$ (as can be seen by considering the infimum of all $\delta$ for which $P(\delta)$ holds), which gives Theorem 1.
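The way the increment closes the induction can be sketched numerically, assuming (hypothetically; the exact form of the implication comes from the argument below) that each failure of the conclusion upgrades the density from $\delta$ to $\delta + c\delta^2$ on a subprogression:

```python
# Sketch of why the density increment terminates: under the assumed increment
# delta -> delta + c * delta^2 per step, the density exceeds 1 after at most
# about 2/(c*delta) steps (each doubling of the density from d to 2d takes at
# most d/(c*d^2) = 1/(c*d) steps, and these bounds sum geometrically), so the
# iteration bottoms out at the vacuous case and P(delta) follows.

def increment_steps(delta, c=0.01):
    """Number of increments delta -> delta + c*delta^2 needed to exceed 1."""
    steps = 0
    while delta <= 1:
        delta += c * delta * delta
        steps += 1
    return steps

for d in [0.5, 0.1, 0.01]:
    steps = increment_steps(d)
    assert steps <= 2 / (0.01 * d) + 1   # the geometric-series bound above
    print(d, steps)
```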

It remains to establish the implication (1). Suppose for sake of contradiction that we can find for which holds (for some sufficiently small absolute constant ), but fails. Thus, we can find arbitrarily large , and subsets of of density at least , such that contains no patterns of the form with non-zero. In particular, we have

(The exact ranges of and are not too important here, and could be replaced by various other small powers of if desired.)

Let be the density of , so that . Observe that

and

If we thus set , then

In particular, for large enough,

On the other hand, one easily sees that

and hence by the Cauchy-Schwarz inequality

which we can rearrange as

Shifting by we obtain (again for large enough)

In particular, by the pigeonhole principle (and deleting the diagonal case , which we can do for large enough) we can find distinct such that

so in particular

If we set and shift by , we can simplify this (again for large enough) as

or equivalently has density at least on the arithmetic progression , which has length and spacing , for some absolute constant . By partitioning this progression into subprogressions of spacing and length (plus an error set of size ), we see from the pigeonhole principle that we can find a progression of length and spacing on which has density at least (and hence at least ) for some absolute constant . If we then apply the induction hypothesis to the set

we conclude (for large enough) that contains a pair for some natural numbers with non-zero. This implies that lie in , a contradiction, establishing the implication (1).
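The partition-and-pigeonhole step used above can be illustrated with a toy computation (all parameters below are hypothetical; the actual spacings and lengths come from the argument):

```python
# Toy illustration of the partition-and-pigeonhole step: split a progression
# into q interleaved subprogressions (multiplying the spacing by q), cut each
# into blocks of a fixed length (discarding a small error set), and locate a
# block on which the set has density at least its global density; averaging
# guarantees this up to the loss from the discarded elements.

def best_subprogression_density(A, P, q, length):
    """Max density of A over length-`length` blocks of the q interleaved
    subprogressions of the progression P (given as a list)."""
    Aset = set(A)
    best = 0.0
    for start in range(q):
        cls = P[start::q]                    # subprogression, q times the spacing
        for i in range(0, len(cls) - length + 1, length):
            block = cls[i:i + length]
            best = max(best, sum(x in Aset for x in block) / length)
    return best

P = list(range(100000))
A = [x for x in P if x % 3 == 0]             # global density about 1/3
best = best_subprogression_density(A, P, q=7, length=50)
assert best >= 1 / 3                         # some block does at least as well
print(best)
```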

A more careful analysis of the above argument reveals a more quantitative version of Theorem 1: for (say), any subset of of density at least for some sufficiently large absolute constant contains a pair with non-zero. This is not the best bound known; a (difficult) result of Pintz, Steiger, and Szemerédi allows the density to be as low as . On the other hand, this already improves on the (simpler) Fourier-analytic argument of Green that works for densities at least (although the original argument of Sarkozy, which is a little more intricate, works up to ). In the other direction, a construction of Ruzsa gives a set of density without any pairs .

Remark 1 A similar argument also applies with replaced by for fixed , because this sort of pattern is preserved by affine dilations into arithmetic progressions whose spacing is a power. By re-introducing Fourier analysis, one can also perform an argument of this type for where is the sum of two squares; see the above-mentioned paper of Green for details. However there seems to be some technical difficulty in extending it to patterns of the form for polynomials that consist of more than a single monomial (and with the normalisation , to avoid local obstructions), because one no longer has this preservation property.
