for some function $f(X)$ that went to infinity as $X \to \infty$; this improved upon previous work of Rankin and other authors, who established the same bound but with $f(X)$ replaced by a constant. (Again, see the previous post for a more detailed discussion.)

In our previous papers, we did not specify a particular growth rate for $f(X)$. In my paper with Kevin, Ben, and Sergei, there was a good reason for this: our argument relied (amongst other things) on the inverse conjecture on the Gowers norms, as well as the Siegel–Walfisz theorem, and the known proofs of both results have ineffective constants, rendering our growth function $f(X)$ similarly ineffective. Maynard's approach ostensibly also relies on the Siegel–Walfisz theorem, but (as shown in another recent paper of his) can be made quite effective, even when tracking $k$-tuples of fairly large size (about $\log^{c} x$ for some small $c > 0$). If one carefully makes all the bounds in Maynard's argument quantitative, one eventually ends up with a growth rate of shape

$\displaystyle f(X) = c \frac{\log\log\log X}{\log\log\log\log X} \ \ \ \ \ (2)$
thus leading to a bound

$\displaystyle G(X) \gg \log X \frac{\log\log X}{\log\log\log X}$

on the gaps between primes for all sufficiently large $X$; this is an unpublished calculation of James'.

In this paper we make a further refinement of this calculation to obtain a growth rate

$\displaystyle f(X) = c \log\log\log X \ \ \ \ \ (3)$
leading to a bound of the form

$\displaystyle G(X) \geq c \log X \frac{\log\log X \, \log\log\log\log X}{\log\log\log X} \ \ \ \ \ (4)$
for all sufficiently large $X$ and some small constant $c > 0$. Furthermore, this appears to be the limit of current technology (in particular, it falls well short of Cramér's conjecture that $G(X)$ is comparable to $\log^2 X$); in the spirit of Erdős' original prize on this problem, I would like to offer 10,000 USD for anyone who can show (in a refereed publication, of course) that the constant $c$ here can be replaced by an arbitrarily large constant $C$.
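(For orientation, here is the elementary arithmetic connecting these statements. Recalling that the bound (1) has the shape $f(X) \log X \frac{\log\log X \, \log\log\log\log X}{(\log\log\log X)^2}$, substituting the growth rate (3) for $f(X)$ gives

$\displaystyle c \log\log\log X \cdot \log X \frac{\log\log X \, \log\log\log\log X}{(\log\log\log X)^2} = c \log X \frac{\log\log X \, \log\log\log\log X}{\log\log\log X},$

which is the bound (4); substituting (2) instead gives the weaker bound displayed after (2), in which the $\log\log\log\log X$ factor in the numerator is lost.)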

The reason for the growth rate (3) is as follows. After following the sieving process discussed in the previous post, the problem comes down to something like the following: can one sieve out all (or almost all) of the primes in an interval $(x, y]$ by removing one residue class modulo $p$ for each prime $p$ in (say) $[x/4, x/2]$? Very roughly speaking, the larger one can take $y$ relative to $x$ in this problem, the larger the growth rate $f(X)$ one obtains, with $x$ comparable to $\log X$. (This is an oversimplification, as one actually has to sieve out a random subset of the primes, rather than all the primes in $(x,y]$, but never mind this detail for now.)
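(For readers who have not seen the Erdős–Rankin mechanism, here is the standard reasoning, stated roughly and with ad hoc notation, for why such a sieving statement produces long prime gaps. Suppose one can choose residue classes $a_p \pmod p$, one for each prime $p \leq x$, whose union contains every integer in $\{1, \dots, y\}$. By the Chinese remainder theorem one can find $m \in (P(x), 2P(x)]$, where $P(x) := \prod_{p \leq x} p$, with $m \equiv -a_p \pmod p$ for all $p \leq x$; every element of $m+1, \dots, m+y$ is then divisible by some prime $p \leq x$ and, being much larger than $x$, is composite. This exhibits a prime gap of length at least $y$ below $2P(x) + y$, and since $P(x) = e^{(1+o(1))x}$ by the prime number theorem, one obtains $G(X) \gtrsim y$ with $x$ of order $\log X$. In the refined version used here, simple choices of classes dispose of most elements of the interval, and the residual difficulty is covering the primes in the interval, which is the problem stated above.)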

Using the quantitative “dense clusters of primes” machinery of Maynard, one can find lots of $k$-tuples of integers in $(x,y]$ which contain at least $\gg \log k$ primes, for $k$ as large as $\log^{c} x$ or so (so that $\log k$ is about $c \log\log x$). By considering $k$-tuples in arithmetic progression, this means that one can find lots of residue classes modulo a given prime $p$ in $[x/4, x/2]$ that capture about $\log\log x$ primes each. In principle, this means that the union of all these residue classes can cover enough of the primes to allow one to take $y$ as large as the growth rate (3) requires. However, there is a catch: the residue classes for different primes $p$ may collide with each other, reducing the efficiency of the covering. In our previous papers on the subject, we selected the residue classes randomly, which meant that we had to insert an additional logarithmic safety margin in the expected number of times each prime would be sifted out by one of the residue classes, in order to guarantee that we would (with high probability) sift out most of the primes. This additional safety margin is ultimately responsible for the loss in (2).

The main innovation of this paper, beyond detailing James’ unpublished calculations, is to use ideas from the literature on efficient hypergraph covering to avoid the need for a logarithmic safety margin. The hypergraph covering problem, roughly speaking, is to try to cover a set $V$ of vertices using as few “edges” from a given hypergraph $H$ as possible. If each edge has $k$ vertices, then one certainly needs at least $|V|/k$ edges to cover all the vertices, and the question is to see if one can come close to attaining this bound given some reasonable uniform distribution hypotheses on the hypergraph $H$. As before, random methods tend to require an additional logarithmic factor beyond this bound before one expects to cover all but a small fraction of the vertices.
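(Here is a back-of-the-envelope version of this logarithmic loss; it is only a heuristic, with the uniformity of the hypergraph taken for granted. If edges are selected uniformly at random, and each edge covers a given vertex with probability about $k/|V|$, then after selecting $m$ edges a given vertex remains uncovered with probability about $(1 - k/|V|)^m \approx e^{-mk/|V|}$. To make this failure probability as small as $\epsilon$ one needs $m \gtrsim \frac{|V|}{k} \log \frac{1}{\epsilon}$, and when $\epsilon$ has to be rather small, as it does in the application to primes, this $\log \frac{1}{\epsilon}$ is the safety margin referred to above.)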

However, it turns out that (under reasonable hypotheses on $H$) one can eliminate this logarithmic loss by using what is now known as the “semi-random method” or the “Rödl nibble”. The idea is to randomly select a small number of edges (a first “nibble”) – small enough that the edges are unlikely to overlap much with each other, thus obtaining maximal efficiency. Then one pauses to remove all the edges from $H$ that intersect edges from this first nibble, so that all remaining edges will not overlap with the edges already selected. One then randomly selects another small number of edges (a second “nibble”), and repeats this process until enough nibbles are taken to cover most of the vertices. Remarkably, it turns out that under some reasonable assumptions on the hypergraph $H$, one can maintain control on the uniform distribution of the edges throughout the nibbling process, and obtain an efficient hypergraph covering. This strategy was carried out in detail in an influential paper of Pippenger and Spencer; a toy implementation of the nibbling procedure is sketched below.
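To make the mechanics concrete, here is a minimal Python sketch of the nibbling procedure on a toy random hypergraph. Everything here (the hypergraph model, the parameter choices, the stopping rule) is invented purely for illustration and has nothing to do with the hypergraphs of residue classes arising in the paper, and the sketch makes no attempt to reproduce the Pippenger–Spencer analysis; it only illustrates that the edges accepted by the nibbling process are nearly disjoint, so that each accepted edge covers close to $k$ new vertices, whereas uniformly random selection wastes edges through collisions.

```python
import random

rng = random.Random(0)

# Toy k-uniform hypergraph: vertices are 0..n-1, each edge a random k-subset.
# (These parameters are arbitrary and purely illustrative.)
n, k, n_edges = 5000, 5, 100000
vertices = list(range(n))
edges = [frozenset(rng.sample(vertices, k)) for _ in range(n_edges)]

def nibble_cover(edges, nibble_size, rng):
    """Semi-random ("nibble") covering: repeatedly take a small random batch of
    the edges that are still allowed, add it to the cover, then discard every
    edge meeting an edge chosen so far, so later nibbles cannot collide with
    earlier ones.  Returns the chosen edges and the set of covered vertices."""
    chosen, covered = [], set()
    remaining = list(edges)
    while remaining:
        batch = rng.sample(remaining, min(nibble_size, len(remaining)))
        chosen.extend(batch)
        for e in batch:
            covered |= e
        remaining = [e for e in remaining if not (e & covered)]
    return chosen, covered

def naive_random_cover(edges, target, rng):
    """Pick edges uniformly at random (collisions allowed) until `target`
    vertices are covered; return the number of edges used."""
    covered, used = set(), 0
    while len(covered) < target:
        covered |= rng.choice(edges)
        used += 1
    return used

chosen, covered = nibble_cover(edges, nibble_size=25, rng=rng)
print(f"nibble: {len(chosen)} edges cover {len(covered)} vertices "
      f"({len(covered) / len(chosen):.2f} new vertices per edge, k = {k})")
print(f"naive random selection needs {naive_random_cover(edges, len(covered), rng)} "
      f"edges to cover as many vertices")
```

On this toy instance the nibble's accepted edges are close to disjoint, while the naive selection needs noticeably more edges for the same coverage; of course, none of this is a substitute for the actual analysis in the paper.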

In our setup, the vertices are the primes to be sifted out, and the edges are the intersections of this set of primes with various residue classes. (Technically, we have to work with a family of hypergraphs indexed by a prime $p$, rather than a single hypergraph, but let me ignore this minor technical detail.) The semi-random method would in principle eliminate the logarithmic loss and recover the bound (3). However, there is a catch: the analysis of Pippenger and Spencer relies heavily on the assumption that the hypergraph is uniform, that is to say all edges have the same size. In our context, this requirement would mean that each residue class captures exactly the same number of primes, which is not the case; we only control the number of primes in an average sense, and we were unable to obtain enough concentration of measure to come close to verifying this hypothesis. And indeed, the semi-random method, when applied naively, does not work well with edges of variable size – the problem is that edges of large size are much more likely to be eliminated after each nibble than edges of small size, since they have many more vertices that could overlap with the previous nibbles. Since the large edges are clearly more useful for the covering problem than the small ones, this bias towards eliminating large edges significantly reduces the efficiency of the semi-random method (and also greatly complicates the analysis of that method).
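(The bias is easy to quantify heuristically: if a single nibble covers each vertex independently with probability roughly $\delta$, then an edge with $s$ vertices avoids the nibble entirely, and hence survives to the next round, with probability roughly $(1-\delta)^{s}$, which decays exponentially in $s$. So after a few nibbles the surviving edges are heavily skewed towards small $s$, even if the original hypergraph was not.)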

Our solution to this is to iteratively reweight the probability distribution on edges after each nibble to compensate for this bias effect, giving larger edges a greater weight than smaller edges. It turns out that there is a natural way to do this reweighting that allows one to repeat the Pippenger-Spencer analysis in the presence of edges of variable size, and this ultimately allows us to recover the full growth rate (3).
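(As a caricature of why such a reweighting can work – this is not the actual weighting used in the paper, just the simplest toy version – suppose, as in the heuristic above, that an edge of size $s$ survives a given nibble with probability about $(1-\delta)^{s}$. If one then multiplies the weight of each surviving edge by $(1-\delta)^{-s}$, the expected total weight attached to edges of any given size is unchanged by the nibble, so the weighted size distribution of the edges is preserved on average, and the uniform-size assumption can be replaced by control of these weights. The actual argument of course has to make this precise and handle several further complications.)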

To go beyond (3), one either has to find a lot of residue classes that can capture significantly more than $\log\log x$ primes of size roughly $x$ (which is the limit of the multidimensional Selberg sieve of Maynard and myself), or else one has to find a very different method for producing large gaps between primes than the Erdős–Rankin method, which is the method used in all previous work on the subject.

It turns out that the arguments in this paper can be combined with the Maier matrix method to also produce chains of consecutive large prime gaps whose size is of the order of (4); three of us (Kevin, James, and myself) will detail this in a future paper. (A similar combination was also recently observed by Pintz in connection with our earlier result (1), but there are some additional technical wrinkles required to recover the full gain of (3) for the chains-of-large-gaps problem.)

Hmm. One could try simply applying our covering theorem off-the-shelf (Corollary 1 from our paper), but it looks like the hypergraph here is already uniform, so our theorem won’t add much power beyond what existing tools give; we do allow the edges to grow somewhat in size, but perhaps not as fast as in that application. But one would have to compute all the parameters carefully to see for certain whether our version of the theorem gets any further than previous ones with regard to what values of $k$ and $t$ it can handle. (We also didn’t try very hard to optimise the bound on the codegree needed, since in our application the codegree ends up being very small.)

Yes, we manage to avoid the use of the (ineffective) Siegel–Walfisz theorem by deleting an exceptional prime from the multidimensional Selberg sieve, leading to an effective (but quite small) value of $c$.

It depends on how large one can make $y$ in terms of $x$, the order of magnitude of the primes being considered (and in the Erdős–Rankin method, $x$ is of the order of $\log X$). Roughly speaking, the quantity $f(X)$ in (1) is then governed by how large one can take $y$. With current technology the relevant parameters can only be pushed so far, which is why we can only make the growth rate as large as (3). (In our previous papers the analogous parameter was taken to be a large constant independent of $x$, which does not by itself create a gain that grows with $X$.)

If one assumes a sufficiently strong version of the prime tuples conjecture, then one could conceivably take these parameters somewhat larger, leading to a better bound. This is likely to be the limit of the Erdős–Rankin method (from probabilistic heuristics, one should not expect the relevant parameters to ever get larger than this if one is to capture primes of the size being considered).

There are some typos and areas for small improvements in clarity and exposition (e.g. “that are all either at least log 20 x .”). Would you like me to send the corrections and suggestions to you in bulk via email, or would you prefer them piecemeal here as comments?

Ok. In your section 1, isn’t Y(x) just j(P(x)) - 1 [or h(x) - 1, to alter notation from Hagedorn’s 2009 paper, or even L(P(x)), using Jacobsthal’s L(x) from his 1960 paper]? I may have got Y(x) wrong, but if not, I see some paragraphs in that section that could be made clearer. Also, in defining an arrow right after (3.5), I think you want a script S for the index set, not the plain S in “s \in S”.

Is this about the size of the gaps? … The gaps between two consecutive primes cannot be determined; in fact, they can be as large as the largest prime known to date, and such a large gap is repeated indefinitely, as are all the gaps one can find up to that prime. If tomorrow a much larger prime is found, say one with a million million digits, then there is a gap of that size, and many of twice that size, and so on…. I can prove it. Whether a gap doubles in size depends on the two numbers in the middle of two consecutive gaps of the same size, which can be twins, or one prime and the other not, or both composite; in this last case the gap doubles.
