Archive for September, 2010

After a long initial discussion period, Polymath3, a project on Gil Kalai’s blog that aims to solve the polynomial Hirsch conjecture, has now started as an active research project. There is already quite a lot of material on his blog, and soon some of it should have migrated to a wiki, which will be a good place to get up to speed on the basics. I will update this post from time to time with news of how the project is going.

Something that is maybe worth pointing out is that although the problem looks at first as though you need to know all sorts of facts about convexity, there is a very nice purely combinatorial statement (about set systems) that would imply the conjecture. So there is no excuse not to think about it …

This is a post I’ve been meaning to write for some time. As most readers will know, at the very end of 2008 Alex Frolkin, Olof Sisask and I started the Tricki, a wiki-like website where people could post articles about mathematical techniques. The hope was that after the site had reached some level of completeness, it would be possible to take a mathematics research problem (or subproblem) and search efficiently for known techniques that were likely to be relevant. It would be doing something a little different from Wikipedia mathematics articles, which concentrate more on what I like to think of as “things with names”. For instance, if you suspect that discrete Fourier analysis is likely to be a useful tool for your problem, then you can type “discrete Fourier analysis” into Google and find many links, including to a Wikipedia article that contains many of the basic facts. But what if it doesn’t occur to you that discrete Fourier analysis is what you need (even though it in fact is)? The idea was that, using easily identifiable features of your problem, you would be able to follow links to ever more specific Tricki pages until you reached a page that explained when discrete Fourier analysis was useful and how it was used. In general, the whole site would be about how to do mathematics rather than about lists of facts.

The situation we are now in with EDP (the Erdős discrepancy problem) is roughly this. We have a statement that would imply EDP and that seems easier to prove because it is a reasonably simple existential statement (there exists a decomposition with certain properties) rather than a universal statement (every $\pm 1$ function has unbounded discrepancy on HAPs, that is, homogeneous arithmetic progressions). However, the existential statement seems itself to be hard, despite one’s initial expectation that it ought to be easier. So what do we do?
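As a concrete illustration of the universal statement, here is a small brute-force sketch (the function name is my own) that computes the maximum HAP discrepancy of a ±1 sequence. An exhaustive search then confirms the well-known small case: discrepancy 1 is possible up to length 11, but every ±1 sequence of length 12 already has discrepancy at least 2.

```python
from itertools import product

def hap_discrepancy(x):
    """Maximum |x_d + x_{2d} + ... + x_{md}| over all HAPs {d, 2d, ..., md},
    for a sequence x = (x_1, ..., x_n) stored 0-indexed."""
    n = len(x)
    best = 0
    for d in range(1, n + 1):              # common difference of the HAP
        s = 0
        for k in range(d - 1, n, d):       # positions d, 2d, ... (0-indexed)
            s += x[k]
            best = max(best, abs(s))       # each prefix of the HAP is a HAP
    return best

# A length-11 sequence of discrepancy 1 exists ...
assert hap_discrepancy((1, -1, -1, 1, -1, 1, 1, -1, -1, 1, 1)) == 1
# ... but at length 12 every sequence has discrepancy at least 2:
worst = min(hap_discrepancy(x) for x in product((1, -1), repeat=12))
```

EDP asserts that this lower bound tends to infinity with the length of the sequence.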

One tool that we have at our disposal is duality, which is what we used to convert the problem to an existential one in the first place. Now obviously we don’t want to apply duality twice and end up with the original problem, but, perhaps surprisingly, there are ways that applying duality twice could be useful.

Here are two such ways. The first is that you prove that a certain kind of decomposition would be sufficient to prove EDP. Then you argue that if such a decomposition exists, then a more restricted kind of decomposition must also exist. Dualizing again, one ends up with a new discrepancy problem that is different from the original one (though it will imply it). The second way is this: if it is not easy to write down a decomposition that works, then one wants to narrow down the search space. And one way of doing that is to prove rigorously that certain kinds of decompositions do not exist. And an efficient way of doing that is to use duality: that is, one finds a function with low discrepancy on the class of sets that one was hoping to use for the decomposition. Since this class is restricted, solving the discrepancy problem is easier than solving EDP (but this time it doesn’t imply EDP).
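A toy version of this second use of duality (the names and the choice of restricted class are mine, purely for illustration): if the restricted class consists only of the HAPs with common difference 1, that is, the initial segments, then the alternating sequence has discrepancy 1 on every set in the class. That is exactly the kind of certificate showing that no useful decomposition can be built from that class alone.

```python
def discrepancy_on(x, sets):
    """Largest |sum of x over S| over a class of sets S of 1-based indices."""
    return max(abs(sum(x[i - 1] for i in S)) for S in sets)

n = 100
prefixes = [range(1, m + 1) for m in range(1, n + 1)]  # HAPs of difference 1
alternating = [(-1) ** i for i in range(n)]            # +1, -1, +1, -1, ...
cert = discrepancy_on(alternating, prefixes)           # equals 1 for every n
```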

We have already had an example of the first use of dualizing twice. In this post I want to give in detail an example of the second.

Consider the problem of showing that the product of any $k$ consecutive positive integers (or indeed any $k$ consecutive integers, not that that is a significant extension) is divisible by $k!$. I think the proof that most experienced mathematicians would give is the slick one that $n(n-1)\cdots(n-k+1)$ divided by $k!$ is $\binom nk$, and so is the number of ways of choosing $k$ objects from $n$ objects. Since the latter must be an integer, so must the former.

One might argue that this is not a complete proof, because one must show that $\binom nk$ really is the number of ways of choosing $k$ objects from $n$, but that is not hard to do.
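For anyone who wants to see the identity behind the slick proof in action, here is a quick mechanical check (the helper name is mine) that the product of k consecutive integers ending at n equals k! times the binomial coefficient C(n, k), and is therefore divisible by k!.

```python
from math import comb, factorial

def consecutive_product(n, k):
    """The product n(n-1)...(n-k+1) of the k consecutive integers ending at n."""
    p = 1
    for j in range(k):
        p *= n - j
    return p

# n(n-1)...(n-k+1) = k! * C(n, k), so the product is divisible by k!:
for n in range(1, 40):
    for k in range(1, n + 1):
        assert consecutive_product(n, k) == factorial(k) * comb(n, k)
```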

I think this will be a bit long for a comment, so I’ll make it a post instead. I want to try to say as clearly as I can (which is not 100% clearly) what we know about a certain way of constructing a decomposition of the identity on $\mathbb{N}$. Recall from the last post or two that what we want to do is this. Define a square in $\mathbb{N}^2$ to be a set of the form $[a,b]\times[a,b]$, where by $[a,b]$ I mean the set of all positive integers $m$ such that $a\le m\le b$. Let us identify sets with their characteristic functions. We are trying to find, for any constant $C$, a collection of squares $S_1,\dots,S_k$ and some coefficients $\lambda_1,\dots,\lambda_k$ with the following properties.

First, $\sum_i\lambda_i s_i\ge C\sum_i|\lambda_i|$, where $S_i=[a_i,b_i]\times[a_i,b_i]$ and $s_i=b_i-a_i+1$ is the number of points in the interval that defines $S_i$ or, more relevantly, the number of points in the intersection of $S_i$ with the main diagonal of $\mathbb{N}^2$.

Secondly, let $f=\sum_i\lambda_i S_i$. Then for any pair of distinct coprime positive integers $p$ and $q$ we have $\sum_m f(pm,qm)=0$.

The second condition tells us that the off-diagonal elements of the matrix you get when you convert the decomposition into a matrix indexed by pairs of coprime positive integers are all zero, and the first condition tells us that we have an efficient decomposition in the sense that we care about. In my previous post I showed why obtaining a collection of squares for a constant $C$ implies that the discrepancy of an arbitrary $\pm 1$ sequence is at least $\sqrt C$. In this post I want to discuss some ideas for constructing such a system of squares and coefficients. I’ll look partly at ideas that don’t work, so that we can get a sense of what constraints are operating, and partly at ideas that might have a chance of working. I do not guarantee that the latter class of ideas will withstand even five minutes of serious thought: I have already found many approaches promising, only to dismiss them for almost trivial reasons. [Added later: the attempt to write up even the half promising ideas seems to have killed them off. So in the end this post consists entirely of half-baked ideas that I’m pretty sure don’t work. I hope this will lead either to some new and better ideas or to a convincing argument that the approach I am trying to use to create a decomposition cannot work.]
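To make the off-diagonal condition concrete, here is a sketch under my reading of it (the squares, coefficients and helper names are purely illustrative): build the matrix sum of lambda_i times the characteristic function of the square [a_i, b_i] x [a_i, b_i], then sum its entries along a coprime ray (pm, qm) — that ray sum is the off-diagonal entry that the second condition requires to vanish.

```python
import numpy as np
from math import gcd

def square_sum(squares, coeffs, n):
    """Sum of lambda_i times the characteristic function of the square
    [a_i, b_i] x [a_i, b_i]; returned as (n+1) x (n+1), 1-based (row/col 0 unused)."""
    M = np.zeros((n + 1, n + 1))
    for (a, b), lam in zip(squares, coeffs):
        M[a:b + 1, a:b + 1] += lam
    return M

def ray_sum(M, p, q):
    """Sum of M[p*m, q*m] over all m in range: for coprime p != q this is the
    off-diagonal entry that the second condition requires to be zero."""
    n = M.shape[0] - 1
    return sum(M[p * m, q * m] for m in range(1, n // max(p, q) + 1))

# Toy decomposition (numbers hypothetical): two overlapping squares.
M = square_sum([(1, 4), (3, 6)], [1.0, -1.0], n=6)
rays = {(p, q): ray_sum(M, p, q)
        for p in range(1, 7) for q in range(1, 7)
        if p < q and gcd(p, q) == 1}
```

This toy pair does not satisfy the condition (the ray (1, 2), for instance, sums to 1); the point is only to show what would have to be checked.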

I said in my post about the fourth day of the ICM that if you got the chance to see Simon McBurney’s play A Disappearing Number then you should. Well, I have just learned that it has a short run in London coming up — from today to the 25th of this month. If you open their West End Leaflet you will find some information about the play, and tickets can be booked at this page.

In the comments on EDP18 we are considering a certain decomposition problem that can be understood in its own right. At various points I have asserted that if we can find a decomposition of a particular kind then we will have a positive solution to EDP. And at various points in the past I have even sketched proofs of this. But I think it would be a good idea to do more than merely sketch a proof. So in this post I shall (I hope) give a completely rigorous derivation of EDP from the existence of an appropriate decomposition. (Well, I may be slightly sketchy in places, but only about details where it is obvious that they can be made precise.) I shall also review some material from earlier posts and comments, rather than giving links.

Representing diagonal matrices

First, let me briefly look again at how the ROD (representation of diagonal) approach works. If $P$ and $Q$ are HAPs, I shall write $P\otimes Q$ for the matrix such that $(P\otimes Q)_{xy}=1$ if $x\in P$ and $y\in Q$, and 0 otherwise. The main thing we need to know about $P\otimes Q$ is that $\langle (P\otimes Q)x,x\rangle=\bigl(\sum_{p\in P}x_p\bigr)\bigl(\sum_{q\in Q}x_q\bigr)$ for every $x$.

Suppose now that $D$ is a diagonal matrix with diagonal entries $d_1,\dots,d_n$ and that we can write it as $\sum_i\lambda_iP_i\otimes Q_i$, where each $P_i$ and each $Q_i$ is a HAP. Then $\sum_i d_ix_i^2=\langle Dx,x\rangle=\sum_i\lambda_i\bigl(\sum_{p\in P_i}x_p\bigr)\bigl(\sum_{q\in Q_i}x_q\bigr)$.

If $x_i=\pm 1$ and $d_i\ge 0$ for every $i$, then it follows that there exists $i$ such that $\bigl|\sum_{p\in P_i}x_p\bigr|\bigl|\sum_{q\in Q_i}x_q\bigr|\ge\sum_jd_j/\sum_j|\lambda_j|,$

and from that it follows that there is a HAP $P$ such that $\bigl|\sum_{p\in P}x_p\bigr|\ge\bigl(\sum_jd_j/\sum_j|\lambda_j|\bigr)^{1/2}$. So if we can make $\sum_j|\lambda_j|/\sum_jd_j$ arbitrarily small, then EDP is proved.
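The identity for P ⊗ Q that drives this argument is easy to check numerically. Here is a minimal sketch (the helper names are mine), with sequences stored 1-based and position 0 used as padding:

```python
import numpy as np

def hap(d, length, n):
    """Indicator vector of the HAP {d, 2d, ..., length*d} inside {1, ..., n}."""
    v = np.zeros(n + 1)                    # position 0 is unused padding
    v[d:d * length + 1:d] = 1
    return v

def tensor(P, Q):
    """P (x) Q: the matrix whose (x, y) entry is 1 iff x is in P and y is in Q."""
    return np.outer(P, Q)

n = 12
rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=n + 1)    # a random +-1 sequence
x[0] = 0.0

P, Q = hap(2, 4, n), hap(3, 3, n)
# The key fact: <(P (x) Q) x, x> = (sum of x over P) * (sum of x over Q).
lhs = x @ tensor(P, Q) @ x
rhs = (P @ x) * (Q @ x)
```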

The title of this post is meant to serve various purposes. First and foremost, it is a cheap trick designed to attract attention. Secondly, and relatedly, it is a nod to the amusing events of the last week or so. [Added later: they were from the last week or so when I wrote that sentence.] But there is a third reason that slightly excuses the first two, which is that the current state of play with the EDP project has a very P≠NP-ish feel to it. Indeed, that has been the case for a while: we are trying to find a clever decomposition of a diagonal matrix, which is a difficult search problem, even though we can be fairly confident that if somebody came up with a good candidate for a decomposition, then checking that it worked would be straightforward. And just in case that is not true, let’s make it trivially true by saying that we are searching for a decomposition that can easily be checked to work. If it can easily be checked to work, then it can easily be checked that it can easily be checked to work (the algorithm being to try to check it and see whether you succeed). But now I want to air a suggestion that reduces the search problem to another one that has similar properties but may be easier.

A brief word also on why I am posting again on EDP despite the fact that we are nowhere near 100 comments on the previous post. The main reason is that, now that the rate of commenting has slowed to a trickle, it is far from clear that the same rules should apply. I think the 100-comment rule was a good sufficient condition for a new post, but now I think I want to add a couple more: if there is something to say and quite a long time has elapsed since the previous post, or if there is something to say that takes a while to explain and is not a direct continuation of the current discussion, then it seems a good idea to have a new post. And both these conditions apply.

[Added later: this is a rather strange post written over a few weeks during which my thoughts on the problem were constantly changing. So everything that I say, particularly early on, should be taken with a pinch of salt as I may contradict it later. One approach to reading the post might be to skim it, read the very end a bit more carefully, and then refer back to the earlier parts if you want to know where various ideas came from.]

The previous post was the final post in the sense of being the last post describing my experience of the ICM. But here I’ll just quickly collect together a few bits of information that it might be handy to have in the same place. I’ll start with links to the recordings of all the talks I have described that were recorded. (You can find these, and all the other talks, by going to the ICM website, but my experience is that they are organized in a rather irritating way: on one page you have a schedule but no links to videos, and on a separate page you have links to lots of videos but are not told which link is to which talk.) Then I’ll collect together my favourite quotes from my four days at the congress. Finally, I’ll give a collection of links. If anyone has any suggestions for possible additions to this page, I’ll be happy to consider them.

Talks discussed on this blog

Opening ceremony Part I (This starts with a close-up of Kevin O’Bryant, includes about 15 minutes before the ceremony started, which allows you to hear, not very well, the Indian music that was going on, and gets up to just before the announcement of the Fields medallists.)

Opening ceremony Part II (This takes you from the announcement of the Fields medals to Martin Grötschel’s amusing discussion of impact factors.)

Opening ceremony Part III (The last ten minutes, starts in the middle of Grötschel’s talk and includes his demonstration of the IMU page with all ICM proceedings on it)

Laudationes Part I (Starts with twenty minutes of empty stage — the result of the laudationes starting late — and gives you all of Furstenberg on Lindenstrauss and the beginning of Arthur on Ngo)

I’ve entitled this post “fourth day” in an attempt to encourage myself to write less and get this account finished: with each passing day I find that more has slipped out of my mind (for instance, there are several hours of this day that I no longer remember anything about), and in any case the fourth day of a nine-day conference that ended last week is hardly hot news any more. Having said that, I have tried the trick with several previous posts in this sequence and been forced to change their titles.

Yet again the organizers gave the first slot of the day to a speaker I couldn’t bear to miss — David Aldous, one of the world’s very top probabilists. So yet again I arrived exhausted at the convention centre. Incidentally, here is a photo (from the second day, as it happens) that shows what arriving at the convention centre looked like. If you look closely you’ll see that there is a dramatic gender imbalance: that is because the “ladies” had been told to go to a different queue. At first I was extremely surprised by this, but there was a simple reason for the segregation: the male queue had a male frisker and the female queue had a female frisker. You can also just make out the airport-like metal-detecting cuboid skeletons we had to walk through on entering the building.