Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch

Thursday, September 06, 2012

The Time of Research

Among the many conference/journal discussions, one systems person said the reason they don't publish in journals is that their work depends heavily on current technology. A few years in the future the technology will change and the research will no longer be relevant. The best systems research develops ideas that transcend technology, but much research in systems has a limited lifespan of relevancy.

In theory, if you prove a theorem, that theorem will be true forever. In fact that theorem was always true; we just didn't know it. So it is important to have refereed long-term archival write-ups of these results. A theorem could be supplanted by another result or become less interesting over time, but it always remains true. Theory results transcend technology, though most theory results have a zero lifespan of relevancy.

9 comments:

Some journals (including at least one ACM journal) disallow any submission that contains substantial content that has previously appeared, e.g., in a conference paper. Some (including the mentioned ACM journal) even disallow submissions for which the main result or conclusion has appeared in a press release. To me this seems completely opposed to the purpose of a refereed archival journal. It places the author(s) in the position of having to choose between timely dissemination and archival status. I wonder at this point in time whether the latter is even meaningful.

I wonder if the systems person is right. At a minimum, well-written journal papers allow future researchers to explore the history of ideas in systems research, just as I had to read about the first stack machine for my architecture class. But even more, it seems to me (from my very limited perspective on systems) that systems returns to some of the same sets of ideas again and again (just as computing power has shifted from servers to clients and back towards servers in my lifetime, and the ability to handle all the data in memory was possible only during a brief interval), only with different types of tradeoffs possible each time. It might be useful to have these ideas well-documented rather than require future generations to rediscover them.

Your comparison between systems and theory seems a bit one-sided. Sure, a theorem is always true, but can become irrelevant. (If you prove a consequence of some assumed separation, and the separation is later shown to be false, the first result is no longer interesting.) But in systems, a system that works at one point in time will work forever, though it may only run on outdated hardware or solve a problem that no longer exists. What is the difference, exactly?

Let T be a theorem. Let t_0 be the time of first publication of T. For any t > t_0, let f(t;T) be the number of new theorems that were proved with the help of T in the time interval (t_0, t]. For sufficiently large t (say t - t_0 = 50 years), f(t;T) is a measure of how productive T has been for mathematicians.

Let k be the average of f(t;T) over all theorems that have appeared in FOCS/STOC. Is k > 1?
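The measure sketched above can be made concrete with a toy computation. The snippet below is a minimal sketch, not a proposal for real bibliometrics: the theorem names, years, and "uses" sets are entirely hypothetical data standing in for a genuine citation record.

```python
# Toy, hypothetical record: each theorem has a publication year and the
# set of earlier theorems used in its proof.
theorems = {
    "T1": {"year": 1970, "uses": set()},
    "T2": {"year": 1985, "uses": {"T1"}},
    "T3": {"year": 1990, "uses": {"T1", "T2"}},
    "T4": {"year": 2005, "uses": {"T1"}},
}

def f(t, T, theorems):
    """Number of new theorems proved with the help of T in (t_0, t],
    where t_0 is T's publication year."""
    t0 = theorems[T]["year"]
    return sum(
        1
        for name, rec in theorems.items()
        if name != T and T in rec["uses"] and t0 < rec["year"] <= t
    )

# k is the average of f(t; T) over all theorems in the record.
k = sum(f(2020, T, theorems) for T in theorems) / len(theorems)
print(f(2020, "T1", theorems))  # T1 helped prove T2, T3, and T4
print(k)
```

On this toy data T1 scores 3, T2 scores 1, and T3 and T4 score 0, so the average k is exactly 1, right at the threshold the comment asks about.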

Some journals (including at least one ACM journal) disallow any submission that contains substantial content that has previously appeared, e.g., in a conference paper. Some (including the mentioned ACM journal) even disallow submissions for which the main result or conclusion has appeared in a press release.

This isn't true of any publications where theory papers appear as far as I know. What journal is it?

The difference is not that theory results remain true while systems results become false. The difference is that in theory most people don't care much about the relevance of their results to applications and industry, so in one sense most theory results were irrelevant even at the time they were discovered and remain so. Systems people, on the other hand, care a lot about applications in industry: their results are expected to be relevant at the point of discovery but can become irrelevant with later advances.

I think this depends crucially on how one defines relevancy. My experience in mathematics graduate school exposed me to an ongoing debate between purists and applied scientists. The more extreme elements in either camp refused to validate the other, a rather destructive perspective.

G.H. Hardy, in A Mathematician's Apology, proudly boasted that prime number theory, a favorite of his, had absolutely no application. Yet it now provides a foundation for modern cryptography. Archimedes didn't dream up the method of exhaustion for the sake of some application; rather, he enjoyed solving problems. Yet this limiting procedure underlies calculus.

Hardy agreed (and Archimedes likely would have) that pure mathematics holds timeless appeal in part because it is enjoyable. Whether your threshold of enjoyment is tic-tac-toe or computing free resolutions, you're playing with the purists.

I feel that a healthy perspective is to recognize that pure and applied mathematics are complementary, and that both are essential for the progress of civilization.