This meeting also brought together most of the 13 PIs from the NSF Focused Research Grant on Algorithmic Randomness (Kolmogorov complexity), four of whom come from computer science (including me) and the rest from computability theory. We have gotten together several times, formally and informally, because of the grant, and we are all friendly with each other, but the CS and logic people have not often mixed in their research activities. Why not? We all like Turing machines, complexity has drawn many of its concepts from computability theory, and this crowd all studies randomness. A few people, Rod Downey for example, have had recent successes in both areas, but why don't we see a larger group of people doing both computability and complexity?
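Kolmogorov complexity K(x), the length of the shortest program that outputs x, is uncomputable, but any compressor gives an upper bound on it. A minimal Python sketch (the helper name `compressed_length` is my own, just for illustration) shows the intuition behind algorithmic randomness: structured strings have short descriptions, while (pseudo)random ones do not.

```python
import random
import zlib


def compressed_length(data: bytes) -> int:
    """Upper bound on Kolmogorov complexity via zlib.

    K(x) itself is uncomputable; a compressor only ever gives an
    upper bound, so this is just an illustrative proxy.
    """
    return len(zlib.compress(data, 9))


# Highly structured: a short program ("print 'a' 1000 times")
# describes it, so the compressed length is tiny.
structured = b"a" * 1000

# Seeded pseudorandom bytes: zlib finds no short description,
# so the compressed length stays near the raw length.
rng = random.Random(0)
pseudorandom = bytes(rng.randrange(256) for _ in range(1000))

print(compressed_length(structured))    # far below 1000
print(compressed_length(pseudorandom))  # close to 1000
```

The gap between the two outputs is the compressibility distinction that notions of algorithmic randomness make precise.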

It didn't use to be that way: until the '90s there were quite a few people active in both worlds. But while computability and complexity sound related, the techniques used to study these areas are quite different. Despite the similarities, the P versus NP problem is a different animal from computable versus c.e.

As our fields get more specialized, it becomes harder to keep up with the latest tools elsewhere, and so complexity and computability have drifted apart. The two then develop different cultures, making it hard to jump between them. Even understanding the importance of the problems people work on in other fields takes some effort.

Still, the FRG grant and these meetings bring some of us from complexity and computability, all children of Turing, together to talk, and at least help us appreciate each other's work.

7 comments:

Seems to me that computability research (or at least this FRG on Algorithmic Randomness) has become "too much" like math in the sense that it has lost touch with reality. One can make that case with some complexity research also, but I think the difference is that complexity research is (usually) at least motivated by questions of practical interest. If you (or anyone else) can make this case for the problems being studied by this grant, I'd be interested to hear it.

So maybe that's why there's not so much interaction between the two fields.

I guess computability was always part of math; it has not changed. It is complexity that has changed over the decades, by moving away from techniques of recursion theory, like diagonalization, probably because people felt they were not going to help in solving P vs NP, as Lance said.
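The diagonalization the commenter mentions can be sketched in a few lines. This toy Python version (the function name `diagonalize` is my own, purely illustrative) applies Cantor's construction to finitely many binary sequences, building a prefix that disagrees with the i-th sequence at position i:

```python
def diagonalize(seqs):
    """Cantor-style diagonalization: given binary sequences
    (each a function from position -> bit), return a prefix
    that differs from the i-th sequence at position i."""
    return [1 - seqs[i](i) for i in range(len(seqs))]


# Example: three sequences -- all zeros, alternating, all ones.
seqs = [lambda n: 0, lambda n: n % 2, lambda n: 1]
prefix = diagonalize(seqs)
print(prefix)  # prints [1, 0, 0]; differs from sequence i at position i
```

The same flip-the-diagonal move, applied to an enumeration of machines rather than sequences, is what drives the undecidability of the halting problem and the classical separation results of recursion theory.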

The field-splitting that Lance describes is only a moderate issue for theorem-provers, but it poses a severe problem for systems engineering.

When Lance says of the conference attendees: "We all like Turing machines, complexity has drawn many of its concepts from computability theory and this crowd all study randomness", this amounts to saying that the folks at the conference have reasonably overlapping notions of naturality.

But increasingly nowadays, systems engineers find that they sometimes don't share even the most basic notions of naturality ... a lacuna that makes communication infeasible.

These naturality-splits start early in the undergraduate education ... a student who learns from Schey's textbook Div, Grad, Curl and All That will ever-after have a different notion of naturality from a student who learns from Bill Burke's (semi-legendary) Div, Grad, and Curl Are Dead.

So, should colleges set undergraduate standards for mathematical naturality ... and if so, what should those standards be? ... or is that choice best left to the individual students/professors?

If yah want to start an argument, just ask that question!

The Introduction to Bill Burke's book is a lively polemic about field-splitting and mathematical naturality ... it is well-worth reading—and contemplating—by any undergraduate student IMHO.