Saturday, June 12, 2010

Stephen Wolfram (bio) has a very unhelpful habit of describing disciplines in terms of the spaces of possible knowledge within them. So he asks whether we can say that art has progressed throughout history, and says it's not clear that it has, unless you note that the space of possible works of art has been progressively filled in. This is a remarkably stupid thing to say, though. Aside from evincing zero understanding of the purpose of art, it also misunderstands how "spaces" work. You could define a "space" of, say, all possible 1000x1000-pixel JPG images and talk about those. But whether you define works of art as physical or conceptual objects, the possible space is not only infinite, but indefinite. A really useless way of thinking about anything related to the subject.
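To be fair to the image example: a pixel grid really is a well-defined, finite space, which is exactly why it's a bad model for art. A back-of-envelope sketch (assuming 24-bit color, i.e. 8 bits per channel, which is an assumption, not anything Wolfram specified):

```python
import math

# A 1000x1000 image at 24 bits per pixel is a choice of 24,000,000 bits,
# so the space contains exactly 2**24_000_000 distinct images.
bits = 1000 * 1000 * 24

# Number of decimal digits in 2**bits, computed via logarithms
# (converting the actual integer to a string would be needlessly slow).
digits = math.floor(bits * math.log10(2)) + 1
print(digits)  # a number with over seven million digits
```

Finite, enumerable in principle, and utterly useless as a way of asking whether any of those images is a good painting.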

I note this mostly because this example is emblematic of the approach Wolfram takes throughout his long presentation. He keeps talking about the space of possible programs, which of those possible programs might have intelligent properties, and whether we can find them. Now, unlike possible works of art, the space of possible programs actually is well-defined: one can devise schemes that enumerate every possible program. Even so, although this gives an interesting image of walking into a forest full of all possible programs searching for the ones that are alive and conscious, it's not a very useful way to think about intelligence, since it tells us nothing about its salient features. Imagine applying it to something radically simpler, like writing a program to do your taxes. How is thinking about "the space of possible programs" remotely useful to that task?
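The enumeration claim, at least, is genuinely true. One standard scheme (my sketch, not Wolfram's) treats programs as source strings and lists them in shortlex order, shortest first, so every finite string over the alphabet, and hence every program text, appears eventually:

```python
from itertools import count, product

def enumerate_programs(alphabet):
    """Yield every finite string over `alphabet` in shortlex order.

    Any program's source text is a finite string, so it shows up
    somewhere in this sequence -- along with astronomically many
    strings that aren't valid programs at all.
    """
    for length in count(0):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

# Walk the start of the "forest" over a toy two-symbol alphabet.
gen = enumerate_programs("ab")
first = [next(gen) for _ in range(7)]
print(first)  # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Which is exactly the problem: the enumeration exists, but it hands you almost nothing but syntactic garbage, and no hint of where the interesting programs live.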

I'm willing to bet that a lot of people will mistake Wolfram's talk for a very high-level talk when in fact it's far too low-level; if it were delivered as a lecture at a computer-science conference, it would be obvious to everyone in the room just how unserious it is. Moreover, Wolfram's talk is utterly detached from human concerns. There's no consideration at all of moral questions, just of fields of technical possibility. It's as if he's worried that he might sound like a member of the human race himself. This makes it particularly ironic when he comes back around at the end of his talk to claiming that his systems approach will eventually determine all human meaning.

The organizers really should have reined in this talk. Where the others were too short to get anything done, this one, at fifty minutes (the length of five normal slots), was unfocused and uninteresting.