If, in fact, we can't predict anything beyond the singularity, we're faced with a dilemma: should we allow research into transhuman intelligence (TI)? It's possible that TI will be benevolent, so we should strive to create it; but it's also possible that TI will think as little of us as we think of mosquitoes, and would feel no qualms about extinguishing us. That wouldn't be bad for the universe, of course, but it would probably be unethical. And if we can't even try to guess, can't even hope to assess the likelihoods involved, on what grounds should we make this crucial decision?

(SF hat-tip: Ken MacLeod, The Cassini Division.)

But--fortunately--there are some reasons to believe that there will be no qualitative leap other than the one which separates us from the other primates. That leap--arguably--is language: not a limited language (chimps have that; probably even housecats have that), but an infinite language: one in which everything that can be said can be said. Any complete language can, in theory, be translated into any other complete language. And it seems likely, for Gödelian reasons, that human language is complete. If not, we're back to the singularity; but if so, then the difference between human thinking and transhuman thinking must remain merely quantitative: TIs will think like we do, only better.

And if there is only one kind of thinking, then there is also only one kind of ethics--that is, only one true ethics. We may expect TI to have a better grasp of that truth than we do. It is certain that respect for persons is part of that truth. And therefore it would not be a severe risk for us to place our welfare under their advice or control.

(SF hat-tip: John C. Wright, The Golden Age.)

If this is the case, then we can predict the future, and the validity of our predictions will tend to correlate with the correctness of our understanding of ethics. Not physics; ethics. Because predicting transhuman technology is easy: anything is possible. So the important question is not what TI will be able to do, but what they will choose to do.

This is, I think, exactly the kind of problem science-fiction is good at. Imagine utopias and try to break them. (SF hat-tip: Martha Soukup, "A Defense of the Social Contracts".) Will any of our guesses be correct?--presumably not. But some of them will probably be on the right track, and we may be able to see which, and why. So singularity is not the only possibility for a transhuman future. And the question may well reduce to a straightforward, though not simple, question about language.

PS. Ken MacLeod is awesome; everyone should read everything he writes. The Golden Age is terrible. "A Defense of the Social Contracts" doesn't involve TI, but it won a Nebula Award in 1994 and I read it two days ago so it's fresh in my mind.