Wednesday, March 21, 2012

More on Kurzweil's Predictions

Ray then emailed me, thanking me for my defense of his predictions, but questioning my criticism of his penchant for focusing on precise predictions about future technology.
I'm copying my reply to Ray here, as it may be of general interest...

Hi Ray,

I wrote that blog post in a hurry and in hindsight wish I had framed things more carefully there.... But of course, it was just a personal blog post not a journalistic article, and in that context a bit of sloppiness is OK I guess...

Whether YOU should emphasize precise predictions less is a complex question, and I don't have a clear idea about that. As a maverick myself, I don't like telling others what to do! You're passionate about predictions and pretty good at making them, so maybe making predictions is what you should do ;-) .... And you've been wonderfully successful at publicizing the Singularity idea, so obviously there's something major that's right about your approach, in terms of appealing to the mass human psyche.

I do have a clear feeling that the making of temporally precise predictions should play a smaller role in discussion of the Singularity than it now does. But this outcome might be better achieved via the emergence of additional, vocal Singularity pundits alongside you, with approaches complementing your prediction-based approach -- rather than via you toning down your emphasis on precise prediction, which after all is what comes naturally to you...

One thing that worries me about your precise predictions is that in some cases they may serve to slow progress down. For example, you predict human-level AGI around 2029 -- and to the extent that your views are influential, this may dissuade investors from funding AGI projects now ... because it seems too far away! Whereas if potential AGI investors more fully embraced the uncertainty in the timeline to human-level AGI, they might be more eager for current investment.

Thinking more about the nature of your predictions ... one thing that these discussions of your predictive accuracy highlight is that the assessment of partial fulfillment of a prediction is extremely qualitative. For instance, consider a prediction like “The majority of text is created using continuous speech recognition.” You rate this as partially correct, because of voice recognition on smartphones. Alex Knapp rates this as "not even close." But really -- what percentage of text do you think is created using continuous speech recognition, right now? If we count on a per-character basis, I'm sure it's well below 1%. So on a mathematical basis, it's hard to justify "1%" as a partially correct estimate of ">50%". Yet in some sense, your prediction *is* qualitatively partially correct. If the prediction had been "Significant subsets of text production will be conducted using continuous speech recognition," then it would have to be judged valid or almost valid.

One problem with counting partial fulfillment of predictions, and not specifying the criteria for partial fulfillment, is that assessment of predictive accuracy then becomes very theory-dependent. Your assessment of your accuracy is driven by your theoretical view, and Alex Knapp's is driven by his own theoretical view.

Another problem with partial fulfillment is that the criteria for it are usually determined *after the fact*. To the extent that one is attempting scientific prediction rather than qualitative, evocative prediction, it would be better to rigorously specify the criteria for partial fulfillment, at least to some degree, in advance, along with the predictions.

So all in all, if one allows partial fulfillment, then precise predictions become not much different from highly imprecise, explicitly hand-wavy predictions. Once one allows partial matching via criteria defined subjectively on the fly, “The majority of text will be created using continuous speech recognition in 2009” becomes not that different from just saying something qualitative like "In the next decade or so, continuous speech recognition will become a lot more prevalent." So precise predictions with undefined partial matching, are basically just a precise-looking way of making rough qualitative predictions ;)

If one wishes to avoid this problem, my suggestion is to explicitly supply more precise criteria for partial fulfillment along with each prediction. Of course this shouldn't be done in the body of a book, because it would make the book boring. But it could be offered in endnotes or online supplementary material. Obviously this would not eliminate the theory-dependence of partial fulfillment assessment -- but it might diminish it considerably.

For example, the prediction “The majority of text is created using continuous speech recognition” could have been accompanied by information such as "I will consider this prediction strongly partially validated if, for example, more than 25% of the text produced in some population comprising more than 25% of people is produced by continuous speech recognition; or if more than 25% of text in some socially important text production domain is produced by continuous speech recognition." This would make assessment of the prediction's partial match to current reality a lot easier.
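Criteria of this kind could, in principle, be written down as a mechanical scoring rule published alongside the prediction. Here is a minimal sketch of what that might look like -- the function name, the labels, and the exact thresholds are my own illustration, not anything Ray has published:

```python
# Hypothetical sketch: encoding partial-fulfillment criteria for the
# prediction "The majority of text is created using continuous speech
# recognition," so that assessment is mechanical rather than decided
# after the fact. All thresholds below are illustrative assumptions.

def assess_speech_text_prediction(share_of_text_by_speech,
                                  share_of_population):
    """Return an assessment label for the prediction.

    share_of_text_by_speech: fraction (0..1) of text produced by
        continuous speech recognition within some population or domain.
    share_of_population: fraction (0..1) of people that population covers.
    """
    if share_of_text_by_speech > 0.5 and share_of_population > 0.5:
        return "fully validated"
    if share_of_text_by_speech > 0.25 and share_of_population > 0.25:
        return "strongly partially validated"
    if share_of_text_by_speech > 0.0:
        return "weakly partially validated"
    return "not validated"

# On a per-character basis today the share is well under 1%, so even
# over a broad population the prediction rates only as weakly partial:
print(assess_speech_text_prediction(0.01, 0.9))
```

The point isn't the particular numbers, which are debatable; it's that once the rule is fixed in advance, two assessors with different theoretical views would at least be scoring the same prediction against the same yardstick.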

I'm very clear on the value of qualitative predictions like "In the next decade or so, continuous speech recognition will become a lot more prevalent." I'm much less clear on the value of trying to make predictions more precisely than this. But maybe most of your readers actually, implicitly interpret your precise predictions as qualitative predictions... in which case the precise/qualitative distinction is largely stylistic rather than substantive.

7 comments:

Yes, greater precision would lend greater seriousness to the singularity meme, if the predictions were quantitatively true.

What makes Kurzweil so popular isn't his accuracy but his boldness. No one else really tries to predict the future -- well, besides charlatans. Ray isn't crazy; anybody can see that. But is he right? It's hard to say right now. His lack of rigor is the hardest part to swallow.

If he replies that he wants to be less qualitative and sharpen his style, that would be great news. Even if he winds up being mostly wrong.

One thing I find interesting and frustrating is the way that the majority of non-singularitarian critics use the ambiguity (or in some cases the outright failure) of the specific predictions as a way to dismiss the broad sweep of the argument. The core of the message seems to me twofold: one part is about exponentially accelerating progress across an increasingly broad swath of the human project, both culturally and technologically; the other is about the emergence of AGI. Both trends appear incandescently obvious from my perspective, and it's annoying that the distraction of arguing about the specifics of the predictions obscures wider recognition of this evident fact. Ray could probably ameliorate this phenomenon somewhat by giving broader windows within which his predictions will occur, instead of exact years. It seems plausible that there is a widespread resistance to accepting the implications of the core argument, and pouncing on inaccurate predictions gives people one way to feel righteous about their incredulity when in fact they're veering away from the salient point. Of course, that's one reason TSIN is such a long book: Ray addresses the cultural mechanisms behind resistance to his ideas at great length, but they are complex, and not amenable to summary every time someone attacks his prediction record as an easy way to stir things up.
