Posted by Soulskill on Monday November 29, 2010 @06:20PM
from the headlines-that-sound-like-pornos dept.

wjousts writes "Well-known futurist Ray Kurzweil has made many predictions about the future in his books The Age of Intelligent Machines (1990), The Age of Spiritual Machines (1999), and The Singularity Is Near (2005), but how well have his predictions held up now that we live 'in the future'? IEEE Spectrum has a piece questioning Kurzweil's (self-proclaimed) accuracy. Quoting: 'Therein lie the frustrations of Kurzweil's brand of tech punditry. On close examination, his clearest and most successful predictions often lack originality or profundity. And most of his predictions come with so many loopholes that they border on the unfalsifiable. Yet he continues to be taken seriously enough as an oracle of technology to command very impressive speaker fees at pricey conferences, to author best-selling books, and to have cofounded Singularity University, where executives and others are paying quite handsomely to learn how to plan for the not-too-distant day when those disappearing computers will make humans both obsolete and immortal.'"

Greetings, my friend. We are all interested in the future, for that is where you and I are going to spend the rest of our lives. And remember my friend, future events such as these will affect you in the future. You are interested in the unknown... the mysterious. The unexplainable. That is why you are here. And now, for the first time, we are bringing to you, the full story of what happened on that fateful day. We are bringing you all the evidence, based only on the secret testimony, of the miserable souls, who survived this terrifying ordeal. The incidents, the places. My friend, we cannot keep this a secret any longer. Let us punish the guilty. Let us reward the innocent. My friend, can your heart stand the shocking facts of grave robbers from outer space?

He's made some pretty dubious claims about the present, too, like the whole thing about the human genome being compressible to as little as 50 MB, about an order of magnitude better than anyone has managed without cheating (e.g. by just compressing the diff against a reference sequence, or ignoring non-coding sequences). Publish the algorithm!

Yeah, but a human genome has ~3 billion base pairs. IANACS, but at 2 bits per base, one byte represents 4 bases, so it's roughly equivalent to 750 megabytes. That's pretty impressive compression to shrink it to 50 megs (which, I agree, is still a lot of data).

Then again, if you skim off the "junk DNA" (which may or may not really be junk), you can shrink it quite a bit. OTOH, this does not account for the epigenome, which is bound to pack on quite a few megabytes itself.
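The arithmetic above can be sanity-checked in a few lines (a sketch; the ~3-billion-base-pair count and the 50 MB claim are taken from the comments, not from any published algorithm):

```python
# Back-of-the-envelope check of the genome-size arithmetic above.
BASE_PAIRS = 3_000_000_000   # ~3 billion base pairs in a human genome
BITS_PER_BASE = 2            # A, C, G, T -> 2 bits each

raw_bytes = BASE_PAIRS * BITS_PER_BASE // 8
raw_mb = raw_bytes / 1_000_000
compression_ratio = raw_mb / 50   # vs. the claimed 50 MB

print(raw_mb)                # 750.0 MB uncompressed
print(compression_ratio)     # 15.0x implied compression
```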

As a disclaimer, I have no knowledge of genetics; however, I do know a thing or two about data representation, because we've had to use it as part of our research in facial recognition. There are compression techniques that are quite extraordinary. Examples are wavelets, a codebook (bag of words), PCA, etc. How much you can compress the genomic data depends on its statistics, i.e. distributions, patterns, etc., and how much precision you are willing to lose. If you represent an image as simply color value
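To make the codebook idea concrete, here's a toy scalar-quantization sketch (the codebook and data values are made up for illustration; real vector quantization would learn the codebook from the data, e.g. with k-means):

```python
# Toy "codebook" compressor: each sample is replaced by the index of its
# nearest codebook entry. Lossy: fidelity depends on how well the
# codebook matches the data's distribution, as the comment above notes.
def compress(samples, codebook):
    return [min(range(len(codebook)), key=lambda i: abs(s - codebook[i]))
            for s in samples]

def decompress(indices, codebook):
    return [codebook[i] for i in indices]

codebook = [0.0, 0.25, 0.5, 0.75, 1.0]   # 5 entries -> 3 bits per sample
data = [0.1, 0.24, 0.8, 0.97, 0.51]
indices = compress(data, codebook)
print(indices)                        # [0, 1, 3, 4, 2]
print(decompress(indices, codebook))  # lossy reconstruction
```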

Of course, when pointing out the flaws in someone else's claims about the future, it helps to get your claims about the present correct. For example, stacked chips may not be quite as common as he suggests, but they're still fairly ubiquitous. Nearly every microSD card uses a stacked-chip design, for example, as do many full-sized SD cards. So do the CPUs used in the iPhone, the iPad, and many other phones. We're only just getting started, too... there are plausible rumours that AMD is considering stacked chips

I think the general trend in predictions about future technology is that optimistic predictions often wind up being wrong (which isn't to say that overly cautious predictions are any better; see Bill Gates' 637 KB of memory claim).

I'm still waiting for my ticket to the moon from Pan Am to be a reality, 9 years after 2001, and 48 years after 1968.

For the 50 millionth time, Bill Gates didn't make any such claim about 637K, 640K, or whatever. The memory limit in MS-DOS was dictated by the CPU: the 8086, made by Intel and chosen by IBM for the IBM PC. Sorry to be off-topic, but I get sick of people slandering this guy, people who, because of IBM's and Intel's support of Linux and Apple, would never say a bad word about those companies for doing exactly what they accuse Bill Gates of.

I can predict the future of the Windows Phone and of Steve Ballmer. Fail + Fail = New M$ CEO for January! I remember when the Zune was going to kill the iPod, and the Kin was going to do something I can't remember now, and Slate, and Vista... need we remind you further?

I can predict the future of the Windows Phone and of Steve Ballmer. Fail + Fail = New M$ CEO for January! I remember when the Zune was going to kill the iPod, and the Kin was going to do something I can't remember now, and Slate, and Vista... need we remind you further?

You can't predict the future by remembering the past. History is just the shackles of the mind. What we need are some forward thinkers who are willing to make the same mistakes over and over again. I call them 'American Voters'. We think we know what we're doing and we act like we know what we're doing, but every two years we don't seem to get anywhere. Which is OK because the present is where it's at. What did the future ever do for us anyway?

FWIW: I've read the denial, and I've also several times read the debunking claiming that he did make the claim.

I'm not convinced.

OTOH, at the time he made the claim, it was basically true. He was being asked about the design of (IIRC) MS-DOS, and people were saying that it would get in the way of expanding RAM. Then (at a time when the average machine had around 16K of RAM) he said "640KB should be enough for anyone". He wasn't being unreasonable, or short-sighted (no matter how it looks now). He was being practical. And he was basic

Observe how the "futurists" of the '60s focused on the automobile and such, while basically failing to see the mobile phone or the equivalent of the internet.

Of course, Bob Heinlein had his characters using mobile phones in the 50's and 60's. Between Planets opened with the main character receiving a phone call while riding a horse in the back end of nowhere. Space Cadet had the main character receiving a phone call while standing in line for processing into the Patrol, while another character mentioned leaving his phone in his luggage so his mother couldn't worry at him...

Closest to the internet I can recall was Asimov's "The Last Question", which had characters connected (various input/output methods, from voice to direct neural feed) to world- (and later galaxy- and universe-) wide computer systems.

Asimov... Generally he foresaw one big computer. There's even an intro he wrote for a short story compilation in which he talks about it, from the perspective of 20 years or so after writing.

He says "Basically I didn't see miniaturisation coming, so I missed out on computers becoming small or ubiquitous". So he thought of computers occupying whole cities, planets or even systems. I *think* that's the situation in the story you mention too. One huge computer.

Hit up Wikipedia on the 8086 processor and you'll see where the 640K limitation came from. Further reading would inform you that the reason the limitation lasted so long was Intel's backwards-compatibility policies (a good thing, but poorly planned in that particular respect).
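For reference, the mechanics behind that limit can be sketched in a few lines (the 640K/reserved-region split is an IBM PC design decision layered on top of the 8086's 20-bit address space):

```python
# 8086 real mode: physical address = (segment << 4) + offset, a 20-bit
# quantity, hence the 1 MB ceiling. The IBM PC reserved everything from
# 0xA0000 up for video memory and ROM, leaving 640 KB of "conventional"
# memory for programs.
def physical_address(segment, offset):
    return ((segment << 4) + offset) & 0xFFFFF   # wraps at 1 MB on an 8086

print(hex(physical_address(0xFFFF, 0x000F)))  # 0xfffff, top of the 1 MB space
print(2 ** 20)       # 1048576 bytes of total address space
print(640 * 1024)    # 655360 = 0xA0000, where the reserved region begins
```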

To be fair, Kurzweil isn't that dumb. He's not suggesting that merely doing the same thing on a much faster computer suddenly, magically turns it into a different thing. In fact, the opposite is likely to be true: throwing more power at a problem tends to yield diminishing returns.

But one of the things we use our tools for is to make better tools. One of the tasks where computers currently help out is with building better computers. And one of the tasks where software tools help is in making better software.

A) it is assuming that we will always have a technological breakthrough at the right moment to allow the doubling of computing power every 18 months. Maybe this is the case, but it's still a big assumption.

B) He assumes that if we put enough cyber neurons together in a neural net, you will develop intelligence and consciousness. This may be the case, and it will be interesting to see, but I don't think you can take it for granted. He also spent about 2 pages in his book on this from a philosophical perspective, basically: "Here is what three people thought about consciousness. Anyway, moving on..." Seems like it should be a central point.

C) I think he also assumes that having such massive amounts of computing power will solve all our problems. Has he heard of exponential-time problems, or NP-completeness? Doubling computing power every 18 months equates to adding one city to a traveling salesman problem every 18 months.
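That last sentence checks out for exponential-cost algorithms; a quick sketch (using 2^n as a stand-in for the cost curve, which is roughly the shape of exact TSP via Held-Karp dynamic programming):

```python
# If an exact algorithm costs on the order of 2**n operations, doubling
# the compute budget buys about one more city.
def max_cities(budget, cost=lambda n: 2 ** n):
    n = 0
    while cost(n + 1) <= budget:
        n += 1
    return n

budget = 2 ** 30                  # some fixed compute budget
print(max_cities(budget))         # 30
print(max_cities(2 * budget))     # 31: one doubling, one more city
```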

A) It's not that big of an assumption. The exponential curve in computing power doesn't just go back to the advent of computers; it goes back as far as we could perform simple arithmetic. It's an assumption based on our long history of improving methods and fabricating machines to compute. Unless we have capped our ability to invent new methods of computing, it's a fairly safe assumption to make. Our ability to compute is probably not limited by the number of transistors we can pack on a silicon die.

B) given a large enough knowledge base and a set of really good AI algorithms, one should be able to create intelligent machines. There's nothing to prevent them from replicating, either. However, I don't think that they will ever be truly sentient. Even so, careful design will be necessary to ensure Asimov's laws of robotics are strictly enforced.

C) I don't believe Kurzweil has ever claimed NP-Hard problems would be solved by the exponential increase in computing power.

B) If you don't think that machines can ever be "sentient" but you do believe that biological organisms can be, then you must explain what magic is happening in biology which can not be replicated in other media.

Also, if you could explain at exactly which level of biological intelligence "sentience" emerges. I'll assume you would claim humans as sentient. Is that all humans? How about apes? Monkeys? All mammals? All vertebrates? Maybe if we can determine who is sentient and who isn't, we can study the diffe

B) given a large enough knowledge base and a set of really good AI algorithms, one should be able to create intelligent machines. There's nothing to prevent them from replicating, either. However, I don't think that they will ever be truly sentient. Even so, careful design will be necessary to ensure Asimov's laws of robotics are strictly enforced.

Asimov's Laws of Robotics deal primarily with social realities. E.g., "A robot may not injure a human being..." Does "human being" include a Jew? A capitalist running dog? A fertilized human ovum? Terri Schiavo? The humanity of each of these has been called into question in one social context or another. Try making a formalized specification of what a human being is.

Read the laws carefully and you'll see a significant number of other terms that are difficult to define. Asimov explores some of the

A) it is assuming that we will always have a technological breakthrough at the right moment to allow the doubling of computing power every 18 months. Maybe this is the case, but it's still a big assumption.

Intel and AMD are both doubling the width of their SIMD capabilities with AVX in the next year. This is simply a design decision, not a breakthrough. More cores is also a design decision, not a breakthrough.

When the first vector processors hit super-computing, it became plainly obvious that computational capacity could always be doubled.

Remember that capacity is not velocity; or, in more geeky terms: MIPS is not MHz, bandwidth is not latency...

There hasn't been a breakthrough in many years now, yet computational capacity continues to grow exponentially.

When the first vector processors hit super-computing, it became plainly obvious that computational capacity could always be doubled.

Always? We can't make much progress without a breakthrough in efficiency. My gaming PC needs a 1 kW power supply (and 11 fans). Double that and I'll trip my breaker. Double that again and it's past what's safe for home wiring. Double that again and you're past what's safe for normal commercial wiring, and you really need something special purpose (beyond 30 A @ 240V). Give it a decade without an efficiency breakthrough and we're talking "space age" SciFi computers that filled buildings (with attached atomic power station).

And there's only so much that can be done on the efficiency front. Beyond a certain point, additional parallelism mandates additional latency, because you need physical volume for cooling and therefore separation of components, so you're really talking about adding more computers to a network, not increasing the power of individual computers.

We already have a network of computers that exceeds the computing power of the human brain, IMO. What makes the human brain so amazing is what it can do with ~100 W of power. That kind of efficiency gain is not a given.
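The breaker arithmetic in the comment above works out as follows (a toy calculation; the 15 A household figure is a typical rating, the 30 A @ 240 V limit is taken from the comment):

```python
# How many doublings before a 1 kW machine outgrows typical circuits?
household_watts = 15 * 120    # 1800 W: a common home breaker circuit
commercial_watts = 30 * 240   # 7200 W: the "30 A @ 240 V" limit cited above

power = 1000                  # the 1 kW gaming PC
doublings = 0
while power <= commercial_watts:
    power *= 2
    doublings += 1
print(doublings, power)       # 3 doublings reaches 8000 W, past 7200 W
```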

A) it is assuming that we will always have a technological breakthrough at the right moment to allow the doubling of computing power every 18 months. Maybe this is the case, but it's still a big assumption.

That is mainly a question of timing. The main point is that in the relatively near future (~50 years) we will have the computational power to make a computer whose capabilities exceed those of the human brain.

B) He assumes that if we put enough cyber neurons together in a neural net, you will develop intelligence and consciousness. This may be the case, and it will be interesting to see, but I don't think you can take it for granted.

I believe that you can. If you simulate the processes of the brain the simulation will act as a brain.

C) I think he also assumes that having such massive amounts of computing power will solve all our problems. Has he heard of exponential-time problems, or NP-completeness?

I don't believe he assumes that. But it would of course solve a lot of our problems. And create a lot of new problems.

I think you are misunderstanding both the nature and the purpose of his predictions.

You didn't note that they are essentially unfalsifiable. You should have. If you had, you would have noticed that your first complaint was wrong. They are unfalsifiable for the same reason that the "predictions" of Toffler's "Future Shock" were unfalsifiable. They are a description of potentials, not of things that will happen, but of things that *may* happen.

It's not a question of cheating. Those algorithms are simply approximate. They can't be guaranteed to find the optimal solution, only a solution that is within some factor of optimal... or sometimes they give no guarantees at all (e.g. genetic algorithms). Those are often the solutions used in practice for NP-complete problems, because they're fast and will often get you very, very close to the optimal solution. So close that you don't really care that it isn't guaranteed optimal. Methods such a
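For a concrete instance of "within some factor of optimal", here's the textbook 2-approximation for minimum vertex cover (a standard algorithm, offered only as an illustration of approximation guarantees):

```python
# Classic 2-approximation for minimum vertex cover: repeatedly take both
# endpoints of an uncovered edge. The picked edges form a matching, and
# any cover must contain at least one endpoint of each matched edge, so
# the result is at most twice the optimal size.
def vertex_cover_2approx(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a 4-cycle; the optimum cover is 2
cover = vertex_cover_2approx(edges)
print(sorted(cover))   # a valid cover of at most 2x the optimum
```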

John Rennie is just pissed that he can't command such nice speaking fees.

I was thinking the same thing after reading the article. Jealous much, Mr. Rennie?

To those who didn't bother to RTFA, John Rennie was the editor-in-chief for Scientific American from 1994 to 2009. You know, the guy who took a formerly great science periodical and ran it into the ground by turning it into a magazine full of puff-piece op-eds masquerading as science articles.

The point isn't to be accurate; it's to be engaging. We live in an age in which it is more important to entertain than to inform. Look at all the hack prognosticators in the business and technology press who make a living making predictions – most of them are wildly off the mark but nobody cares enough to go back and call them on their failures.

The point isn't to be accurate; it's to be engaging... nobody cares enough to go back and call them on their failures.

And thus we have the modern press/news regime. No need to actually report correct information. Just report what is entertaining whether it's true or not and certainly don't waste any time trying to determine the truth of anything.

True, but I'd go further. Part of true genius is not being afraid of being wrong. A very intelligent person isn't necessarily a genius, but take that person and have him lavish his time and effort on something others think is a crock, and if he succeeds he's a genius.

So what happens when a recognized genius becomes, in effect, a *professional* genius? Even genius has its gradations. Not every genius can be a Mozart, an Einstein or a Ramanujan. Such individuals are in a different class. They needn't wo

I used to disdain all these vague futurists. In many cases, it's sure to happen in the far distant future, and after the fact a few act smart enough to have said it long before. And many times it doesn't happen close to the way that's predicted. I always tended toward the practical side of things, rather than the theoretical.

But one thing after another after another that was obvious and predictable just by applying Moore's law, still surprised almost everyone when they became reality. Things like lots of movies on a tiny chip.

I was a singularity denier, for one thing. But I have to reverse myself and admit that I'm wrong. Oddly, it was Ray, presenting to an audience in Vienna, who convinced me otherwise. The only thing about being a singularity futurist is that you've predicted what's already happened. Try living without today's technology and internet and see how far you get. It's already unclear to what extent the creators (ourselves) or that which we have created (technology) is the master. We always thought that we could turn off unfriendly robots, but we can't really turn off the internet, which is the largest robot yet (and the one that replaces most human brains for getting the best answers to things).

Ray takes a lot of flak but he deserves respect, even when you think he's wrong.

I have no problem with his synths - they sound great. I've used them in a lot of my music. I just wish he'd get back to doing what he's good at - making interesting and useful things. Note that I don't believe that crackpottery about the future is a particularly useful thing.

I'm sure he did, just as he predicted everything and everyone that did and didn't happen. He even predicted the master, Bruce Lee.

At any rate, conning a bunch of execs into a pointless training is hardly worthy of note. Not even if you get them to paint their asses blue and run around naked in the forest. As a group, or one at a time, they aren't that bright and it isn't their money.

People like Kurzweil are a service to the industry. All those self-styled experts blabbering infantile gibberish about

I can't imagine computers will make humans obsolete. There's one thing about us humans, and that is that we are quite psychopathic when it comes to exploiting our environment and dominating every other living thing. And we don't clear up our waste properly, either. I think that when the time comes, and the regular PC is an uber-conscious super intellectual being, the computers of this world will just up sticks and bugger off to some other planet. Like Mars, where with a few solar panels and a bit of ingenu

I think that when we have uber-intelligent computers that they'll basically just be a part of us, rather than some separate entity in competition with us. We'll "evolve" (well, engineer ourselves) to include artificial parts to do what the meat doesn't do well, and the tech will "evolve" to rely on the meat for the stuff the metal can't yet handle.

At some point in the future, it wouldn't surprise me if we did find a way to do away with the meat all together and that some meatless "humans" buggered off, but

We have discussed this many times. I debated writing out a lengthy post espousing the many problems with Kurzweil's predictions. Of course I (and Slashdot stories) have done this [slashdot.org] before [slashdot.org]. But you know after reading this article, I have this sort of urge to read more of Kurzweil's writings in an attempt to develop an equivalent process for identifying something we could call "Technological Stock Spiel." To some of you Sagan nuts and skeptics, you might recognize the phrase "stock spiel" as something used to designate parlor tricks and underhanded wording to get people to believe that you're a psychic. It's also been called cold reading strategy [freeonline...papers.com] and you've seen shows from Family Guy to South Park parody it.

Basically I suspect that Kurzweil is adept at standing up in front of a group of people and employing this same sort of strategy that preys on people's understanding of technology instead of their emotions. But both of those things have in common the fact that people want to believe great things. If he's talking to computer scientists, he'll extrapolate on biology. If he's talking to biologists he'll extrapolate on computer science and so on and so forth. And he probably knows exactly what to say so that more than enough people gobble that up. Because of the things that I have studied extensively through college, this man is very capable of talking like he knows just enough and using vague analogies to get people going "Yup, yeah, uh huh I see now, I want to believe!"

On close examination, his clearest and most successful predictions often lack originality or profundity. And most of his predictions come with so many loopholes that they border on the unfalsifiable. Yet he continues to be taken seriously enough as an oracle of technology...

Oh where have I heard that description before.... oh ya, here [wikipedia.org]

People in 2110 will be looking at copies of Scientific American from 2010 that have Ray Kurzweil in them talking about a Singularity and saying they want it. They'll also be wanting their flying cars, AI, and fusion power, which the singularity was supposed to give them.

Self-driving cars on the highway are on the way, if the pun is excused. There are quite a lot of experiments and development efforts. There is an EU program, etc. Sure, getting them on the roads (and integrating their systems with highways, etc.) will certainly take at least another decade.

The point is, the subject is not a joke, as the article insinuated.

That said, I'd not trust Kurzweil's claims on e.g. economics or cancer research. I might give some credibility to experts in those areas.

Self-driving cars on the highway are on the way, if the pun is excused. There are quite a lot of experiments and development efforts. There is an EU program, etc. Sure, getting them on the roads (and integrating their systems with highways, etc.) will certainly take at least another decade.

I predict that self-driving cars will be in widespread use on public roads about a year after flying cars are available in your local Ford dealer.

Nope - if you have "commuter lanes" or some other restricted lanes on your local highways, you'll see it's not a stretch to have those be dedicated to self-driving cars before much longer. The technology is nearly here. The infrastructure (always the hard part) is already here.

Another topic that's an excuse to hate on Kurzweil. I'm really looking forward to a bunch of depressing, bitter pessimists babbling about how the future is impossible and if men were meant to fly God would have given them wings. So the man's a little nutty, is that really why so many hate him? I think it's jealousy.

So, here's my problem: apparently I shouldn't "hate on" Ray Kurzweil, "hating on" is a bad thing. But I do hate Ray Kurzweil, not personally mind you, I'm sure he's an excellent individual, but in the same way that I hate any useless person in the public eye who makes their living peddling bullshit.

I suppose I could be jealous, though I'm not exactly sure what I would be jealous of. I assume he's relatively well-off, but there are plenty of rich people I have no problem with; is it his ability to set a

No, we dislike his nuttery because it moves attention from the achievable to the non-achievable. In addition, he makes what he espouses sound inevitable to many powerful people who control project funding, giving them cover to defund projects that may actually benefit mankind; after all, if the singularity is around the corner, why should they fund anything... In short, Ray is a crackpot who does more harm than good, sort of like a fundamentalist preac

I'm all for criticizing the excesses of Kurzweil, but I don't think the article is up to snuff; it reads like a personal attack on Kurzweil rather than a well-reasoned refutation of his predictions. The author seems to take the position that Kurzweil wasn't exactly 100% accurate in all the facets of his predictions, therefore he was wrong, and besides, somebody else already thought of it anyway before Kurzweil did. It's kind of a specious hit piece that cherry-picks a couple of examples and doesn't really measure up as a serious analysis of Kurzweil's record. Maybe it would be nice if someone actually did that, but this article is nowhere near it.

Futurists don't "predict the future". They discuss the past and present, talk about its implications, and get people in the present to think about the implications of what they do. They talk about possible futures. Which of course changes what actually happens in the future. They typically talk about a future beyond the timeframe that's also in the future but in which their audience can actually do something. Effectively they're just leading a brainstorming session about the present.

This practice is much like science fiction (at least the vast majority, which is set in "the future" when it's written), which doesn't really talk about the future, but rather about the present. You can see from nearly all past science fiction that it was "wrong" about its future, now that we're living in it, though with some notable exceptions. In fact, "futurists" are so little different from "science fiction writers" that they are really just two different names for the same practice for two different audiences. Futurism is also not necessarily delivered in writing (e.g. it may be lectures), and is usually consumed by business or government audiences. Those audiences pay for a product they don't want to consider "fiction", but it's only the style that makes it "nonfiction".

This practice is valuable beyond entertainment. Because there is very little thinking by government, business, or even just anyone about the consequences of their work and developments beyond the next financial quarter. Just thinking about the future at all, especially in terms that aren't the driest and narrowest statistical projections, or beyond their own specific careers, is extremely rare among people. If we did it a lot more we'd be better at it. But we don't, so "inaccurate" is a lot more valuable than "totally lacking". Without futurism, or its even less accurate and narrower form in science fiction, the future would take us by surprise even more. And then we'd always suffer from "future shock", even more than we do now.

If we don't learn from futurism that it's not reliable, but still valuable, then it's not the fault of futurists. It's our fault for having unreasonable expectations, and failing to see beyond them to actual value.

That was the most long-winded way of saying "public masturbation" that I've ever seen.

Sorry, but I have no respect for anyone describing themselves as a "futurist"; or, for that matter, as someone who's out to "get people to think". People do that on their own, when you yourself present something thoughtful.

And the only reason anyone mentions Kurzweil's lack of (meaningful) accuracy is his constant self-congratulation on how accurate he is - no one cares otherwise.

Even just the next few years, when their adjustable-rate mortgage jumps to over 10%. Or when they have a half dozen drinks even though they're driving themselves home in a couple of hours. Or when they change lanes without looking. Or when developing a product, apart from (possibly) the immediate revenue in the next quarter or two. Thinking about the future is very rare.

Really, where do you live and work, where they think about the future in more than the

Our joke about Kurzweil was that he doesn't take his "series expansion" to enough terms. What he does is look at emergent phenomena and notice the exponential growth curve (which occurs in a variety of phenomena, from biology to physics to even economics), and from that draw the conclusion that everything (or particular aspects of technology, really) will continue to grow exponentially ad infinitum, to a "singularity", etc. This is both intuitively and factually untrue, because of resource and energy constraints (however one wants to define them for a particular problem). The point is, you can actually look at the same phenomena Kurzweil claims to and notice that new phenomena/technologies only initially look "exponential", and then, for all the obvious reasons, flatten out further down the time curve, so that in the end the curve looks like a sigmoid function (given whatever metric you choose). The hard part is figuring out how quickly you'll hit the new pseudo-steady state, but it's certainly absurd to assume it never happens, which is what his absurd conclusions are always based on.
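The sigmoid-vs-exponential point can be illustrated numerically (arbitrary parameters, purely a sketch):

```python
import math

# A logistic (sigmoid) curve with ceiling K is nearly indistinguishable
# from an exponential while x << K, then flattens out toward K.
def exponential(t, r=0.5, x0=1.0):
    return x0 * math.exp(r * t)

def logistic(t, K=1000.0, r=0.5, x0=1.0):
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

for t in (0, 5, 10, 20, 30):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
# Early on the two track closely; by t = 30 the exponential is in the
# millions while the logistic has flattened just under its ceiling of 1000.
```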

Assessing Kurzweil is a good yardstick for whether a person is capable of deep thinking. He's one of the slipperiest grease poles around. Yet sadly, he's usually miles ahead of the criticisms put forward.

This article is not much of an exception. Kurzweil defines "common" as a few percent: the lower knee of the adoption S-curve. If you think habitually in exponential terms, one percent is common. What is one percent when the cost of genetic sequencing decreased by five orders of magnitude over one decade?
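Five orders of magnitude in a decade is a much steeper annual rate than Moore's law; a quick check (the figures are the parent's, not mine):

```python
# What "five orders of magnitude over one decade" implies per year.
annual_factor = (10 ** 5) ** (1 / 10)   # = 10 ** 0.5
print(round(annual_factor, 2))          # ~3.16x cheaper each year

# Compare: doubling every 18 months is a much gentler annual rate.
moore_annual = 2 ** (12 / 18)
print(round(moore_annual, 2))           # ~1.59x per year
```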

Oh come off it, we all know it's just the Fermi-estimations of a man thinking out loud. It's not science, it is, as the summary says, punditry. Give over "exposing" it, we all know it's very, very far from rigorous or even (gasp) godlike. It exposes itself, we all know it's rubbish.

But the ideas are a good enough conversation starter, and it's a possibly important idea to be talking about. So who wants to accept that Kurzweil isn't science and discuss the idea of the technological singularity instead?

The people in charge won't let that happen. It would change everything, and that would harm their profits. See how the potential of the internet, of free interchange of information, all together pushing knowledge and mankind forward, got badly crippled by copyrights, patents, lawsuits and so on. Getting to true AI is worse than just risking lives; it puts business at risk, especially if a competitor has it, so all the ideas that could push forward in that direction are patented, taken, and not able to be use

I have a problem much before that. He assumes Sony will last long enough to make the Playstation 6. He also assumes Sony will make a Playstation after the PS3. I predict Apple will simply buy Sony. There, it's easy to predict things, isn't it?

Nobody would have predicted Sonic games on Nintendo hardware 20 years ago.

He may be a hack, perhaps, but just because current computers don't work the same way as the human brain doesn't mean future ones won't. And even if they never do, yet are still able to do everything a human brain can, would it not be fair to say that they can match a human brain in terms of computational capacity?

We are often chided for comparing apples to oranges, for example, but both can be evaluated in terms of their water content, chemical composition, and structure. Or to put it another way, one might compar

No, it really isn't. Biological evolution has never had any need for a Turing machine. The Turing machine, however, came into being only hundreds of thousands of years after the human brain invented symbols. Symbols are sometimes a great way to understand things, but most people understand that a symbol isn't identical to its object. To a Turing machine, however, such a difference doesn't exist, as it has only symbols and no object at all.

Why isn't there an equal skepticism about Space Nuttery like Moon colonies, space-based solar power and asteroid mining? They are equally delusional.

No they're not, and there was plenty of skepticism about such claims when O'Neill in the 70s was proclaiming that we could be doing them all in a few years, because it was clearly technologically impossible with any reasonably justifiable amount of money. There's far less skepticism today because we can see that they could be viable in a few decades.

Similarly, I haven't seen too much wrong with Kurzweil's claims, other than that he expects things to happen within the next few years, rather than the next few decades (or centuries if you're pessimistic).

I believe Clarke once said something along the lines that near-term predictions were always optimistic and far-future predictions pessimistic, because humans expect linear progress when most things are exponential.

Why isn't there an equal skepticism about Space Nuttery like Moon colonies, space-based solar power and asteroid mining? They are equally delusional.

No they're not, and there was plenty of skepticism about such claims when O'Neill in the 70s was proclaiming that we could be doing them all in a few years, because it was clearly technologically impossible with any reasonably justifiable amount of money. There's far less skepticism today because we can see that they could be viable in a few decades.

Possible, sure. We could go back to the moon with a big enough budget. Economically viable, though?

Solar microwave satellites were fun in SimCity 2000, and I'd still like to see them operational, but I've not seen even any proof of concept devices yet.

Further out, the big question about asteroid mining I've never seen plausibly answered is: how do you make mining bulk metal in space cheaper than mining it on Earth?

The usual space-booster response is "we won't be building stuff on earth, we'll be building st

You are a nut job. You fail to recognize that propulsion technology may render "rockets" obsolete, i.e. you assume that we only have chemical rockets for all eternity, and you overestimate how long our current economic system will last, i.e. you assume it lasts indefinitely. Either humanity will eventually colonize other places (at least in our own solar system) or we will go extinct. It's a natural progression. I bet people like you complained when one of their tribesmen built a slightly bigger ship more suitable