David Brin's latest novel, and a TED talk

Nobody knows nothin', but this advanced tech and these cures you hear about can't all be National Enquirer hype. Economic necessity is the mother of all invention and creation. My personal theory is that women will keep us from going to space, as in 'I think that's enough reading. Put down that book and let's make love.' And magic is only stuff that we can't imagine and see now. Did anybody realize during Beethoven or Schubert's time what important composers they would be? They were despised. Did anybody know what a light bulb was before Edison, or a record player, or a movie, or what a Klingon was before Roddenberry? I'm sure prophets gazed at the stars and drew spacemen in suits on cave walls, and built the pyramids God knows how - that technological knowledge was lost, it seems. My father told me a sci-fi story once of an astronaut who went to the moon and peered inside a cave to see a strange green light. When he came back to Earth many years later, he went to a museum and saw a painting by da Vinci that showed that same strange green light inside a cave. So even time could be relative to the vast strangeness of not only the outer universe, but the inner one as well.

Yeah, well, when it's history and fact come talk to me about it. Until then it's as likely as the Rapture, which you can also come talk to me about when it's history and fact.

Click to expand...

See, but it can't be history because it hasn't happened yet (should be obvious, right?)... I think that's a weak argument... in fact, all the arguments against a singularity are pretty weak. First, while I have already conceded the ultimate end results are uncertain, if I postulate accelerated change that leads to "runaway" AI - and this seems to be the thing most people ARE certain about - it then follows this AI will reach a saturation point. AI experts, and then others, assumed supplanting human intelligence would mean a change in evolution where the smarter beings would replace the others.

The most common arguments against the possible singularity:

1. Knee-jerk human centrism, we are "unique": Neither machine-derived nor human-derived AI would be a proper human being, so people must reject the notion it could exist. But we have also discovered we are not necessarily unique compared to the natural world in almost any human endeavor... planets exist everywhere and many might contain life. We are also no longer the center of the universe.

Click to expand...

That's certainly a strawman. Humans are "unique" in the sense that we don't know of any species like ourselves, but not unique in the sense something like us couldn't exist elsewhere or be created artificially.

It's not that the human brain is too complex, it's that it works in a totally different way from all our modern computing technology. You may think this is a minor hurdle, but it isn't. It takes exponentially more digital computing power to simulate an analog brain than a genuine analog brain requires. They work completely differently. It's not a question of complexity, but mechanics.
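That digital-vs-analog point can be made concrete with a toy sketch: even one idealized "analog" neuron has to be chopped into thousands of discrete timesteps to simulate on a digital machine. The leaky integrate-and-fire model below, and every parameter in it, is an illustrative assumption, not measured biology.

```python
# Sketch: digitally simulating one "analog" neuron with a leaky
# integrate-and-fire model. All parameters are illustrative only.

def simulate_lif(duration_s=1.0, dt=1e-4, tau=0.02, v_rest=-70.0,
                 v_thresh=-55.0, v_reset=-70.0, input_mv=20.0):
    """Euler-integrate a leaky integrate-and-fire neuron; return
    (spike count, number of discrete timesteps needed)."""
    steps = int(duration_s / dt)
    v = v_rest
    spikes = 0
    for _ in range(steps):
        # dv/dt = (-(v - v_rest) + input) / tau, discretized with step dt
        v += dt * (-(v - v_rest) + input_mv) / tau
        if v >= v_thresh:   # threshold crossing -> spike, then reset
            spikes += 1
            v = v_reset
    return spikes, steps

spikes, steps = simulate_lif()
print(f"{steps} timesteps to cover 1 s of one neuron; {spikes} spikes")
```

Ten thousand update steps for one second of one grossly simplified neuron; the continuous system just evolves. That gap, multiplied across billions of units, is the mechanics argument in miniature.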

Come on, those kinds of arguments are so silly as to not even need rebutting. But I'm noticing a lot of your points are really just strawmen so you can make it look like there's no credible criticism of the Singularity hypothesis.

It didn't take a lot of effort to convince me. Another strawman.

Where's the exponential growth in AI? I rest my case.

The bottom line is, the Singularity as described requires strong AI. It doesn't exist. It's been "just around the corner" for decades. Instead, we've only been able to come up with expert systems, nothing we'd call a self-aware intelligence. We don't even know how to do this, because we don't know how human consciousness works.

It's not a question of how powerful our computers are. There seems to be this assumption that, if we simply make a computer powerful enough and feed it tons of information, it will become self-aware and intelligent. There is no reason to believe this. Computers are inherently deterministic. They don't just magically do things without being told to, unless they have flawed hardware/software.

Now, you could make the more existential argument that a facsimile of human consciousness is indistinguishable from the real thing, but in that case you should get back to us when you've seen one.

6. It's doomsday: Only if you assume #1. You can look at it two ways, that advanced human AI is a great evolutionary step, or, if we are cast aside through indifference, or possibly even war, then the machines will be representatives of a past human culture. I don't find either too horrible really, though the latter is not my preference.

Click to expand...

That sounds like a pretty fringe notion.

7. There is no infinite growth: You don't need infinite growth for AI supra-intelligent to man. Exponentials do indeed come to an end, but the numbers we are talking about are more than enough for the singularity to take place within the next 50 years.

Click to expand...

Who argued for infinite growth? "Infinite growth" is an oxymoron, anyway. We live in a universe of physical limits, "infinite" anything is physically impossible. Again, a strawman.

I realize there are many people who are in love with the idea that we'll all be Singularity transhumans within our lifetime, but there is no good reason to believe this. While our materials technology and chemistry are highly advanced, and I have no doubt medical technology will totally blow us away in the decades to come, our computing technology remains more or less unchanged since its inception. We still use binary digital systems with processors made up of transistors and volatile storage for memory. We've improved the scale immensely, but the basic mode of operation is the same. And building AI using this technology, in the sense most people think of it, has been a research dead-end since at least the '60s.

I have no doubt we will have some awesome technology in the decades to come, but strong AI? Without some major breakthrough in computing technology, it's not happening in our lifetimes. And as far as I understand it, the Singularity hinges very much on the existence of strong AI.

Click to expand...

No, quite honestly when I see rebuttals on websites, these are the best criticisms they can come up with! Yes, they are pretty easy to shoot down, as they have been many times before.

People keep saying the time frames are wrong, but as I said, they don't take into account accelerating returns... as I posted before, 50 years to monkey-level AI is far faster than human evolution; do you really think the explosion of info tech won't increase the rate of AI advancement? I think it's those who claim strong AI is not possible for 100 years or more who will be quite surprised.
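For what the accelerating-returns arithmetic actually claims, here is the bare extrapolation. The 18-month doubling period and the horizons are illustrative assumptions, not measured constants; the point is only how fast a fixed doubling period compounds.

```python
# Sketch: what a constant doubling period implies over long horizons.
# The 1.5-year doubling period is an illustrative assumption.

def doublings(years, doubling_period_years=1.5):
    """Growth factor accumulated after `years` of constant doubling."""
    return 2 ** (years / doubling_period_years)

for horizon in (10, 25, 50):
    print(f"{horizon:2d} years -> growth factor x{doublings(horizon):,.0f}")
```

Whether the doubling period actually holds for 50 years is, of course, exactly what the thread is arguing about.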

No good reason to believe in Singularity transhumans within our lifetimes?? Well, that's the crux of the argument, isn't it! Both the AI researchers and the cyberneticists posited the timeframe before those who posited most of the transhuman elements that have been popularized now. Human-level AI and world-level AI fit neatly into that time period. Some, like Vernor Vinge, seem to feel the timeframe is even earlier, happening around 2030.

Infinite growth: in fact this is one of the most common arguments against the Singularity. I read it in almost every counter-article. The idea probably originates in the fact that runaway AI reaches a level we can't predict, which to many is judged as "infinite". I've also read the related criticism quite a bit: that exponentials end...I believe I answered that adequately in my earlier post.

Computers eventually will not just do what we tell them to, they will learn, your assumption here is completely incorrect even in 2012. By definition, if they surpass human intelligence, they should have the capacities that we do, but at much greater speeds also. See the link below.

There is a huge difference between just saying strong AI is around the corner, and predicting it based on mathematics of computing power and speed.

It should be possible. Even naive simulation of the whole human brain would cost just $40 per hour:
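For what it's worth, here is a hedged back-of-envelope version of that kind of estimate. Every input below (neuron and synapse counts, operations per synaptic event, cloud pricing) is an assumption, and the answer swings by orders of magnitude depending on which values you plug in - which is itself part of the problem with such figures.

```python
# Back-of-envelope check of a "dollars per simulated brain-hour" figure.
# Every number here is an assumption for illustration; estimates in the
# literature vary by orders of magnitude.

def brain_sim_cost_per_hour(neurons=8.6e10,          # ~86 billion (common estimate)
                            synapses_per_neuron=1e4,  # assumed average
                            mean_rate_hz=1.0,         # assumed mean firing rate
                            ops_per_event=10,         # assumed naive update cost
                            usd_per_tflop_hour=0.10): # assumed cloud price
    """Naive estimate: (required sustained FLOPS) x (assumed price)."""
    flops = neurons * synapses_per_neuron * mean_rate_hz * ops_per_event
    tflops = flops / 1e12
    return tflops * usd_per_tflop_hour

print(f"~${brain_sim_cost_per_hour():,.0f}/hour under these assumptions")
```

Change any single input by 10x and the hourly figure moves by 10x with it, so a headline number like "$40/hour" is only as good as its assumptions.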

Ray answers (well... destroys, really) Paul Allen's criticisms; pay particular attention to the part about redundancy in design information in the human brain that makes it easier to replicate than first thought.

Extrapolation is not "fact". It's well informed guessing at most. It can be wrong.

So, in the '90s, I would turn on my computer and get online. Go to Start or an icon on my desktop and open a browser to surf. Content was generated by HTML that was either static or generated by PHP/Java/Flash. I would browse information sites and social sites and download media. And today? I do it faster. Yep, big difference.

Now, go back another ten years and you see a huge difference. No browsers, no widespread internet access, BBS's with local phone numbers rule the day. Win3.1 if you're lucky.

Click to expand...

Exponential technology in many areas is fact... that's what I was referring to. What amazes me is that we are compressing the time in which technology changes to the point where many ordinary people are noticing it; this is a huge change. It is no longer just generational (that in itself was a huge change of the last 200 years... a mere speck of human evolution). While reflecting on the capabilities of my 2006-era flip phone and my EVO Shift last year, I was amazed at the changes; it's like night and day! Not just processing power but the capabilities of apps. 6 years!

If you think there are no differences between IE 2 and my current Chrome or Firefox then you really haven't been paying attention. I do realize some people want to remain firmly rooted in safe observances of past change.

Are you talking Lawnmower Man 3 here? Not that I understood 2 at all. Or Demon Seed/Colossus: The Forbin Project? Or Frankenstein? Plus we may be due for another backwards step via a world war to ultimately perfect man, for sure. There may be higher powers on Earth preventing it from happening as well - like time travellers, or women in general - a hot flash for a woman president might be akin to a thermonuclear explosion.

Click to expand...

Some of the machine takeover movies are good in their own right, but they are still kind of unimaginative and don't really ask any of the important questions... they take only one view of the AI reaching human levels at all. The Matrix series is probably the most complex of the machine takeover genre, as far above Terminator on the conceptual scale as can be, and it goes beyond the usual pedestrian cautionary tale like Robopocalypse. The humans even debate the machine designers! There is a negotiation in the end and an uneasy peace. There are traces of Hans Moravec's speculations, tons of transhumanist elements, even philosophical questions of existence.

If you think there are no differences between IE 2 and my current Chrome or Firefox then you really haven't been paying attention. I do realize some people want to remain firmly rooted in safe observances of past change.

Click to expand...

No, you're exaggerating the virtues of the few basic improvements that have been made in Internet browsers in order to support your largely-unsupported religious faith. Browsers themselves, with their clumsy load of legacy code and design are a good example of one of Lanier's basic criticisms of cybernetic totalism: since human beings have shown no evidence of being able to write the kinds of software that would make strong A.I. possible, it's necessary for evangelists to posit a magical moment at which computers will somehow begin to write their own software and create their own successors.

There is no reason based in evidence to expect this to happen soon, if ever.

So, what is this assumption based on? Faith. Wishful thinking. Nothing more, no observations drawn from history or the real world.

Attempting to use the applicability of Moore's law to computer hardware as a starting assumption and basis for extrapolating a similar exponential growth and evolution in processes of a different sort and order exposes the essential laziness in the thinking of Kurzweil and his ilk. As others have pointed out, it's worth considering that biological evolution (the touchstone model here) has not, despite a head start of billions of years, stumbled onto the "algorithms" which would support exponentially accelerating change of this kind (despite the self-evident utility of such for the adaptation and survival of living forms).

The basic presumption underlying groundless faith in the Singularity is that it will happen now because we're special and living in a special time. Also, we don't want to die. As someone said, it truly is "the Rapture for geeks."

If you think there are no differences between IE 2 and my current Chrome or Firefox then you really haven't been paying attention. I do realize some people want to remain firmly rooted in safe observances of past change.

Click to expand...

No, you're exaggerating the virtues of the few basic improvements that have been made in Internet browsers in order to support your largely-unsupported religious faith. Browsers themselves, with their clumsy load of legacy code and design are a good example of one of Lanier's basic criticisms of cybernetic totalism: since human beings have shown no evidence of being able to write the kinds of software that would make strong A.I. possible, it's necessary for evangelists to posit a magical moment at which computers will somehow begin to write their own software and create their own successors.

There is no reason based in evidence to expect this to happen soon, if ever.

So, what is this assumption based on? Faith. Wishful thinking. Nothing more, no observations drawn from history or the real world.

Attempting to use the applicability of Moore's law to computer hardware as a starting assumption and basis for extrapolating a similar exponential growth and evolution in processes of a different sort and order exposes the essential laziness in the thinking of Kurzweil and his ilk. As others have pointed out, it's worth considering that biological evolution (the touchstone model here) has not, despite a head start of billions of years, stumbled onto the "algorithms" which would support exponentially accelerating change of this kind (despite the self-evident utility of such for the adaptation and survival of living forms).

The basic presumption underlying groundless faith in the Singularity is that it will happen now because we're special and living in a special time. Also, we don't want to die. As someone said, it truly is "the Rapture for geeks."

Click to expand...

Very well-put.

When I was thinking about this last night, I realized the same thing you posited at the end: that it's not unlike Christian apocalypticism. On top of being a belief that we live in a "special time," it also seems to be used as a magic wand to wave away the urgency of the present world's problems. There's no reason for people to worry about climate change or the energy crisis or the exhaustion of finite resources; the Singularity will happen soon enough and all our problems will be solved. It is the height of intellectual laziness.

We assume our problems will be solved in time because we've managed to get through so far. But, as they say in every Wall Street prospectus, "past performance is no guarantee of future returns." I was working on a longer post to address RAMA's specific points, but since Dennis summed it up so well, I'll just cover the part I think is the greatest weakness in the assumptions of Singularity prophets.

Kurzweil's point about accelerating returns makes no sense. It's quite a leap to compare the physical laws of the universe (which do not change over time) with the outputs of human processes. He dismisses the notion that "laws work until they don't," but it happens to be true.

You know what's fueled the past couple hundred years of human advancement? Fossil fuels. I'm not here to give you the Gospel of Hubbert and talk about "peak oil," but the fact remains that the industrial and technological boom we've experienced since the Industrial Revolution has come from the consumption of a finite resource that we will eventually exhaust. The EROI (energy return on investment) of fossil fuels is higher than anything else we have apart from nuclear, which has its own set of difficult problems. You cannot extrapolate accelerating, exponential growth infinitely into the future when it depends on resources that are finite and have no practical replacements. It is also not safe to assume research pressures will inevitably solve the problem.
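The EROI point is simple arithmetic, which a sketch makes plain. The ratios below are illustrative placeholders, not survey data; the point is how quickly the net-energy fraction collapses as EROI falls.

```python
# EROI (energy return on investment) is just energy_out / energy_in.
# The ratios below are illustrative placeholders, not survey figures.

def eroi(energy_out, energy_in):
    """Units of energy delivered per unit of energy spent obtaining it."""
    return energy_out / energy_in

def net_energy_fraction(eroi_value):
    """Fraction of gross output left after paying the energy cost."""
    return 1 - 1 / eroi_value

for name, ratio in [("assumed high-EROI source", 30),
                    ("assumed marginal source", 3)]:
    print(f"{name}: EROI {ratio}:1 -> {net_energy_fraction(ratio):.0%} net")
```

An EROI of 30:1 leaves about 97% of gross output as usable surplus; at 3:1, a third of everything produced goes right back into production. That cliff is what "harder and harder to obtain" means in numbers.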

Put simply, it's not just about technological research, it's about the characteristics that fundamentally underpin the progress of human civilization--and the heart of that is energy, energy which is becoming harder and harder to obtain, and the cost of which is increasing both in financial terms and environmental impact. We've been accelerating the use of fossil fuels for 200 years, and when they're gone, they're gone--do you really think the pace of technological development and deployment will continue unabated once we hit that particular brick wall?

Finally, what Dennis said about wishful thinking with regard to computers writing their own software is absolutely true. Do we have code generation today? Yes. Is it anywhere near as fanciful as guys like Kurzweil think it is? Hell no. It is nothing but a shortcut so developers can get away from writing the same tedious code over and over; it's not a magic bullet that lets computer systems write themselves. We've also been toying with genetic algorithms in computer science for over 50 years, and while they're useful in some very limited problem domains, they don't bring us anywhere near this notion of generalized AI. The assumption that simulating a human brain will result in generalized AI is also totally faulty, mainly because we so poorly understand how the interactions of neurons and chemicals results in the properties we see as intelligence and consciousness. How are you going to simulate something you don't even understand?
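To show how narrow the genetic-algorithm trick really is, here is a minimal GA on the textbook "OneMax" problem (maximize the number of 1-bits). It converges quickly precisely because the fitness landscape is trivial - exactly the narrow kind of domain where these methods shine, and nothing like open-ended intelligence.

```python
import random

# Toy genetic algorithm on "OneMax": evolve a bitstring toward all ones.
# Illustrative parameters throughout.

def one_max(bits):
    """Fitness = number of 1-bits."""
    return sum(bits)

def evolve(length=32, pop_size=40, generations=60, mutation_rate=0.02,
           rng=None):
    rng = rng or random.Random(0)   # fixed seed for reproducibility
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=one_max, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ int(rng.random() < mutation_rate)  # bit-flip mutation
                     for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=one_max)

best = evolve()
print(f"best fitness: {one_max(best)}/32")
```

Swap in a deceptive or rugged fitness function and the same loop stalls badly, which is why 50 years of such tinkering hasn't brought generalized AI any closer.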

That's the crux of it - that's where the evangelists are fatally lazy and followers like RAMA (in the grand tradition of apocalyptic religionists) incurious and unquestioning. What they're really positing is that we don't have to understand how to do it, because "understanding" is just about to blossom forth from the machines themselves.

Somehow.

The ways of animal and human behavior aren't based in intelligence per se but in the layering and idiosyncratic interaction of late-evolving systems over older "legacy" systems in a process which has developed to maintain internal homeostasis against inexorable decay and entropy.

^^^I gather it was Ken MacLeod who termed the Singularity "rapture of the nerds."

Click to expand...

There's no genuine reason to draw a categorical distinction between "expert systems" and "AI." It is quite true that there's no real reason to expect that computers will automatically write their own code for some sort of human intelligence somehow trapped in a quadriplegic, deaf/dumb/mute hell inside a CPU. But AI should really be read as "Alien Intelligence," (the only alien intelligence we are likely ever to really encounter, though the possibility of the flesh and blood [?] kind exists tenuously enough to serve as foundation for fiction.) An expert system that is expert enough will serve as an intelligence.

This conception of a "Singularity" really can be extrapolated from Moore's Law etc., though I think the proponents neither mean this nor are extrapolating prudently. An expert system/intelligence that can simulate a human personality is another question, taking us right back to the difficulty of modeling something you don't understand. Analog models can do this, however. Neural networks can be interpreted as doing this algorithmically (without human understanding.)
But here the Singularity advocates seem to be forgetting the distinction between a simulation and the original.

Downloading minds is basically the notion that your soul can be put in a bottle. Taking umbrage at the notion of copying minds smacks of being angry at the hubris of thinking man can create a soul. If God is truly that offended, however, surely He can defend His honor Himself.

The problem with "downloading minds" isn't one of "taking umbrage" but that it's the equivalent of "bottling unobtainium" - you've got a verb acting there on a noun that stands in for something with poorly defined or undefined characteristics, and which there's little reason to think actually exists as an entity.

One doesn't download a thing, anyway - one downloads a copy of it. That a copy of me may exist after my death is of no actual interest to me, or at least not of as much appeal as the hope that my children will exist after my death.

The term "intelligence" has always meant "human level". IBM's Watson and his brothers are probably the most advanced expert systems on the planet, but they lack two key features that make them inferior to humans.

1. They lack the ability to create novel ideas or concepts

2. They lack the ability to make intuitive leaps of logic or to make a conclusion without clearly set parameters aka they lack a "gut".

But here the Singularity advocates seem to be forgetting the distinction between a simulation and the original.

Downloading minds is basically the notion that your soul can be put in a bottle. Taking umbrage at the notion of copying minds smacks of being angry at the hubris of thinking man can create a soul.

Click to expand...

Ignoring context is arrant nonsense and arrogance, a two-fer no one should want.

Also, it is still simply an error to insist that the "mind" has to be understood to be modeled. One may insist that one doesn't believe in "souls" but if they hold a concept that has supernatural powers, such as ineffability, no one else is required to take the claim seriously.

(Technically, one might claim that the "mind" is a QM phenomenon, hence uncopiable. Or one might claim that the "mind" is inseparable from the activity of the body and uncopiable for about the same reasons as one could not copy a flame. But no one has made such claims, have they?)

I believe RAMA et al. are wrong, but insisting they are stupid or crazy is symptomatic, not common sense.

The term "intelligence" has always meant "human level". IBM's Watson and his brothers are probably the most advanced expert systems on the planet, but they lack two key features that make them inferior to humans.

1. They lack the ability to create novel ideas or concepts

2. They lack the ability to make intuitive leaps of logic or to make a conclusion without clearly set parameters aka they lack a "gut".

They also lack emotions but it's debatable if a true AI needs them.

Click to expand...

1. Most people do not create novel ideas or concepts: They restructure old ideas or concepts into new configurations. It is a question to what extent genuinely novel ideas or concepts exist.

2. Logic is not intuitive, nor does it take leaps. Conclusions can be made with fuzzy parameters, but there's no reason to want expert systems to do what people can do (at this point in time) better. What people call "gut," is not a faculty but a tag for unconscious thinking based on experience.

Emotions are the motors of thought. Expert systems and AI are not like human thinking precisely because they do not have human emotions. What will drive programs to make intuitive leaps, and then draw conclusions from unclear parameters, is unknown. Which is why, if and when such programs are devised, they will surely have "emotions" unlike those of human beings. Hence, as I said, AI should be read "alien intelligence."

Another way of putting it is to recall that "consciousness" can be substituted with "point of view." How can a CPU have a human point of view? How can its sensorium possibly be like that of a hairless primate?

But here the Singularity advocates seem to be forgetting the distinction between a simulation and the original.

Downloading minds is basically the notion that your soul can be put in a bottle. Taking umbrage at the notion of copying minds smacks of being angry at the hubris of thinking man can create a soul.

Click to expand...

Ignoring context is arrant nonsense and arrogance, a two-fer no one should want.

Also, it is still simply an error to insist that the "mind" has to be understood to be modeled. One may insist that one doesn't believe in "souls" but if they hold a concept that has supernatural powers, such as ineffability, no one else is required to take the claim seriously.

(Technically, one might claim that the "mind" is a QM phenomenon, hence uncopiable. Or one might claim that the "mind" is inseparable from the activity of the body and uncopiable for about the same reasons as one could not copy a flame. But no one has made such claims, have they?)

I believe RAMA et al. are wrong, but insisting they are stupid or crazy is symptomatic, not common sense.

Click to expand...

Who said they were "crazy" or "stupid"? I didn't, Dennis didn't. We just think this Singularity prophecy business is not so much based in science but hopeful flights of fancy.

I'm of the mind (ha!) that the mind is an emergent property of the neuroelectrochemical processes of the human brain. There may very well be a QM component--we just don't know. What we do know is that the "mind" doesn't just happen, it is the result of years of cognitive development and feedback processes. A newborn baby doesn't have a mind as we understand it--it is not self-aware, and it understands nothing beyond its basic biological urges. As its brain matures and develops, as it experiences and explores the world, it eventually gets a sense of what it is--that is, a thinking, feeling, person of will. Toddlers can't articulate this, but they demonstrate it through their behavior.

So, to replicate this, do we just need to simulate roughly the same number of neurons, throw some sensory equipment at it, and let it run for a few years? Will that give us something similar to a human mind? I suppose it's a possibility, but for the most part I doubt it. Likewise, I don't think we are anywhere near copying a mind. Sure, I guess you could map a given brain's neurons and their connections and replicate that in a computer, but there is no reason to believe that would give you something resembling the functioning mind of a person.

The term "intelligence" has always meant "human level". IBM's Watson and his brothers are probably the most advanced expert systems on the planet, but they lack two key features that make them inferior to humans.

1. They lack the ability to create novel ideas or concepts

2. They lack the ability to make intuitive leaps of logic or to make a conclusion without clearly set parameters aka they lack a "gut".

They also lack emotions but it's debatable if a true AI needs them.

Click to expand...

1. Most people do not create novel ideas or concepts: They restructure old ideas or concepts into new configurations. It is a question to what extent genuinely novel ideas or concepts exist.

Click to expand...

You're using too narrow a definition of "novel." In this context, "novel" doesn't mean "something not discovered before by anyone," but rather "something just discovered by me." A computer can be set to go out and grab all the information on the Internet, but does it understand any of it? No. Can it draw conclusions based on it? Only to the extent that it has algorithms written specifically to do so, but again, there is no understanding.

2. Logic is not intuitive, nor does it take leaps. Conclusions can be made with fuzzy parameters, but there's no reason to want expert systems to do what people can do (at this point in time) better. What people call "gut," is not a faculty but a tag for unconscious thinking based on experience.

Emotions are the motors of thought. Expert systems and AI are not like human thinking precisely because they do not have human emotions. What will drive programs to make intuitive leaps, and then draw conclusions from unclear parameters, is unknown. Which is why, if and when such programs are devised, they will surely have "emotions" unlike those of human beings. Hence, as I said, AI should be read "alien intelligence."

Another way of putting it is to recall that "consciousness" can be substituted with "point of view." How can a CPU have a human point of view? How can its sensorium possibly be like that of a hairless primate?

Click to expand...

The emotional factor is a good point. But I think the assumption is that by simulating a human brain, you end up with a human mind, which implies emotions. Again, it's a handwave.

I think, if we did turn our decision-making over to a highly advanced computer, we wouldn't like the sorts of decisions it would make. (See just about every science fiction book/film that deals with AI for examples, heh.)

I'm of the mind (ha!) that the mind is an emergent property of the neuroelectrochemical processes of the human brain. There may very well be a QM component--we just don't know. What we do know is that the "mind" doesn't just happen, it is the result of years of cognitive development and feedback processes. A newborn baby doesn't have a mind as we understand it--it is not self-aware, and it understands nothing beyond its basic biological urges.

Click to expand...

Again it comes down to defining words like "mind," "intelligence" and so forth. The baby - all beings, in fact - possess minds in the Buddhist use of the term. What emerges in human beings is mainly the ego construct, that internal model which as you say is self-aware: it imagines itself as an entity moving within the context of environment and holds to the delusion that it more or less orders and controls events.

A lot of AI research used to concentrate on replicating the ego - it's what would really be examined by the Turing test. Is self-awareness actually intelligence? Forget Zen; even some behavioral psychologists would debate that.

I'm of the mind (ha!) that the mind is an emergent property of the neuroelectrochemical processes of the human brain. There may very well be a QM component--we just don't know. What we do know is that the "mind" doesn't just happen, it is the result of years of cognitive development and feedback processes. A newborn baby doesn't have a mind as we understand it--it is not self-aware, and it understands nothing beyond its basic biological urges.

Click to expand...

Again it comes down to defining words like "mind," "intelligence" and so forth. The baby - all beings, in fact - possess minds in the Buddhist use of the term. What emerges in human beings is mainly the ego construct, that internal model which as you say is self-aware: it imagines itself as an entity moving within the context of environment and holds to the delusion that it more or less orders and controls events.

A lot of AI research used to concentrate on replicating the ego - it's what would really be examined by the Turing test. Is self-awareness actually intelligence? Forget Zen; even some behavioral psychologists would debate that.

Click to expand...

Quite true. We could have a long discussion about the "ego" aspect--what some call a "user illusion." It is the aspect of the human mind we understand least, but I think the one most essential to duplicate in order to have a machine expand beyond its own programming, purely of its own volition.

Who said they were "crazy" or "stupid"? I didn't, Dennis didn't. We just think this Singularity prophecy business is not so much based in science but hopeful flights of fancy.

Click to expand...

Somehow I got an impression of contempt and derision, backed up by few arguments (some wrong.)

What we do know is that the "mind" doesn't just happen, it is the result of years of cognitive development and feedback processes.

Click to expand...

The feedback processes in the human mind rewire the brain. Neural networks do this in a sense, which is why they were regarded with such fascination by AI researchers. But at this point, very little is allowed/attempted in the way of self-reprogramming. Additionally, the feedback in human minds pertains to the effects of their actions on the environment in pursuit of their goals. As yet very few programs are given general goals and their abilities to affect their environment are quite limited. The thing the Singularity believers do have right is that advances in processing speed and memory capacity and in programming will indeed increase this kind of feedback. And they are very likely right the increase will be exponential, at least to begin with. It is reasonable to suspect the results will be incalculable.
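To make the feedback point concrete, here is a minimal sketch (purely illustrative, not any real AI system) of the error-driven "rewiring" that neural networks do: each wrong prediction feeds back into the weights, so the program's future behavior is shaped by the effects of its past outputs.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Tiny feedback loop: each prediction error 'rewires' the weights."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred      # feedback signal from the environment
            w[0] += lr * err * x1    # the weight update is the "rewiring"
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the logical AND function purely from feedback on examples
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

The point of the toy is only that the "program" after training differs from the program before it, and the difference was driven entirely by feedback from its own outputs.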

So, to replicate this [mind]...

Click to expand...

How does replicating human minds come into this? As expert systems become more and more expert, interacting with more and more of their environment in pursuit of more and more goals, an expertise indistinguishable from intelligence is likely to emerge, just as it does in human minds. Not human personality, mind you, but intelligence.

Likewise, I don't think we are anywhere near copying a mind.

Click to expand...

You do realize that the Singularity idea entails the idea that a lot of these blue sky notions are put into effect by something smarter than us?

You're using too narrow a definition of "novel." In this context, "novel" doesn't mean "something not discovered before by anyone," but rather "something just discovered by me." A computer can be set to go out and grab all the information on the Internet, but does it understand any of it? No. Can it draw conclusions based on it? Only to the extent that it has algorithms written specifically to do so, but again, there is no understanding.

Click to expand...

Yes, it's true that AI does not yet exist. This point seems to be an assertion that computer programs can't copy the ineffable mind. But I don't know what specifically is ineffable about the mind.

But I think the assumption is that by simulating a human brain, you end up with a human mind, which implies emotions.

Click to expand...

Why would you simulate a human brain, except to study the brain? If you succeeded, would it be murder to turn off the program? Or, as seems more likely, would creating a mind which is essentially a deaf, dumb, mute quadriplegic be a shockingly cruel thing to do? I think any real AI would be remarkably alien from human intelligence. Perhaps it would not be self-aware in our sense at all. I don't think the Singularity people are extrapolating correctly but nor do I see how such possibilities can be dismissed solely as flights of fancy.

PS Forgot to actually say "something smarter" could include quantum computing. If they can make that fly, Moore's Law would be an understatement.

1. Most people do not create novel ideas or concepts: They restructure old ideas or concepts into new configurations. It is a question to what extent genuinely novel ideas or concepts exist.

Click to expand...

First, you're missing the point: most people don't create new ideas, but ABSOLUTELY NO expert system can.

Second, expert systems can only reconfigure ideas under narrow parameters. For example, an expert system could write music that sounds like Bach or Beethoven, because their styles are well known, but it can't create new music, nor can it create new genres of music.

Creativity may be one aspect of human intelligence we can't emulate simply because it can't be quantified.
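The "narrow parameters" limitation is easy to see in miniature. A Markov chain (a hypothetical toy here, not any real music system) trained on a melody can only ever emit note-to-note transitions that already occur in its source: it reshuffles a style; it never leaves it.

```python
import random

def build_markov(sequence):
    """Map each symbol to the symbols that may follow it in the corpus."""
    table = {}
    for a, b in zip(sequence, sequence[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, rng):
    """Emit a new sequence using only transitions seen in the corpus."""
    out = [start]
    for _ in range(length - 1):
        out.append(rng.choice(table.get(out[-1], [start])))
    return out

melody = ["C", "E", "G", "E", "C", "G", "E", "C"]
table = build_markov(melody)
new_tune = generate(table, "C", 8, random.Random(0))
# Every transition in new_tune already occurs in the source melody
```

The generated tune is "new" in the weak sense of being a novel reshuffling, which is exactly the distinction being argued over here.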

2. Logic is not intuitive, nor does it take leaps. Conclusions can be made with fuzzy parameters, but there's no reason to want expert systems to do what people can do (at this point in time) better. What people call "gut" is not a faculty but a tag for unconscious thinking based on experience.

Click to expand...

Actually, intuition is very important. Often we don't have enough information to make an informed decision, so we "guess" and hope for the best. You might not think intuition is important, but remember, without it we would be paralyzed. You could emulate intuition with a random number generator, but then it wouldn't be intelligence.

Wow, I simply don't have time at the moment to reply to each point in all of the posts since the other day; however, I've responded to probably 95% of the points in the science and technology forum. I don't see any new criticisms that haven't been answered by me or through links, and some of the same mistakes keep getting repeated. Some of the main claims against the possible Singularity from my top list are named again, i.e., that humans are unique and our thinking and "souls" cannot be reproduced. Call it reductionist, call it what you will, but I have no belief in souls, and I don't think there is anything about the brain we can't eventually learn that would lead to reproducing it.

One quick word: it appears to me that the critics are the ones using the religious terms, such as prophecy, rapture, et al., to make comparisons. Those terms mean nothing to me. I'm simply interested in where we are headed, and since our technology and our precision at predicting events have improved over the years, we are becoming more accurate...we have an idea where we will go...it is not based on faith requiring nothing, as with religion; it is based on numbers and supported speculation. Huge companies, industries, and educational institutions are on board making it happen. Technophilanthropists are making it happen. Information technology increasing at a proven accelerating rate is making it happen.

The term "intelligence" has always meant "human level." IBM's Watson and his brothers are probably the most advanced expert systems on the planet, but they lack two key features that make them inferior to humans.

1. They lack the ability to create novel ideas or concepts

2. They lack the ability to make intuitive leaps of logic, or to reach a conclusion without clearly set parameters; in other words, they lack a "gut".

They also lack emotions but it's debatable if a true AI needs them.

Click to expand...

AI still has a long way to go (in an apparent linear sense, not an exponential sense), but there are already AIs that learn; one of the links I posted names several of them. I certainly think that human intelligence, directed towards a specific task and then employing faster, more capable machines on the issue, can actually improve on our natural evolution (as it demonstrably has already) to create the human-level AI that will have human-level abilities...however, I need to stress that such things are not strictly necessary for the Singularity!

First, you're missing the point: most people don't create new ideas, but ABSOLUTELY NO expert system can.

Click to expand...

No, my point was that creativity isn't some mystical apprehension of the occult. Human creativity derives from fairly mundane processes. There is no reason given so far to decree that these processes or homologues for them cannot be recreated by nonorganic systems.

Second, expert systems can only reconfigure ideas under narrow parameters....Creativity may be one aspect of human intelligence we can't emulate simply because it can't be quantified.

Click to expand...

As I've said (twice, I think), you don't have to understand something to model it. But don't feel bad: you're not the first who didn't read closely.

Actually, intuition is very important. Often we don't have enough information to make an informed decision, so we "guess" and hope for the best. You might not think intuition is important, but remember, without it we would be paralyzed. You could emulate intuition with a random number generator, but then it wouldn't be intelligence.

Click to expand...

Oh, I think intuition is very important in practice. But I don't think it is intrinsically different from boring, step-by-step thinking. It's "merely" automatic thinking, in which we don't consciously articulate reasons for our conclusions, don't consciously trace the path of thought. I think intuition selects the most probable choice, though, so I don't think a random-number generator would be useful in devising an equivalent to human intelligence.
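The difference is easy to sketch. Intuition, as described here, is a frequency-weighted pick from experience, which behaves nothing like a random draw. (The scenario and names below are invented purely for illustration.)

```python
import random
from collections import Counter

# "Experience": outcomes previously observed after similar situations
experience = ["take umbrella"] * 7 + ["no umbrella"] * 3

def intuitive_choice(history):
    """Pick the option that worked most often before.

    Automatic: no reasons are articulated, but experience drives the answer.
    """
    return Counter(history).most_common(1)[0][0]

def random_choice(history, rng):
    """A random-number generator ignores experience entirely."""
    return rng.choice(sorted(set(history)))

print(intuitive_choice(experience))  # -> take umbrella
```

The intuitive picker always tracks its history; the random picker is indifferent to it, which is why substituting one for the other changes what we would call the system.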

I must say that the notion that a random number generator wouldn't be "human intelligence" suggests an implicit notion that human-level intelligence requires a human-type consciousness. Seriously, since so much human intelligence isn't aware, why is it so necessary to insist on this?

As I said before, "consciousness" can be substituted by "point of view." The supposed illusion of consciousness can somehow enable a person with closed eyes to identify the location of a limb and communicate this to another human. Quite aside from the peculiar ability of an illusion to generate true information, if I had written "The supposed illusion of a point of view can somehow...." you'd have supposed I'd lost my mind. Now, a human needs its neural homunculus, it needs its point of view to regulate its body and navigate the world. Why would a CPU need a point of view?

Possibly the most important practical obstacle to AI in the long run is our desire that programs do only limited things, i.e., the tasks we want done. By our standards, in a world of thousands of expert systems embodied in furniture, houses, and implants, constantly interacting in increasingly complex tasks, programs that start to rewrite themselves to achieve whatever bizarre aims emerge from this chaotic mass would be ghosts in the machine, fit only for exorcism.

PS The famous Ken MacLeod quote (which keeps getting modified because it's hard to distinguish nerds, geeks, etc.) is from his first novel, The Star Fraction. It's volume one of his Fall Revolution series, which so far as I know is unique for having two separate endings.