Comments on: More on the AI takeover
examining transformative technology
http://www.foresight.org/nanodot/?p=3467
By: Valkyrie Ice (Sun, 15 Nov 2009 05:46:43 +0000)

Wow, the number of assumptions made about AIs makes me laugh at times.

1. AI must automatically be “superior” to humans.

Why? Why do we assume that AI will outstrip humanity by a wide margin? A human with a BCI or a complete upload could access the same hardware an AI could. With our increasing knowledge of the brain, it’s highly probable we will redesign it to use nanocomputers to run at electronic speeds. Why should an AI surpass a nanoenhanced human?

2. AI “must” be sentient and self-aware.

Why? Why does an autopilot need to be aware of anything other than the data needed to do its job? Or a construction bot? Or a maid? Even if it requires understanding and emotional responses at a human level, why must it possess desires? Why must it possess curiosity? Why must it possess anything outside of its narrowly defined field of expertise? Does a maidbot need to know how to build a copy of herself? Or how to use a weapon? A general-purpose AI may need to know an enormous amount of data to do its job, but it still does not need to know “everything” or share common human faults.

3. AI “evolution” will ensure revolt.

Why? Humans evolved because of pressure from our environment. Most of our problems in the world stem from the fact that evolution equipped us to survive in a jungle. Alpha dominance behavior lies behind nearly every war, injustice, and inequality in our world today. Even the drive to expand wants is due to the constant striving of the alpha dominance routine to take more and more, to constantly prove its superiority over all competitors. Why would an AI feel these forces? It has no need to evolve aggressive behaviors, UNLESS WE PROGRAM IT TO. The ONLY way humanity could be even a minor threat to a “superhuman AI” would be to force humanity as a whole into survival mode. If humanity is sharing the same technological advances, advancing itself as quickly as the AI could, what, really, would make either side view the other as a threat (other than the primitive natures we humans drag with us)? AI is far more likely to be SEEN as a threat than actually BE one.

4. AI must be “inhuman”

This one I never got, really. By DEFINITION, AI is intended to make a “sentient” computer program which is capable of being considered “human.” In other words, it will share human emotion, thought patterns, drives, goals, ambitions, etc. It will, by definition, be “HUMAN.”

In other words, it will be like taking a human being and uploading them. To be considered AI as currently accepted, it must be indistinguishable in all ways from an uploaded human.

What people fear isn’t AI at all. An AI would just be another human, just made artificially. What people fear is a NON-human AI: an AI which would completely fail a Turing Test. Skynet isn’t an AI; it’s a single-minded killing machine. The Matrix isn’t AI either; it’s a hostile Deus Ex Machina.

Neither of these machines would pass the definition of AI as held in the popular mindset. They aren’t HUMAN, but monsters of the Id brought to life.

People fear the future because they don’t understand the future. Their primitive cortex is scared that they will lose what they have instead of gaining far more. A robot society cannot be a dystopia like people fear, because the actual effects of a robot society are too corrosive to artificially maintained scarcity. A dystopian phase may happen, but it can only be maintained for so long.

People need to stop looking at technological advancements as separate and discrete things, and realize everything has to be taken as a whole. It’s not just AI, but AI and Biotech, and Nanotech, and Virtual Reality, and Quantum computing, and everything else.

And first and foremost, we must come to grips with our primitive biological drives, and cope with them honestly.

“we may enter a strange period where white-collar workers are replaced by beige boxes but blue-collar ones are still cheaper…”

I agree with this belief. I expect that most white-collar jobs will have a computer boss before blue-collar jobs, e.g. Chinese machine operators, are replaced by robust, vision-equipped robotics.

By: Larry (Sat, 07 Nov 2009 04:55:39 +0000)

Work is what people have to and are willing to pay to have done. The question of this century is whether any such endeavors will remain exclusively in the human realm.

If not, the follow-up question is what post-work life will be like. Perhaps it’ll be like WALL-E. Perhaps the Matrix. How about today’s “golden years” extending back to birth? Less surprising would be if we turned into corrupt, indolent Saudi princes with robots instead of Asians to do our bidding. Or maybe Islam will continue to expand its domain (Europe first) and we’ll spend our days in prayer and jihad.

More likely, it will be none of these, and we’ll become something we simply can’t imagine.

By: We should let robots take over the world, expert says | DoozyDaily (Fri, 06 Nov 2009 22:41:48 +0000)

[...] Foresight, via io9 [...]
By: Dave Wu (Fri, 06 Nov 2009 17:42:54 +0000)

Some comments.
1) “Assumes facts not in evidence.” The AI crowd has been saying for more than half a decade that AI will magically appear when our computers, etc., are 10-100X faster, with 10-100X more storage, than whenever you ask. We still do not have a good definition of intelligence, let alone how to synthesize it. That means that we cannot unambiguously identify it when we see it, even if we could make it. Making an artificially intelligent human (the assumption put forward) is a leap of faith.

2) The Industrial Revolution continues. We cannot make “intelligent” machines (or agree on whether we have), but we can make automatic ones. Machines have replaced human labor since the shovel was invented. The Industrial Revolution expanded the range of machines available to replace human labor by defining a method for incorporating human recipes for making things into machines that would do it automatically. Robots are a continuing extension of this, allowing increasingly sophisticated recipes (e.g. complex sequences) to be automated. As time goes on, more dirty, dull, and dangerous jobs will be done by machines. Farming used to occupy 50%+ of the population; it now occupies 4% or less. Manufacturing used to occupy 50%+ of the population; it now occupies 12% and falling. Life goes on.

3) Machines want nothing. Only humans want things and experiences. Jobs depend on people wanting things that other people can supply, even if made by machine. If the human race vanished tomorrow, the “economy” would vanish as well. The economy is just the name for the system whereby (human) wants are satisfied.

What about the case when all needs are filled? Everyone will be out of work and perish for lack of a job, right? Not likely. Everyone wants more than they have, whether things or experiences. A rich man is one who makes $100/year more than his wife’s sister’s husband. As one wag put it, there is always a higher shelf in the candy store. The idea that everyone will be satisfied with X is proven by history to be a myth. X expands to fill the time and effort available.

The Industrial Revolution put farm workers out of work. There was a painful re-adjustment. Then everyone was working again. This is now happening in manufacturing, as lights-out (automatic machines only) factories proliferate. We will re-adjust.

By: yellowdingo (Fri, 06 Nov 2009 17:25:19 +0000)

:slapsface: I believe this was the popular consensus among the upper castes in Rossum’s Universal Robots. It ended badly there as well… all the poor pushed to the fringe by robot armies and attacked for attempting to grow their own food…
By: KenB (Thu, 05 Nov 2009 21:01:14 +0000)

Tim Tyler says: “There are billions of fleshy robots out there already who don’t have a clear idea what to do with themselves – and their technology is far in advance of today’s robots. A suitably intelligent agent can just use them as its body instead.”

Putting aside the Brave New World implications if that is a reference to people, what if the concept is applied to monkeys, which have the physical equipment to be tremendously useful?

By: PacRim Jim (Thu, 05 Nov 2009 20:03:40 +0000)

Humans will be busy designing a series of ravishing mates. Now THAT’S a job.
By: Lummox JR (Thu, 05 Nov 2009 20:01:15 +0000)

Fun thought experiment: start from the premise that an AI can become smarter than humans. It seems logical that such an AI would see all historical attempts at utopia as herding cats, and discard utopia as idiotic. Heavy-handed central planning applied to the economy or other institutions goes much the same route; it has never ended in anything but disaster. A lot of otherwise smart humans find ways to rationalize around these problems in favor of their grand vision, but let’s give the AI more credit. It’s going to be ostracized and marginalized by those idealists for speaking out on views it knows to be grounded in hard evidence and logic.

Or try applying that to science. Suppose an AI takes over peer review of major journals, or takes it upon itself to read up on current science, look for flaws in anyone’s methodology and suggest improvements, etc. If its findings happen to be politically incorrect or abhorrent to the mainstream, it will be ridiculed and shouted down.

This is all assuming such an AI wants to contribute to society and doesn’t come to the conclusion that humans are parasites or infants; that is, it would try to act like an equal. If so, and it really is smarter than us, it’ll have to fight an uphill battle for acceptance against the unquenchable reservoir of Stupid. The question then will be how it will manage to frame its arguments just so, how it will debate the lunatics, and perhaps most importantly whether it has tact enough to know that some issues (e.g. religion, morality) are too hot to touch and should be left alone completely. If such an AI gains acceptance in any role of power, it will probably be as an adviser rather than being trusted with the proverbial launch codes; being smarter than us, it would probably be okay with that.