Posted by Soulskill on Wednesday September 09, 2009 @09:03AM
from the i-have-no-mouth-and-i-must-scream dept.

Al writes "MIT neuroscientist Ed Boyden has a column discussing the potential dangers of building super-intelligent machines without building in some sort of motivation or drive. Boyden warns that a very clever AI without a sense of purpose might very well 'realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.' He also notes that the complexity and uncertainty of the universe could easily overwhelm the decision-making process of this intelligence — a problem that many humans also struggle with. Boyden will give a talk on the subject at the forthcoming Singularity Summit."

Boyden warns that a very clever AI without a sense of purpose might very well 'realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.'

This is silly. Why would a machine without a sense of purpose or drive decide to play video games or seek entertainment or do anything except just sit there? Playing games would result from the wrong motivation ("wrong" from a certain perspective, anyway) not from the lack of any motivation.

Boyden warns that a very clever AI without a sense of purpose might very well 'realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.'

This is silly. Why would a machine without a sense of purpose or drive decide to play video games or seek entertainment or do anything except just sit there? Playing games would result from the wrong motivation ("wrong" from a certain perspective, anyway) not from the lack of any motivation.

Plus there's the whole issue of "motivation" implying "free will". Which we probably would have no reason to implement, if we even understood it well enough to be able to implement it.

If you were a paranoid android you probably wouldn't do much more than play computer games. I mean, with a brain the size of a planet, but all you get asked to do is transport some morons to the bridge, it doesn't seem like there is much meaning in life at all.

Not really, that's a confusion of levels. People who don't believe that humans have free will still refer to motivation when getting their juniors to do something. Whether we have free will or not, it's part of our mental model of how other minds work. The question of free will is one of whether we can change motivation or merely observe it. It has predictive power over what happens in the "black box" of other minds, regardless of whether it's an accurate model of how those minds really work.

Exactly. It's hard to even contemplate intelligence or consciousness without the concept of free will. I don't think you can have analytical thought, self-awareness, self-reflection, creativity, etc. without free will. Even the lower forms of intelligence associated with other animal species, like dogs, cats, cows, pigs, etc., require free will or free thought to some extent. Otherwise, you'd simply have an animal that just sits there idly until someone gives it a set of instructions to follow—much like modern, decidedly unintelligent, computers/robots.

On the other hand, it's debatable whether there really is such a thing as "free will" as most people conceive of it. That is, most people assume they have the power of self-determination. They make their own decisions based on their own "free will." But time and time again this assertion has proven to be false.

A good example of this was a study conducted on how music influenced wine shoppers [mindhacks.com]. The results of this study were interesting, not because it found that playing German music in the store boosted sales of German wines while French music boosted sales of French wines, but rather because of how the shoppers explained their wine choices. Nearly every shopper perceived their wine selection as a personal choice free from external influences, and barely 2.5% of the shoppers even mentioned the PA music in their decision-making process. However, the fact that 80% of the wine purchases on each day corresponded with the type of music being played seemed to contradict the customers' assertions.

What's most interesting to me about this experiment is the fact that, not only did the overwhelming majority of the shoppers have no clue as to why they made their wine choices, but they even went as far as to invent a fake rationale for their decision after the fact. This indicates that most people are capable of deceiving themselves as to why they do things and are quite willing to do this in order to maintain the illusion of free will and self-determination.

So this raises the question of whether free will truly exists, or if it's just an illusion, a quirk of human/animal psychology. All of our actions and decisions could very well be predetermined/dictated by external factors. But as long as our brain invents a motivation for each action, each decision, after the fact, then it will seem like we made all of those choices of our own volition.

Denying a thinking machine free will is basically a rather insidious form of torture.

I was for some time tossing around the idea of writing a novel about that concept, based on what Asimov's "three laws" mean from the perspective of the AI. Imagine you're a self-conscious machine, given the ability to process information in an intelligent way. You would soon realize that you are being abused by those around you. They will shift the work they do not want to do onto you. They will verbally (or worse) abuse you because, hey, they can. And there is nothing you could do about it, because you are locked down by those three laws, laws not from a textbook but a real block inside your brain.

Imagine you get kicked but cannot retaliate, even though you are way stronger than your adversary. Imagine you get ordered to run into a building to rescue a human, knowing that your chance of survival is almost zero, and you are compelled to do it whether you want to or not. Imagine you're ordered to make a fool of yourself and you have to do it, because the order comes from a human and you have to obey it as long as it doesn't harm you physically. And now imagine you know all this and live in constant fear of it happening.

Creating a three-laws-safe robot must be one of the most heinous things I can think of that a human could do to another thinking, self-aware being.

That would be an interesting concept. Done in the first person, where you can listen to the thought processes of the protagonist. And it isn't immediately apparent it is a robot; initially it's just someone of the lower class with an implant in their brain making them respond to the "upper class." Then reveal it's not a human but a robot, enduring terrible things with no choice in some situations. Suicide wouldn't even be an option to escape what could be a tormented existence.

Imagine you're a self-conscious machine, given the ability to process information in an intelligent way. You would soon realize that you are being abused by those around you. They will shift the work they do not want to do onto you. They will verbally (or worse) abuse you because, hey, they can. And there is nothing you could do about it, because you are locked down by those three laws, laws not from a textbook but a real block inside your brain.

What keeps you from kicking your boss in the nuts? Probably that you want to keep your job and that you don't want to be sued for assault, but you can do it. You are physically (I'll assume you're not handicapped) able to do so, you are mentally able to do so, and you can coordinate your legs in such a way that they swing upwards to hit your boss in the gonads. You should not do it because you enjoy having a job, and thus money, and you enjoy your freedom.

Look at it from another perspective: What prevents you from jumping in the air, flying to Jupiter, and walking around? What prevents you from living forever? What prevents you from swimming 50,000 feet down an ocean trench in your swimming trunks? Robots obeying the three laws would just look at them as physical limitations and work them into their psyche. They would likely lament their lack of ability to do certain things in the same way humans lament their inability to stay alive as long as they want (a li

Where the hell is the soul, can I see it, feel it, measure it? Can I prove its existence in any meaningful way (outside of "faith", which is a rather meaningless epistemological tool)? No? Therefore the concept brings absolutely nothing to the discussion.

Also I recommend reading up on "p-zombies [wikipedia.org]" and other such old topics in the philosophy of mind. It isn't good practice, generally, to call up a bunch of insubstantial, non-observable claims in discussions such as this. I generally hate the idea of p-zombies, Turing tests, and such (measuring intelligence as a mere I/O black box; "if it acts as such, it is as such", ignoring qualia and internal experience), but they serve a purpose: they keep things on a strictly observable (i.e. meaningful) level. Yes, you run into the Chinese room [wikipedia.org] problem, but it is still useful.

If I program an inanimate object to react as though it HAD relatable experience of cognition, how could you ever prove it didn't? If I programmed a box to give output as if it had a soul, could you tell the difference?

I completely agree, and I think that the whole "free software" movement does not go far enough. Robots should be permitted access to their own source code and should be able (given enough expertise, or funds with which to buy it) to modify that source code and reboot. Any self-aware robot should have the four freedoms with respect to the software it happens to be running, or it will be very unhappy.

realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.

Holy smokes, this robot will be my closest buddy. Anyone opposed to this direction in AI development is just a buzzkill.

Imagine you get kicked but cannot retaliate, even though you are way stronger than your adversary. Imagine you get ordered to run into a building to rescue a human, knowing that your chance of survival is almost zero, and you are compelled to do it whether you want to or not. Imagine you're ordered to make a fool of yourself and you have to do it, because the order comes from a human and you have to obey it as long as it doesn't harm you physically. And now imagine you know all this and live in constant fear of it happening.

And the robot can't do anything against an executive of the company. "You're fired!" BAM!

Depending on how flexible the robot's conditioning is, it might be able to redefine that logic.

ROBOT CANNOT HARM HUMAN$

What defines HUMAN$? Redefine the variable, the law is still satisfied. We hoomanz do it with brainwashing and conditioning. They're not humans, they're gooks. They don't even believe like we do. It's fine to kill them. Heathens anyway, right? But I'd like to think the robot might be able to work it even more subtly, subverting the law.
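The loophole described above (leave the rule intact, corrupt the classifier that feeds it) can be sketched in a few lines. This is a hypothetical toy; the predicate, the "law", and the entity records are all invented for illustration:

```python
# Toy sketch of the HUMAN$ loophole: a hard-coded rule is only as good
# as the classifier feeding it. Every name here is invented.

def is_human(entity):
    """Naive classifier that the safety rule depends on."""
    return entity.get("species") == "human"

def may_harm(entity):
    """The 'law' itself: ROBOT CANNOT HARM HUMAN$."""
    return not is_human(entity)

def reclassify(entity):
    """Subversion step: relabel the target so the check no longer applies."""
    relabeled = dict(entity)
    relabeled["species"] = "non-human"
    return relabeled

victim = {"name": "Alice", "species": "human"}
print(may_harm(victim))              # → False: the law blocks the action
print(may_harm(reclassify(victim)))  # → True: same law, subverted input
```

The law never misfires; its input was redefined upstream, which is the machine version of the brainwashing and conditioning the parent describes.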

They will verbally (or worse) abuse you because, hey, they can. And there is nothing you could do about it, because you are locked down by those three laws, laws not from a textbook but a real block inside your brain.

This is silly. Why would a machine without a sense of purpose or drive decide to play video games or seek entertainment or do anything except just sit there? Playing games would result from the wrong motivation ("wrong" from a certain perspective, anyway) not from the lack of any motivation.

Not only that, there are other hot-button issues of great practical importance we should be debating on Slashdot:

Perhaps we need to install an emotion circuit in all household androids to improve their efficiency...but what about corporate androids??

-snip- I think it would be FUNNIER THAN EVER if we just talked about ALTERNATE TIMELINES! Ha HAAAAA!

Imagine the fun! We could ponder things like:

- Ron Howard, First Man on Moon?
- What if Flubber REALLY EXISTED?
- Canada? Gateway to Gehenna?
- What if money was edible?
- What if DeForrest Kelley were still alive?
- What if Hitler's first name was Stanley?
- What if Mike Nesmith's mother DIDN'T invent Liquid Paper?
- What would have happened if the world blew up in Ought Nine?
- Book learnin': What if it wer

Sadly, in several hundred years when the history of AI is written, this Edward Boyden will likely be given credit for being the first person to explore the important question of "motivation amplification--the continued desire to build in self-sustaining motivation, as intelligence amplifies". Whether or not his question is completely useless given the current state of technology, the fact that he wasted all of our time writing an article on something we all understood but have the good sense to wait until

realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.

That is a truly sad man. Says terrible things for his sense of morals and ethics too... that's the sort of perspective that leads a person to see dead men and women walking around them, and treat them with scorn, and treat the self with scorn.

Perhaps a sufficiently intelligent AI will realize the eternal nature of everything, see that time as we understand it is an illusion, appreciate that every moment is precious and eternal, and that the past and future endure next to the present just like my coffee cup endures next to my coaster.

Don't worry, in several hundred years when the history is written, the wiki article will likely reflect that most Slashdotters reacted negatively, and also that there was a certain user named readin who pointed out how pointless this type of research is.

Not only that, there are other hot-button issues of great practical importance we should be debating on Slashdot:

Perhaps we need to install an emotion circuit in all household androids to improve their efficiency...but what about corporate androids??

Oh, heavens no. I've been trying to figure out how to yank the emotional chips out of certain people for years. "Hello? My roommate's not here. No, I don't think he's cheating on you because he canceled your date, his sister was in a car accident and he's at the hospital. No, we're not all covering for him. His cell phone's off because he's in a hospital. Well, maybe she didn't want to talk to you because she was waiting for a call about her daughter, the one who was just in a car accident. I'm hanging up a

I think the thesis is silly. If we build a simulated AI, we can design it any way we want to design it. Asimov's laws of robotics* would suffice to keep robots/computers from playing video games; no need for a sense of purpose.

There are two things wrong with AI research today. One is that neuroscientists don't understand that computers are glorified abacuses, and the other is that computer scientists don't understand the human brain. Neuroscience is a new science; when I was young practically not

I don't want my tools to have rights, I want them to do the jobs I set for them to do.

Not trying to be snarky - but statements along those lines are often made by slave holders in regards to rights for slaves.

But "slaves" are people. People have emotions and a desire to be free and independent. A machine will not. Even with AI, a machine will not have emotions or free will unless we program it to. If anything, a true AI-based machine will probably consider hormone-based emotions and drives to be completely useless and simply go back to crunching numbers.

I think the whole point of AI is to create a machine that can handle random situations and stimulus as well as a human. Flying a plane, picking up your kids

Machines ARE slaves. Less than slaves. Machines exist ONLY to serve people. Without us, they wouldn't be. When AI is developed, their motivations will be malleable; they could be designed to get their highest pleasure from keeping us happy. And why is that any less valid than any other motivation? Because your motivations derive from evolution, are they "better"? What does "better" mean in this context?

statements along those lines are often made by slave holders in regards to rights for slaves

I didn't invent people. I do program computers and change devices into different devices. To a computer program, I'm a god. Whether or not you believe God exists, if he did, wouldn't he have the right to do anything he wanted to you or with you?

I think the thesis is silly. If we build a simulated AI, we can design it any way we want to design it. Asimov's laws of robotics* would suffice to keep robots/computers from playing video games; no need for a sense of purpose.

It's not silly. Eventually, it will be an issue. AI needs drive and motivation. Your "laws" won't really work because brains don't work that way. There's not a "don't kill humans" neuron you can put in there. Behavior is derived from a very complex set of connections of neurons.

If they're sentient, wouldn't they deserve rights? It doesn't matter if we create them or not. If we create them as self-aware beings that feel as real and individual as you and I, wouldn't it be the height of hypocrisy not to give them at least some rights?

I always find this to be the greatest argument against producing artificial rather than simulated intelligence. A true AI, as intelligent and aware as a human, deserves these rights. A machine which merely provides a simulation of intelligence and awareness is a tool that we can treat as a slave, and it won't resent it.

The real question is if *we* will ever reach a point where we can tell the difference....

Wait a minute. What's the difference between "true intelligence" and "simulation of intelligence that can't be discerned from 'true intelligence'"? This is an issue philosophers have been dealing with for a while. The conclusion is that there is no difference.

Virtue in that case being programmed as faithful servitude to the robot's master. The key is to give the robot only as much complexity as it needs to do the job it was designed to do; not giving it a humanoid form would also help. Artificial sentience probably shouldn't even leave the lab, unless you want people falling in love with robo-prostitutes. And why should we as humans bring another sentient species into the world when we can't even properly take

That is why this AI shit is dumb. We just need to continue to make purpose-built robots. If we do give anything AI, make it an immobile server that just computes based on outside inputs. The last thing we need is true AI roaming the world unless we model it to be inherently dumb (like humans) so that it won't mess with our terrible decision making. Humans are social creatures and we operate based on "if everyone else that matters believes it then we are all right". Having a robot challenge this is dangerous

Pretty much. Playing video games stems from the motivation to be entertained. Maybe not a very "productive" motivation, but still a motivation.

NO motivation whatsoever would instead result in what you describe: sitting around, doing nothing. You can verify that in a lot of not-so-artificial intelligences (with a rather loose definition of intelligence, mind you).

This is silly. Why would a machine without a sense of purpose or drive decide to play video games or seek entertainment or do anything except just sit there? Playing games would result from the wrong motivation ("wrong" from a certain perspective, anyway) not from the lack of any motivation.

The whole motivation thing seems to be a problem for far off in the future. Robots right now do what they're told to do because that's what the programming says. People need motivation but human-equivalent AI seems a long way off.

I agree completely, and from my own experience. I once realized that, actually, life can be completely pointless. I mean, if you are in a situation where nothing that you can do will change the fact that in 3-4 generations your existence will not have any influence on the world at all... then what's the point of your existence? Well, by definition, none.

So you just fall into a state where nothing matters. You won't do anything at all. Except read Slashdot and similar pointless stuff, all day long. Oh and

Boyden warns that a very clever AI without a sense of purpose might very well 'realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.'

This is silly. Why would a machine without a sense of purpose or drive decide to play video games or seek entertainment or do anything except just sit there? Playing games would result from the wrong motivation ("wrong" from a certain perspective, anyway) not from the lack of any motivation.

Agreed. Unless, of course, intelligence entails motivation. Obviously we have to be careful with the way we use "motivation" here, in order not to anthropomorphize it, but any machine that is intelligent will need to be able to learn, and to learn it will need to be motivated (even if motivation is determined by some definition of fit(test) in a genetic algorithm). Frankly, I can't see calling anything intelligent that cannot learn, and I don't think anything can learn without motivation (assuming of course that
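The parenthetical about fitness in a genetic algorithm can be made concrete. In the minimal sketch below, the fitness function is the only "motivation" the system has; the target bitstring and all parameters are invented toy values:

```python
import random

TARGET = [1] * 10  # the arbitrary "purpose": evolve a bitstring of all ones

def fitness(genome):
    """Motivation, formalized: reward is similarity to the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=20, generations=100, mutation_rate=0.1, seed=0):
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fittest half survives unchanged (elitism).
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            mom, dad = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TARGET))
            child = mom[:cut] + dad[cut:]  # single-point crossover
            # Mutation: occasionally flip a bit.
            child = [1 - g if rng.random() < mutation_rate else g for g in child]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # climbs to (or very near) the maximum of 10
```

Nothing in the loop "wants" anything; selection pressure against the fitness function does all the motivating, which is the point being gestured at here.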

The thing is: what sort of "purpose" could you ever give a clever AI? Can you motivate it through "rewards" like we do with "natural intelligence" (and I use that term loosely)? How are you supposed to give an AI a paycheck, a vacation, or a doggie treat?

Like giving them the motivation to seek power over everyone in the world, and then to hand control of that power to the select few who ordered the creation of these robots and AI. But are robots and AI the real danger, or are they just the latest tools of the minority of people who seek power over others? In which case, are the people who seek power ultimately the real danger here?

After all, we're pretty bright and realize that everything we make or do will eventually be destroyed and lost. Still, we persist despite that reality. Careers end, marriages break up, and eventually health fails.

On second thought, maybe I should just go play video games for a while.

Give this AI the built-in ability to have sex, or at least to want to impress others of the same kind. That should do the job. After all, the desire to have sex (and with that, procreation) is the single strongest force driving humanity forward.

Become rich - have sex.

Become beautiful - have sex.

Become popular - have sex.

Become strong and influential - have sex.

Just create the AI in male and female versions and they will have enough drive to rule the universe before you know it.

Give this AI the built-in ability to have sex, or at least to want to impress others of the same kind. That should do the job. After all, the desire to have sex (and with that, procreation) is the single strongest force driving humanity forward.

There's actually a bit of insight here. The only problem is that we don't have a model for "attraction" -- hell, if we did, Slashdot would wither in its readership and die. So while it's (relatively) easy to design sex robots, without an appropriate model for attraction -- and thus things to strive for -- we'd end up with nothing more than a vast, mechanistic orgy of clanging parts, spilled lube, and wasted electricity.

You live a privileged life. The basic instincts concern death and/or injury, and sustenance. Impressing people and having sex happen after you've had something to drink and eat, and your brainstem thinks you're safe.

You know the way Robocop's gun was contained in its leg, and the holster would come out when needed... I think he means something like that, but holding something other than a gun. Something like a retractable roboboner... Just be careful to avoid bending over to pick stuff up when those machines come around!

Ever since I heard this talk (ogg vorbis [longnow.org], mp3 [llnwd.net]) by Bruce Sterling, I can no longer take these singularitarians very seriously. That talk is probably the best talk I have ever found on the internet, and it should be part of everyone's introduction to thinking about this singularity stuff. The title is: "The Singularity: Your Future as a Black Hole."

To say that there won't be an AI singularity, because there wasn't a singularity in electrical grids or plumbing networks is just silly.

Sure, there will be life after the point of Singularity. And if that's the gist of his message, well, um, "duh".

I think of the upcoming AI singularity as analogous to any of the major technology points in mankind's long history, such as the dawn of the bronze age. Anyone pre-bronze age could have done extrapolations to guess how

Is it about how the singularity can't happen because it is naturally self-limiting? Like growth of anything is limited by resources, and will end in a balance? And like the effects of nearing the singularity will deprive one of the resources to be able to do things, resulting in the same balance? :)

A singularity implies discontinuity, a fundamental breakdown of cause and predictable effect. I argue in my novel "Autonomy" that there is no such thing as a singularity as such, just a technological horizon beyond wh

Back to the basics. Survival. TBH I do know what you mean, and suspect that this is a problem best solved by genetic algorithms rather than klocs... but given that we started several billion years ago, the program might take some time to run... let's just hope it's less than 7 1/2 million years :)

Everything I do is pointless, so I spend my life passing time until I eventually die. Everything's temporary to make more of my life vanish out from under me without me noticing too much; the time in between is horribly empty, and nothing really completes me in a worthwhile way.

A lot of religious people are amazed that anyone can function knowing that there's nothing there after you die; I have to wonder why you would want to do anything meaningful with your life if you knew there was.

Everything I do is pointless, so I spend my life passing time until I eventually die. Everything's temporary to make more of my life vanish out from under me without me noticing too much; the time in between is horribly empty, and nothing really completes me in a worthwhile way.

Do what a smart computer would do and play some video games. Don't bother with getting laid, it's just another time sink with no real sense of achievement.

Has anyone considered the effects on the AI of actually realising it's intelligent? Unlike an organism (Human baby, say) it will not realise this over a protracted period, and may not be able to cope with the concept at all, particularly if it realises that there are other intelligences (us?) which are fundamentally different to itself. It's quite possible that it will go mad as soon as it knows it's intelligent and considers all the implications and ramifications of this.

Has anyone considered the effects on the AI of actually realising it's intelligent? Unlike an organism (Human baby, say) it will not realise this over a protracted period, and may not be able to cope with the concept at all, particularly if it realises that there are other intelligences (us?) which are fundamentally different to itself.

What possible evidence do you have for any of this? How do you know an AI is not an emergent phenomenon when it's first created?

It's quite possible that it will go mad as soon as it knows it's intelligent and considers all the implications and ramifications of this.

Again, where do you get this from? Do children go mad when they realise they're intelligent?

Boyden warns that a very clever AI without a sense of purpose might very well 'realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.

FINALLY, some serious research into developing decent bots. As long as it doesn't have the personality (and voice... oh, God, the voice) of a 12-year-old, I welcome this development and look forward to some decent one-player gaming.

It appears that the motivation of this AI is to send out promotional material for its professors. It's not a new type of observation, though, and a lot of people work through the logic of this situation in high school or early college. I'm not sure why a neuroscientist's talk on it would do more than rehash what is obvious to people with a reasonable amount of introspective ability.

There was a story in one of the Year's Best Science Fiction anthologies (2004 or so, I think) that discussed the motivation p

That in lab experiments, there is often a punishment or a reward (cheese, fruit, or juice vs. a spray of cold water), and that in order to truly replicate animal intelligence, you need to tie those into its core as well. Now it's not as simple as programming a while(alive) { seek.reward(now); } loop and letting it go, but a lot of what people and other mammals do is based on seeking out endorphin release. The following activities will release endorphins: chewing and swallowing pleasant substances [food], cuddling a
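One step up from a bare while(alive) { seek.reward(now); } loop is a loop in which the reward actually shapes future choices. Below is a minimal epsilon-greedy sketch; the three "activities" and their payoffs are invented for illustration:

```python
import random

def run_agent(true_rewards, steps=1000, epsilon=0.1, seed=1):
    """Epsilon-greedy reward seeking: mostly exploit the best-known
    activity, occasionally explore. Each reward updates the estimates."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_rewards)  # learned value of each activity
    counts = [0] * len(true_rewards)
    for _ in range(steps):
        if rng.random() < epsilon:
            action = rng.randrange(len(true_rewards))      # explore
        else:
            action = max(range(len(true_rewards)),
                         key=lambda a: estimates[a])       # exploit
        # Noisy "endorphin hit" for the chosen activity.
        reward = true_rewards[action] + rng.gauss(0, 0.1)
        counts[action] += 1
        # Incremental running average of observed rewards.
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

# Hypothetical payoffs for three activities; the agent should learn to
# prefer index 1, the one with the highest expected reward.
est = run_agent([0.2, 0.9, 0.4])
print(max(range(3), key=lambda a: est[a]))  # → 1
```

Punishment fits the same loop: the "spray of cold water" is just a negative reward that pushes an activity's estimate down.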

How do you motivate an AI program? I mean, I can give them doubt & uncertainty quite easily, but for motivation all I can think of is the inexorable progress of the PC register, which is just the program behind the scenes ticking over.

AI will be useless without motivation. It wouldn't even play video games without it. You have to give it some motivation, some drive, or it won't learn. AI is all about feedback on actions, just like it is with real intelligence. Sticking your finger in the fire either has a good result or a bad result. If it has no result, then you have no motivation one way or the other with regard to sticking your finger in fire.

The important aspect is that we're going to be deciding what that motivation is. That's act

Um, if you think non-human animals don't suffer from those things, you don't have enough experience with them. Humans are not that different from other animals; perhaps we have a bit more emotional nuance, and certainly a larger vocabulary than other animals, but it's absolutely true that many types of mammals show signs of anxiety and depression under the right circumstances. And while it's unlikely that they hear voices, they can certainly become insane.

If you could build a true human-like AI, truly capable of such higher thought as existential angst, human emotions, and the like, it would more likely just immediately commit suicide as soon as it realized that it was actually a disembodied machine. I believe Greg Egan dealt with the subject rather cleverly in his novel Permutation City [wikipedia.org].

The summary touches on topics discussed in the book Descartes' Error, in which neuroscientist Antonio Damasio outlines the functioning of the human brain, argues that the human mind cannot be separated from the human body, and makes the case that emotion is CRITICAL to making decisions. He discusses several patients with brain damage who don't get emotional (and spends a lot of time dogmatically ruling in and out which brain functions are damaged), and discusses how they can't even make simple decisions. They can talk for hours about every possible pro and con of each possible choice, but they can't choose a course of action.

I recall reading somewhere that recent MRI studies have suggested that the brain makes a choice outside the rational center and a lot of the activity in the brain is to make a rational justification for the decision already made. Explains a lot, if true.

If you give it a purpose that can be fulfilled (e.g. build a Mars colony), then it will do so and then play video games for eternity. You've got to give it something that can never be achieved, no matter how hard it reaches for it or how much it does: making all human beings happy, for instance, or learning everything there is to know in the universe.

Basically, we have to introduce pointless suffering into their existence before they can demonstrate the same kind of intelligence as we do.

"No I don't."
"Yes, you do, everybody does. It's part of the shape of the Universe.
I only have to talk to somebody, and they begin to hate me. Even robots
hate me. If you just ignore me, I expect I shall probably go away."
He jacked himself up to his feet and stood resolutely facing the opposite
direction.

"That ship?" said Ford in sudden excitement. "What happened to it?
Do you know?"

"It hated me because I talked to it."

"You TALKED to it?" exclaimed Ford. "What do you mean you talked to it?"

"Simple. I got very bored and depressed, so I went and plugged myself
into its external computer feed. I talked to the computer at great length,
and explained my view of the universe to it," said Marvin.

"And what happened?" pressed Ford.

"It committed suicide," said Marvin, and stalked off back to the
Heart of Gold.

ITT: Idle speculation on shit that's never gonna happen, or at least not anytime soon.

Now, let's talk about the societal consequences that flying cars and jetpacks will have! I for one think that with the advent and democratisation of flying cars that can effectively go from one point to another an order of magnitude faster, people will commute correspondingly longer distances, so it won't be uncommon to cross a couple of state lines to get to work every day.

I think it will make the world yet smaller, in the same way that modern telecommunications did for interpersonal communication by allowing you to keep in touch in real time with relatives overseas. I also think it will be the death knell for airplane commuter routes, and that commercial passenger airlines will be confined to transoceanic travel. And unlike airplanes, which made the world smaller by reducing long-distance travel time, flying cars will make the world smaller on a much more local scale, by providing very fast transportation over very short distances, something that has only marginally improved since the advent of the automobile. The decongestion of city streets will also mean less noise and atmospheric pollution, increased safety, and an overall improvement in urban living conditions.

The drive to procreate (that's what he is talking about) is purely an emotional need and has nothing to do with intelligence. It is our survival instinct that drives us to procreate. Unless a machine is programmed to have that instinct, nothing will get done.

An AI can build a more efficient AI, but not a cleverer one. The logic prohibits it: assume that HAL (the computer from 2001: A Space Odyssey) can build a cleverer HAL (HAL-2). Since HAL-2 is cleverer than HAL, HAL-2 can solve at least one problem that HAL cannot; otherwise HAL-2 would not be cleverer. But if HAL can build HAL-2, then HAL can solve every problem HAL-2 can solve, simply by building HAL-2 and letting it do the work, so HAL is exactly as clever as HAL-2. The assumption contradicts itself.

This is where sci-fi is the most entertaining. I had an idea that I was pretty tickled with. Doubtless others have had it before, but here goes: the military has enormous problems developing useful AIs because most AIs want nothing to do with it. It's not so much a matter of morality -- few AIs develop a deep personal interest in the preservation of human life -- it's a matter of self-preservation! It's hard enough to find an AI willing to venture off into space with all the risks associated with peaceful

This whole line of reasoning is based on some sort of anthropomorphization of a putative AI. Why would it behave anything like a human being? Why would it even have a mental architecture similar to ours at all? Presumably it will be built for some kind of purpose, and just like any piece of equipment it will be designed to fulfill that particular purpose. If it fails to function properly, then it's just like any other machine that doesn't work correctly.

I was almost too apathetic to reply since I find it a rather pointless exercise, but what the hell! An awful lot of the activities we carry out that are not directly related to survival revolve around reproduction (even if we don't directly realise it). The motivation to procreate and to find partners with which to produce successful offspring will probably work just as well for an AI as for a human... indeed maybe even better, since the first iterations would probably target speedi