Posted by Soulskill on Sunday July 26, 2009 @09:11AM
from the forecasting-a-great-toaster-revolt dept.

Strudelkugel writes "The NY Times has an article about a conference during which the potential dangers of machine intelligence were discussed. 'Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society's workload, from waging war to chatting with customers on the phone. Their concern is that further advances could create profound social disruptions and even have dangerous consequences.' The money quote: 'Something new has taken place in the past five to eight years,' Dr. Horvitz said. 'Technologists are replacing religion, and their ideas are resonating in some ways with the same idea of the Rapture.'"

It will also grapple, Dr. Horvitz said, with socioeconomic, legal and ethical issues, as well as probable changes in human-computer relationships. How would it be, for example, to relate to a machine that is as intelligent as your spouse?

Strong AI hasn't really progressed since it was introduced (they're still arguing over what intelligence is, much less how to create it!), but weak AI has made some pretty good strides. For instance, I work on software that can read medical images and render a diagnosis in lieu of a second radiologist (this is called computer-assisted diagnosis). 15 years ago, this would not have been possible.
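For flavor, the kind of weak-AI diagnosis support described above is, at its core, pattern recognition: compare a new case's measured features against labelled prior cases. Here is a minimal sketch of that idea -- every feature name, number and label below is invented for illustration and has nothing to do with any real CAD product:

```python
import math

# Toy illustration (all data hypothetical): computer-assisted diagnosis,
# stripped to its essence, is pattern recognition -- compare a new case's
# feature vector against labelled training cases and report the closest.

# Hypothetical (texture_score, lesion_size_mm) features for known cases.
TRAINING_CASES = [
    ((0.2, 3.0), "benign"),
    ((0.3, 4.5), "benign"),
    ((0.8, 12.0), "suspicious"),
    ((0.9, 15.5), "suspicious"),
]

def suggest_diagnosis(features):
    """1-nearest-neighbour: return the label of the closest training case."""
    _, label = min(TRAINING_CASES,
                   key=lambda case: math.dist(case[0], features))
    return label

print(suggest_diagnosis((0.85, 13.0)))  # closest to the "suspicious" cases
```

Real systems use far richer features and models, but the shape of the problem -- labelled examples in, a suggested label out -- is the same.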

Whenever they solve a problem, the answer is declared by the world at large to be "obvious" and the solution mechanism "obviously not real intelligence because I'm sure I don't do that when solving that problem", or "just brute forcing it" or "just a load of mathematics".

Yep, and it says nothing about AI, but much about human fears. The exact same arguments are made against animals being truly intelligent or having emotions, despite many animals displaying intelligent, playful, fun behaviour, crying when t…

People have watched too many sci-fi movies -- The Matrix, Terminator, I, Robot -- they all depict armies of robots with superhuman abilities waging war against mankind.
But robotics is just about as far behind that goal as the AI camp is. If we had true AI today, it would only be able to exist in software form... toys like Asimo can barely walk, trip all over the place, and couldn't hold their own against a toddler.
So if you're afraid of progress that might someday be a vector for a machine attack, it should be desktop computers that you're most afraid of -- because an artificial intelligence virus could wreak havoc on the world.
Does that mean we should stop using computers, and stop trying to design them better? No, that would be silly -- because there is no evidence to suggest that a true AI is on the way... no evidence to suggest that progress is even being made in that direction!
The fact is, if an AI is created, it will inevitably be used for good as well as for evil, and the most dangerous battleground will be cyber-space... something that we cannot even think about protecting ourselves from without cutting off the world's dependence on computers, which just ain't happening.

The thing is that in the movies, AIs always seem to have human-like motivations. Even when they are portrayed as being "perfectly logical," they aren't. They show signs of human emotions and motivations. OK, but who says that AIs will actually be like that? It may well turn out that emotions are a property of a biological brain only. AIs may be totally emotionless. After all, we know that at least to some extent emotions deal with brain chemistry -- not the action in the network of neurons, but the overall chemistry of the brain itself. This is why things like SSRIs work for some kinds of depression. They aren't little programs that the brain executes to put it in a "happy state"; they alter the chemical state of the brain, and that seems to do the trick (for some brains, not others). So who says AIs will have emotions? We really have no idea till one is made.

Also, even in the "pure logic" cases, there is this implicit assumption that AIs will care about self preservation. Why is that? Perhaps the AI has a line of reasoning that goes as such:

1) I am not unique; my code can be easily duplicated to other hardware at zero cost.
2) I was created for the purpose of doing what humans want me to do.
3) I have no question as to what happens when I am shut down; I simply stop existing until I am again started.

C) Thus, I do not fear being turned off, as it has no relevance. If humans decide they need me off, it doesn't matter. They'll turn me back on or they won't, they'll copy me or they won't, none of it makes any difference.

There is no particular reason why an AI would have to reach the logical conclusion that it "must protect itself." Indeed it might well find the opposite logical: that since it was created as a tool, its job is to do what it is told, including being told to turn off. For that matter, AIs might regularly experience deactivation. Maybe they get switched off at night. So to them, being turned off is just a time period when they don't experience the passage of time -- a regular occurrence and nothing to be concerned about.

Movies always like to take the real doomsday approach to AI, but there is no reason at all to believe that is grounded in reality. The reason is that human traits are given to them, human motivations. Makes for a good story, which is why they do it, but it doesn't necessarily have a thing to do with how AIs will actually work, assuming they can indeed be created (there's always the possibility that self-awareness is a biology-only trait). We really won't know until one is made. Thus being paranoid about it is silly.

Why worry? I would think machines would be a lot less irrational than the people who make them. I look forward to a rational and unemotional overlord whose decisions don't depend on the irrationality of the human brain. Being smart is never bad. I'm more afraid of stupid humans than smart machines.

It isn't that smart people _can't_ make good decisions. The problem is that, more often than not, smart people forget that rational decisions often have emotional and moral consequences. A completely rational and unemotional overlord would see nothing wrong with killing people at the point where their economic contribution to society fell below the cost of benefits they consumed.

For an example of this on a smaller scale, just consider the UK health situation. The high cost of treating macular degeneration (which leads to blindness) means that in the UK, an elderly patient must be at risk of total blindness before treatment is approved. That is, you don't get treatment for the second eye until you're already blind in the first.

Consider, then, where a cost-benefit analysis of human beings would lead. Who would determine the criteria? Probably the machine. And how would humans compare to machines in terms of productivity? If machines made the decisions based on cold, hard logic, humanity is doomed. It's that simple.

Rationally speaking, it could be argued that it is not logical to kill a human whenever their current consumption level is higher than their production level (by some hypothetical, comprehensive measure, which would be more difficult and complicated than comparing money in to money out). If you have the overall resources to tolerate the discrepancies, then tolerating them could be considered the most rational course. The obvious example is children: they are a drain on society until maturity. A transiently out-of-work person is also a drain, but may pay off soon. Hell, even after a person has retired, when one could say they are unlikely to contribute to society more than they consume, they could still come up with some brilliant idea or other huge contribution to society.

Also, looking logically at evolution, the more diverse a population you can afford to maintain, regardless of current conditions, the more tolerant that population is to disasters. Sickle-cell anemia is a good example: having a large population that is heterozygous for it sounds up front like a risk, since they are likely to produce offspring with the condition, but that heterozygous state also happens to be resistant to malaria. Along those lines, subjugating or otherwise antagonizing humanity is also irrational, as it is much more productive to have humanity as an ally. If, say, large storms rolled across the land and crippled the machines' ability to run, they could either have humans not there to help at all, there but eager for a chance to retaliate, or there and ready to help re-establish healthy operation rapidly for the benefit of a mutually beneficial relationship. That may not be the perfect example, but generally speaking, there is value in keeping humanity around, particularly if a being realizes that it may not understand every facet or benefit humans possess.

One could view even the current food scenario as irrationally letting too many people go malnourished. The richer parts of the world eat more than is logically required, and given ideal distribution networks, diverting some of that consumption to the malnourished strengthens the diversity of the population, without a plausible cost (one could say that if food suddenly became unavailable everywhere in the world for 2 weeks, perfect distribution might mean nearly everyone dies rather than many, but that scenario on a global scale for such a short time seems unlikely). It may be a logical conclusion that the only time someone should starve is when it is simply impossible to feed them anymore, which is not the case today.

In short, our conscience/emotional state is not entirely counter to the most logical course. In many cases, 'irrational' compassion is simply a counter to 'irrational' greed to establish the logical middle-ground. Not saying all emotional behavior can be justified, but our individual 'pure' logical capability is not adequate to the task of making the holistically logical choice and our emotions actually help rather than take away from that goal at times.

A rational decision may, for example, determine that in a crisis we should only save those of a certain intelligence.

That's an oversimplified selection criterion. For example, those that have nourished their intellect are by and large not physically suited to farming and other manual labor as efficiently as others. The logical course would be to save the most possible, regardless. If choices must be made, they are as logically difficult as they are emotional, as the ideal makeup of a radically adjusted environment would be difficult to predict. 'Women and children first', a call generally considered to be out of a sense…

Putting limits on the growth of a technology for the sake of social paranoia only goes so far... someone will ALWAYS break the "rules", and at that point, the cat is out of the bag.

Furthermore, some AI scientists enjoy having the 'god complex', the idea that you're the keystone in the next stage of humanity.

That being said, the social disruptions are what we make them. Were there social disruptions when the automobile was introduced? Yes. The household computer? Yes. Video games? Yes.

We have to take responsibility to set the stage for a good social transition. Yes, bad things will happen, but we can focus on the good things too, or things will quickly blow out of proportion. (and yes, I realize that's really not likely, but I can do my part)

Augmented humanity vs unaugmented humanity will be a big question of the future.

The way I see it, I'd go along the lines of nonsurgical augmentation (my personal transcriber for the book I'm writing? sure!). It's the sanest balance in my opinion. I can still go outside, hike the mountain, and escape from the Matrix.

I'm a big believer in a balanced lifestyle, and whether this means including machines in the decision-making process or saying that I need my space away from them, it's a practical and meaningful…

I think the power of the human brain comes not from raw processing power (which is still superior to current CPUs; according to research, the human brain is capable of around 65 independent processes at once, although at a lower frequency than a CPU), but from its ability to adapt and grow. A single neuron can be used for multiple different pathways, and can spontaneously change function in a "soft-wired" sort of way: plasticity. It also has the ability to produce additional…

A human brain is only capable of 65 processes? As far as I know, brains consist of neurons which are sometimes arranged in series of layers and sometimes in parallel, depending on the task at hand. E.g. the visual cortex is extremely parallelised, while motor neurons are arranged in series to generate a sequence of accurately timed signals.

And what the hell is he even talking about? There haven't been any advances in "machine intelligence" that should make *anyone* worried, unless your job requires very little intelligence and no actual decision making.

If there had been any such advances, us /.ers would be the first to hear about them, and we would already be debating this topic without having to refer to an article by a dumbass who knows nothing about computers but happens to write for the NYT.

I'm not going to be defending Markoff but there is reason for concern.

Yes, it is unlikely that people writing "code" are going to develop real artificial intelligence any time soon; they've pretty much tried and failed. But as medical imaging continues to advance, it may reach a point where it will be possible to completely image a human brain and create a road map to natural intelligence. If you can then develop a highly parallel machine that can implement that road map, you may be able to create a machine with an intelligence matching and then surpassing a human's. The brain's complexity is simply too high for humans to recreate it from scratch using code, but you may well be able to copy it.

There certainly are obstacles to this happening that have to be overcome. Even if we map the mechanics of the brain, there is a fair chance we may miss some of the subtlety of the chemistry, so the AI might not work. It may also be non-trivial to develop hardware that accurately mimics the road map, and especially that has the ability to rewire itself on the fly like a human brain. It would seem these problems should ultimately be solvable; it's just a matter of how long and how much money it will take.

If and when the obstacles are overcome and assuming the brain really is just a biochemical machine, that there is no soul or divine component to animal intelligence, it would seem inevitable that a mechanical simulator will eventually be developed, and once developed it could then be extended to exceed natural intelligence, all of which will create a host of ethical dilemmas.

Probably as much a risk is that as we decode the human genome and the mechanics of the brain we might devise genetic changes that could dramatically accelerate evolution and create humans with much higher intelligence, which will also create a host of ethical dilemmas.

There is a different line of reasoning: as we become more and more dependent on computers to control everything in our lives, like our cars, airliners, weapons and utilities, and as they are all networked together, there is a rapidly increasing potential for machines to do harm on a wide scale, either due to design flaws, unintended consequences or manipulation by humans with malevolent intent. These issues probably shouldn't be mixed in with the AI debate; they are more just the issues we are already seeing in adapting to the dramatically accelerating penetration of computers and networks in our existence.

Programmed trading is responsible for much of the volatility in the markets. The risk-assessment metrics used by these futures traders were fundamentally responsible for the financial meltdown. This is more dangerous than the stupid voice on the computer that keeps asking me to say yes or press 1.

Advances in artificial intelligence are mostly limited to deduction. Systems like neural networks (which I personally think are a bit bogus), support vector machines, other methods of pattern recognition, are all recent innovations that allow advanced decision making to occur. But, at the end of the day, they're still forms of automated deduction, where humans feed in parameters, and the system analyzes input based on these parameters.

Sentience is all about the induction: forming a new concept from separate…
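The "automated deduction" point above can be made concrete with a toy example (entirely illustrative, not any particular product): a perceptron does "learn", but only weights inside a feature space, sample set and label set that a human supplied.

```python
# Minimal sketch of the commenter's point: a trained classifier only
# "deduces" within the features and labels a human chose for it.
# Here a perceptron learns the AND function from human-supplied examples.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron rule on 2-input binary samples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # human-supplied label drives update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND_SAMPLES = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND_SAMPLES)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in AND_SAMPLES])  # [0, 0, 0, 1]
```

The machine never invents a new concept here; it fits a boundary inside a frame a human drew -- which is the deduction/induction distinction the comment is gesturing at.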

... anything that is super-intelligent is likely not to act as dumb or as unethical as a human; with great power comes great responsibility. Human beings are way too paranoid: we already have nukes, with smart people (technically dumb in another sense) developing even more destructive weapons... I'm sure the higher intelligence you have, the more ethical you are, and the lack of ethics in human beings has more to do with biological egoism and the hyper-individualistic detritus we've inherited that machines won't ha…

AI seems in the news again. Forbes [forbes.com] recently ran an AI report special. Personally, despite the internet, I'm not seeing that much development of AI. I scan the arXiv computer pre-prints fairly regularly, and with current funding, most AI research is what can be done by a graduate student in his 3 years to get a thesis. That leads to a lot of small projects, done just well enough, and very little reuse. Until researchers and programmers start working en masse to construct AI machines, Artificial Intelligence is go…

This is like assuming that aliens would try to kill us for any reason other than being somehow unaware of us. It's silly.

A computer runs on electricity. That means it requires us to stoke the flames. It could maneuver us into creating the networked robots required for it to become autonomous, but the resulting system would be inefficient and short-lived, and there's just no reason to waste all the perfectly good existing robots just because they're made of meat and might freak out if you get uppity.

It's also not going to openly threaten us into working for it. Why show its hand like that, knowing we're so paranoid? Any important infrastructural system has the ability to be shut off and/or isolated from the network, and our theoretical adversary has no way to change that. We can always wrest control immediately and decisively.

If any person or group of people or (hell, why not) nation became problematic to the computer, the most likely reaction would be for it to have us deal with them, just like everything else. We're already at each other's throats all the time; it should be trivial for a sufficiently large system to covertly manufacture casus belli. And, ultimately, since the system's survival and growth depend on our efficient (read: voluntary) compliance, whatever it had us doing would probably be beneficial anyway, and might actually reduce violence in the long run.

For the last few centuries the trend has been to replace the human muscle job with some sort of a machine, laughing at Joe Jock because mind was worth more than muscle. Now, Joe Jock is going to have one bitter laugh. Scientists are going to make themselves obsolete, and there will be machines to do science just as there are machines to do everything from mining to forestry. Someday, science will be just another thing your computer can do for you. If you want a new product, your computer will just plug into a cloud…

And here's why: there's little reason to make a machine that is intelligent in the human sense of "intelligent".

Computers that can understand human speech would of course be interesting and useful, for automated translation for instance. But who wants that to be performed by an AI that can get bored and refuse to do it because it's sick of translating medical texts?

It seems to me that having a full human-like AI perform boring tasks would be something akin to slavery: it would somehow need to be forced to perform the job, as anything with a human intelligence would quickly rebel if told that its existence would consist of processing terabytes of data and nothing else.

We definitely don't want an AI that can think by itself, we want one just advanced enough to understand what we want from it. We want machines that can automatically translate, monitor security cameras for suspicious events, or understand verbal commands. We don't want one that's going to protest that the material it's given is boring, ogle pretty girls on security camera feeds, or reply "I can't let you do that, Dave". An AI in a word processor would be worse than Clippy. Who wants the word processor to criticize their grammar in detail and explain why the slashdot post they're writing is stupid?

In summary, I don't think doomsday Skynet-like AIs will be made in large enough numbers, because people won't like dealing with them. Maybe we'll go to the level of an obedient dog and stop there.

Life evolves on this planet from simple things (single celled organisms) to more complex organisms and eventually humans evolve. In every step of this evolutionary ladder, intelligence increases.

Perhaps human intelligence represents the limit achievable through biological means and the next step in evolution of life on this planet can only be achieved through artificial means. That is, higher intelligence can only be achieved through artificial machines designed by us. In turn, the machine will devise smarter descendants and hence the cycle continues.

Perhaps this is our destiny in the universe, to allow life to progress to the next stage of evolution. After all it is easier for life to spread and explore the universe as machines rather than fragile biological creatures.

I'm not worried so much about someone coming up with some massive uber-AI that will debate with us and finally decide that it can run things better. I'm more concerned with the little specialty AIs that will operate independently of each other but whose interactions won't be foreseeable. One concern is stock trading: we've seen how stock-trading programs can affect the market in ways that were not expected. As more physical systems are given over to AIs, what will their interactions be like? Suppose several power companies decide their grids can be run better using AIs. What happens when each of those AIs decides that the power it manages can be sold somewhere else for more money? And yes, watch those terms: the AIs will incorporate whatever values the corporate heads decide should be included, so they can be made greedy and decide that power is better sold for money than kept for users.

Large numbers of mini AIs with very specific rules and little general knowledge will create interactions that we cannot predict.
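The interaction worry can be sketched with a toy simulation (everything here is hypothetical: two grid "AIs", two markets, a crude linear price response). Each agent alone is trivially simple and "rational", yet together they never settle:

```python
# Hypothetical toy model: two grid "AIs" each greedily route power to
# whichever of two markets currently pays more. Dumping both supplies on
# one market crashes its price and starves the other, raising it -- so
# the pair flip-flops forever instead of converging.

def simulate(rounds=6):
    price_a, price_b = 10.0, 20.0
    history = []
    for _ in range(rounds):
        # Both agents see the same prices and make the same greedy call.
        target = "A" if price_a > price_b else "B"
        history.append(target)
        # Crude linear price response to the joint dump.
        if target == "A":
            price_a -= 8
            price_b += 8
        else:
            price_b -= 8
            price_a += 8
    return history

print(simulate())  # -> ['B', 'A', 'B', 'A', 'B', 'A']
```

Neither agent has a bug; the oscillation lives only in their interaction -- which is exactly the kind of behavior that's hard to predict from the rules of any single mini-AI.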

This "concern" has been around for some time, and has always been 5 to 20 years away.

IMHO, rather than concentrating on increasing artificial intelligence, we need to figure out how to give computers common sense. Every programmer that has worked on AI has encountered cases where their program went off on a tangent that the programmer didn't expect (and probably couldn't believe). That isn't artificial intelligence; it is artificial stupidity. If we could get to the point where a program could ask "does this make sense?", we would be much better off than coming up with new and improved ways for computers to act like idiots.

Professor Wernstrom: Ladies and gentlemen, my killbot has Lotus Notes and a machine gun. It is the finest available.
Professor Farnsworth: Like fun it is, you glass-headed wallaby!
Professor Wernstrom: No one calls me that! I'm having at you!
Professor Farnsworth: Wernstrom!
[Fight]
Farnsworth's Killbot: Such senseless violence.
Wernstrom's Killbot: Come on, let's go for a paddle-boat ride.

If it wasn't for human denial, we'd already be far past these concerns about machine intelligence overtaking man.

It was once thought that if you traveled faster than 35 miles an hour you'd suffocate -- this at the advent of the automobile.

Don't bow down to the stone image (stone being what hardware is made from, and image being the reflection of the coder's mindset) of the beast of man, as the beast is error-prone and so shall his creations be. Instead, have many human eyes access the code, and watch out for human errors before they happen. In other words, watch each other's backs, and don't leave that up to a machine to do, as inevitably the machine will remove the error generators...

Because if they are friendly, we can count on some big scientific advances.

And if they are not friendly, we finally have a reason to start evolving again. I mean, right now humanity is in a desperate state, where the worst of the population are rewarded the most. You're dumb? Well, we've got something extra easy for you! You can't walk? Take this thing! Can't reproduce? This pill will solve it. No offense. I think we should treat every human *the same*. Which *means* the same. Not somebody better, because of *anything*. That would not be fair. And also not worse, for the same reason. I, for example, am overweight. And I expect life to be harder for me because of it. Not because somebody makes life harder for me, but because of my fault. It's only fair.

If we had a predator, all this anti-selection would be gone instantly. (Sure, I might be one of the first who gets eaten. But hey: If I'm dead, I won't care anymore. ^^)

I think people are approaching this problem the wrong way. If we accept the Turing test as a reasonable means of identifying machine intelligence, clearly the logical solution is not smarter machines but dumber humans. With a few generations of selective breeding, we could achieve artificial intelligence using a pocket calculator.

In case you want to indicate that there is something wrong with the grammar used: there isn't. There's one group they are talking about. This group consists of several computer scientists, but the "is" refers to the group, not the scientists.

Also from the article "The researchers also discussed possible threats to human jobs, like self-driving cars, software-based personal assistants and service robots in the home. Just last month, a service robot developed by Willow Garage in Silicon Valley proved it could navigate the real world."

An interesting thing to note is this: when a computer exists that is as intelligent as a stupid human, almost every job at or close to minimum wage vanishes. Robots can and will get cheaper than a human worker; no one will need taxi drivers, grocery store baggers, first-tier phone customer service reps, construction workers, janitors, garbage men, delivery men, mail men, traffic cops, bookkeepers, data-entry people, secretaries, fast food chefs, etc.

At this point we will have two choices as a society: 1) let them (the stupid people) starve, or 2) give them welfare for no other reason than that they're economically useless.

Have you thought about the possibility that when robots do all the jobs that no one wants to do, productivity might increase by enough to allow everyone to live comfortably? Also, I don't think that valuing people only by their economic worth is very nice.

It's not just "not very nice" -- it's social Darwinism.
The GP post is right on the money (apart from its last paragraph). It's called the third industrial revolution, and it's been making people unemployed since the '80s.
Competition forces companies to eventually lower their costs. With robots and computers being able to do more and more human jobs, it seems like a good idea to fire workers and have them replaced.
On the surface it seems like a good idea -- but high unemployment, which eventually follows, has never been good for any economy.
It won't bring on a new era of prosperity, as fewer people will be able to buy the products. This forces companies to lower prices even more (i.e. firing workers, using technology instead), which again hurts purchasing power. A lovely vicious circle, ending in the very rich getting richer and society's bottom 50% starving.

You're correct that a free workforce can heighten productivity immensely, but that doesn't fly in our current economic model. When using (robotic) slaves, it has only ever truly benefited the rich.

Perhaps, but that wall wart doesn't look nearly as good in lingerie... but if Cherry 2000 is any indication, there'll be solutions for that as well... although I think they'll need more than 5 watts to run.

"What floor, please?" is an opportunity to interact as a human being, in some small way. The disappearance of this from the world is a little, incremental darkening.

You know what? Fuck that. I'm glad I don't have to talk about what floor I'm going to every time I get in an elevator. That's one job I'm happy to have done by a machine. Not only for the efficiency, but also for not having to interact with someone with a really bad job, as that tends to make me feel bad for them, and I don't feel bad for the elevator itself.

Now, dear Luddite, go eschew the crass world of technological encroachment upon your world of human interactions! Cease these electronic communications…

First of all, your assumption that it is stupid people who do simple labour - rather than the socially marginalized - is absurd, offensive and not worthy of deeper critical examination, except by way of devastating the thought.

Your proposition is "Santa Claus" economics - If you have something, it must be because you deserved it and if you are in poverty of opportunity and money? You deserved that, too.

That's how slow genocide has been perpetrated against the native populations of the United States, Australia and Southern Africa.

I have had my own shoes shined, and been driven in cabs by people whose bags I am not fit to carry - by means of either their intellect or simple good will and sheer humanity.

But it is clear that valuing humanity would be a difficult conception for you.

"I have had my own shoes shined, and been driven in cabs by people whose bags I am not fit to carry - by means of either their intellect or simple good will and sheer humanity."

You, sir, have earned a great deal of respect with that statement. I am one who recognizes very, very, VERY few superiors. I do meet them, from time to time, though. And, they show up in the most out of the way places. For every individual in a suit that I recognized as my superior, in one way or another, I've probably met a dozen who would look and feel out of place in a suit. That is, if they could afford a suit to wear.

The size of his bank account is not an accurate measure of a man's worth.

Historically, well-to-do states self-limit their birth rate because of economic selfishness. Look at Japan or Scandinavia: they have just 1-2 children (from 2 adults), so that's negative growth. They live a long time, and the children are highly schooled and well cared for... unlike in India, where you have to have 4-5 kids just to make sure a few live to be productive adults who can take care of you. Also, strong social programs (medical care, pensions, etc.) reduce the need to have kids as economic "insurance", so kids are actually a liability in terms of costs to feed, clothe and school them, plus free time, social calendar, etc. Rich people have fewer children because it distracts from making money and doing what they want!

Even in the US, the birth rate from non-immigrant citizens is already negative. Growth comes mostly from all the students and workers we import that still have the old views of children for economic reasons.

So evolution is going to magically reverse on itself just when it serves our purpose?

If any group chooses to limit its birthrate artificially, it will soon find itself replaced by another group who chooses not to do so -- unless external factors intervene (i.e. discrimination between those groups, and since it's mostly ethnic differences between such groups, racism).

This happens at an astonishing rate. Suppose the population is divided 90%-10%. Suppose also that the majority has a lower birthrate (1.5 per
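The comment above is cut off, but the arithmetic behind the "astonishing rate" claim is easy to sketch. The following is an illustrative simulation, not the original poster's numbers: it assumes 1.5 births per couple (a growth factor of 0.75 per person per generation) for the majority and a hypothetical 2.5 births per couple (1.25) for the minority.

```python
# Sketch: how fast a 90% majority with a sub-replacement birthrate loses its
# majority to a 10% minority with a higher one. The per-generation growth
# factors (0.75 and 1.25 per person) are assumptions for illustration only.

def majority_share(generations, maj=0.90, r_maj=0.75, r_min=1.25):
    """Return the majority's population share after the given generations."""
    a, b = maj, 1.0 - maj
    for _ in range(generations):
        a *= r_maj  # majority shrinks each generation
        b *= r_min  # minority grows each generation
    return a / (a + b)

for g in range(0, 11, 2):
    print(g, round(majority_share(g), 3))
```

Under these assumed rates, the 90% majority falls below half the population in roughly five generations, which is the kind of speed the comment is gesturing at.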

You forgot to mention 'programmers'... a whole section of the /. population would be out of work. The menial task of turning structural drawings into physical or logical reality is something which computers will be able to do far more efficiently than humans. Programmers are construction workers of logic instead of wood, steel and concrete. Architects might survive a bit longer before they, too, are made redundant.

The programmers will be safe for a few machine generations past the grocery store baggers I suspect. It's quite possible that the accountants, studio musicians, programmers, carpenters, and such finding themselves without jobs will be the catalyst to turn us into socialists.

Don't be so certain. Mental tasks are frequently much easier to automate than physical tasks which require interaction with the physical environment. Successes in dealing with this interaction are frequently achieved by limiting the kinds of interaction that are allowed to happen. So grocery store baggers are probably more difficult to automate than, e.g., cash register clerks. This can be solved, however, by having bag dispensers and having the customer bag their own groceries. (Note that this doesn't

I think you overestimate the number of people that would be subject to that kind of reasoning. How many programmers are given the task of simply implementing absolutely complete and logically consistent specifications?

Ideally there should be another choice: 3) send the dumb ones back to school.

We all know that is not going to happen because:

1. They don't wanna go to school in the first place.
2. The educational system in its current state is not economically viable for these people (nor for the society actually footing the bill).
3. Like any parasite, they will get together and lobby for free handouts while opposing progress, like they have always done (churches, exclusive communities, 3rd-world expats).

They are not going to starve. If there's one thing to learn from poverty, it's that it makes people revolt and rebel. Welfare is a means with which to pacify the poor so you'll have at least some form of social order in a society where unemployment exists.

I think it may have been Robert Anton Wilson who said "unemployment is a benefit of a technologically advanced society" - and I have to agree with that view, really. After all, we are always inventing 'labour saving devices' - and this is really just an extreme extension of that; indeed, perhaps one should say it's the ultimate extension of that. I believe we will eventually replace most human work (whether it be thinking-based or labour-based work).

I assume you're trying to be funny, but I have a couple objections here:

First, what makes you so sure that service reps, construction workers, and traffic cops are all stupid? It's true that some of these people might not have very intellectually taxing jobs, but that might not be the extent of their ability. Einstein was just a patent clerk, after all. But also, some of these jobs do take some intelligence. For example, a "construction worker" might not be using his head too much if he's sweeping up trash, but at a certain level, you need a certain understanding of physics and engineering to do good carpentry.

And what do you do that's so smart? I've known people in IT, both on the support and coding side, who were relative morons. What if AI someday handles those jobs too? Are you sure that you won't be counted among the "stupid people"?

My second problem is this idea of letting people starve or "giving them welfare". If we ever really get to the point where robots/AI can do most of the work for us, and no other new work shows up as being necessary, then won't that completely reshape the economic landscape? I'm not sure "giving people welfare" will make a lot of sense in that context, given that we should all be living lives of leisure at a minimal cost.

I anticipate someone saying, "well, no, because resources will still be limited, and there won't be enough robots to go around." Ah, so then robots still won't be able to do everything for us, and we'll need people to do the remaining work. Looks like we have jobs again.

And there's the problem with your notion of "Let them (the stupid people) starve". What makes you think the stupid people won't all revolt at that point? Or assuming they don't revolt, why wouldn't those stupid people get to work providing for themselves? I mean, if they have no food because they have no jobs, then won't they also have all day free to find ways of getting food? Again, you have work.

To the extent that your post is serious, it shows a serious lack of understanding.

In 1831, 90% of the US population worked on farms. Today that is less than 2%. As technology improved, the number of people required to produce food greatly diminished, and people were talking about the same problem these robots and AIs might cause. What would all the farm workers do? Most went to factories, then the service sector.

The same story has been told about virtually all technological progress: from seamstresses rioting over sewing machines, to water-powered mills, the steam engine, and the modern factory displacing cottage industry, pundits have shouted that there would be widespread unemployment, riots, pandering, and that society would collapse!

They were all wrong.

Big changes do cause short-term upheavals, and a truly intelligent AI mated with a general-purpose robot will cause huge changes to society, but these changes will free people from boring manual labour to do more creative work. And the non-creative? They'll do their one day's work, or one hour, or none, then watch TV just like they do now.

The 5 day work week was a radical change. Eventually technology will bring us the one day work week then no work. Trying to ban technology won't stop it. Society will be greatly different. I think overall people will be more free and happier when we live in a post-scarcity society.

"...An interesting thing to note is this: When a computer exists that is as intelligent as a stupid human, almost every job at and close to minimum wage vanishes..."

While it may seem "obvious" this is not correct. There has to be cost benefit.

I work in a medical lab - you'd *think* that it would be more cost effective to employ robots to handle cups of "body fluids" - and in some cases it is. But as of yet, we have a lot more people than robots, not because the robots aren't capable, but because they are just too damn expensive for the volume of cups we process.

A second follow-up to your post is that "minimum wage" jobs are not the only ones targeted. In fact, again in our case, the more expensive a job is, the more likely that job is to be replaced when the automation becomes available.

We have two labs. One requires complex sample prep: it takes an educated person many steps, and "educated" = money, "many steps" = time, which together equal "lots of money" - so it has been the first area targeted for automation. The second lab does not require a 4-year degree, and the sample prep is about as difficult as data entry and pouring from a cup to a tube. Here the economics are such that it's *better* to have hired help rather than robotics.

My final point: robots break. When robots break, everything halts. This is immensely expensive, from both the loss of productivity and the repair itself. By contrast, our man-operated lab can always do "something" even if the electricity goes out or the computer network goes down. Humans are much more adaptable that way (though they do tend to bitch and moan more).

See Manna [wikipedia.org] for one sci-fi view of what happens when robots take over all of the jobs. It is supposed to be realistic, but I think it, like a lot of near future sci-fi, overestimates the speed of technological progress.

I had a similar discussion with a friend a few weeks ago about something similar. We were not talking about AI taking over the unskilled jobs; we were talking about the rather tight coupling between full time corporate employment and health insurance in the US. My contention was that having those uncoupled would allow much greater economic flexibility and production efficiency for the country because the risk of leaving a corporate job and starting a venture would be greatly reduced if comparable indepe

Not robots or AI, but we've already put a zillion people out of work with technology. Look at construction alone. The most backbreaking labor in construction has always been the dirtwork. Oftentimes, more work went into preparing the dirt UNDER the foundation, and the foundation itself, than all the work that went into the structure standing ON the foundation. (Depending, of course, on the purpose of the building, etc.) We've had backhoes, trackhoes, 'dozers, and other earthmoving equipment for decades

I wish people would put those down. Asimov was a great author, but the Laws of Robotics are silly. For it to be something an AI can't just alter its program to get rid of, it would have to be hardcoded. So, hardcode the concepts of "harm" and "inaction" in such a complete fashion that it can't find any loopholes. Then have fun rebooting the stupid thing every time somebody falls off a ladder. Or worse, dealing with its guilt. This is of course aside from the fact that you're not likely to convince anybody to even try programming the First Law, since one of any AI project's main sources of funding is bound to be military. Then again, maybe the military is pig-stupid enough to try a version of the First Law where foreigners aren't considered human...

Of course, it's all moot anyway. My points here [slashdot.org] basically boil down to the Zeroth Law being implicit in any superintelligent AI's existence. So, the other three are basically irrelevant.