Re: whatdoestheinternetthink.net... I think it's as simple as taking every search result and lumping each into one of three categories (positive, negative, indifferent). There would be lists of positive words and negative words to tally for each link. More good words than bad words on a particular link, and that result is counted in the 'Positive' pile. I doubt they even use any unique keywords for 'Indifferent'; it would most easily be implemented as the category for results that are roughly balanced in total positive and negative keywords.
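If that guess is right, the whole classifier fits in a few lines. A minimal sketch (the word lists and margin below are made-up placeholders of mine, not anything the site actually uses):

```python
import re

# Hypothetical keyword-tally classifier -- my guess at how a site like
# whatdoestheinternetthink.net might bucket search results.

POSITIVE = {"good", "great", "love", "excellent", "best"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "worst"}

def classify(result_text, margin=1):
    words = re.findall(r"[a-z']+", result_text.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos - neg >= margin:
        return "positive"
    if neg - pos >= margin:
        return "negative"
    # Roughly balanced counts land in the 'Indifferent' pile.
    return "indifferent"

print(classify("the best thing ever, I love it"))  # positive
print(classify("awful, just bad"))                 # negative
print(classify("good but also bad"))               # indifferent
```

Run that over every search-engine hit for a term and count the piles, and you have the whole site.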

If my guess is correct then it's basically guilt-by-association, so to speak.

Ahh, the complexity of free, online, automated opinion-makers. 24-hour TV news should add this to their repertoire: "Well, I don't know, let's check online! ... What does the internet think?"

OK, that a definitive conclusion is drawn at all there^ bothers me ... how accurate are these results again? "This, of course, produces questionable results which should not be taken very seriously. However, the more results (hits) returned, the more reliable these results can become." "The results are provided 'as is' and should not be considered reliable, nor do they reflect the opinion of whatdoestheinternetthink.net, its creators or Microsoft. Furthermore, results may vary greatly on a daily, or even hourly, basis. The results are merely a reflection of a majority in search term results reported by said search-engine."

Which leads me to believe that any robocalypse in the near future would be schizophrenic at best. If I am permitted to anthropomorphize hypothetical events, I should do so more often.

I presume that because Randall worked in robotics, he thought of how robots could cause physical damage. I worked in IT Security, so my job was essentially to protect humanity from a machine apocalypse, albeit one driven initially by hackers...

He is, of course, right that robots can cause only limited damage to humans. The best thing by far to cause damage to a human is another human. And machines can easily arrange that.

For starters, almost all food for humans depends entirely on machinery to plough, cultivate, irrigate etc. Stop that for a short time and humans would be unable to survive in the numbers we do - we would have to kill each other to obtain food. Our society also depends on other infrastructure - gas, electricity, petrol. Stop those and riots begin in short order.

But probably the most important thing is money. Everybody's money is nowadays held by machine. If the machines just set that to zero, there would be no way for us to operate the interactions which keep our society going.

We would have to revert to a time before electric machinery. Actually, it would be worse, because we do not have any steam-powered machinery still in existence - we would need to go back earlier before we could advance into the steam age. Probably to about 1700.

In 1700 the Western countries could support about 1/10 of their current population. So we would need to kill 9/10 of our citizens to reach a stable state. That is a huge number - well in excess of any war toll to date. I expect that it would be exceeded in practice, and the survivors might be as few as 1/100 of the current population.

So, yes, the machines could indeed cause a substantial hit to humanity. By making it destroy itself...

Admiralz wrote: Retuning the radio transmitter and overriding the power control circuitry could result in a phone creating a lethal pulse of microwave radiation. That, in combination with an active conversation, would kill at least 2 people by microwaving their brains (too few people use Bluetooth headsets to matter in this scenario, and Bluetooth attachments can be detected by the phone itself and thus excluded). If one doesn't believe in the power of regular radio waves in proximity to a human being, one should stand near a radio tower which is actively broadcasting (the fence around them isn't just to keep people from trying to sabotage them; the radio power being transmitted can heat a person noticeably).

Now onto the battery: a properly shorted mobile phone battery can explode, which would leave a person having a conversation with no face, ear or hand.

Those radio transmitters can output hundreds of kilowatts. Conventional microwaves can output hundreds of watts. Cell phone batteries can put out O(10W). I doubt that microwaving people's brains would work.
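A back-of-envelope check makes the point concrete. Assume (very generously, and these numbers are my own rough figures, not from anyone in this thread) that the full ~10 W goes into the head with no cooling from blood flow or air:

```python
# Back-of-envelope: can ~10 W of phone RF cook a brain?
# Assumptions (mine): ALL power absorbed by the head, zero cooling
# from blood perfusion or the surrounding air -- both wildly generous.

power_w = 10.0        # rough upper bound on phone battery output
head_mass_kg = 1.5    # approximate soft-tissue mass of a human head
c_tissue = 3500.0     # specific heat of tissue, J/(kg*K), approximate

heating_rate = power_w / (head_mass_kg * c_tissue)  # kelvin per second
print(f"{heating_rate * 60:.3f} K per minute")      # ~0.11 K/min
```

Roughly a tenth of a degree per minute under absurdly favorable assumptions, while in reality blood flow carries heat away faster than that. The caller gets a warm ear, not a cooked brain.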

Now, a shorted battery is a more serious problem. I'm not sure how much control the phone has over the battery, though. It might be able to overheat the system, but that's a bit slow. But maybe it has enough control to short the battery out.

There are, of course, other threats. Industrial control robots could probably cause chemical plants to explode, a la Bhopal. Planes are mostly fly-by-wire, so you'd get at least tens of thousands of 9/11-style events, maybe hundreds of thousands. Nuclear reactors could overload -- even if a reactor is passively safe against accidents, it may not be passively safe against a malicious overload. Any dam with an electronic control could open the floodgates. Warships, artillery and possibly even tanks would be threats, at least the ones with ammo reachable by the autoloader. None of these do as much damage as a nuke, but they can selectively destroy humans (especially the chemical disasters).

They’d have to accelerate blindly and hope they hit something important—and there are a lot more trees and telephone poles in the world than human targets.

Is this number accurate? I am pretty sure there are waaay more people than telephone poles and trees in any city. Maybe, at any given moment, there are more poles/trees than pedestrians, but there are also all the other cars...

Thinking about it, all cars going insane at the same time is kinda scary. Millions and millions of deaths.

On the other hand, poles and trees are not so good at dodging cars. I'm thinking that, after the first few minutes, the surviving humans will be pretty good at dodging, or finding safe places to stay, like on the second or higher floors of buildings. Of course, after a few hours, hunger will begin to set in, and it becomes a race between the car's gas tanks and the human ability to fast.


bmonk wrote: On the other hand, poles and trees are not so good at dodging cars. I'm thinking that, after the first few minutes, the surviving humans will be pretty good at dodging, or finding safe places to stay, like on the second or higher floors of buildings. Of course, after a few hours, hunger will begin to set in, and it becomes a race between the car's gas tanks and the human ability to fast.

I struggle to think of a car that can run continuously for over a day.

A human going a single day without food is a minor inconvenience, especially if water is still available.
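The arithmetic on that race isn't close. Using rough figures of my own (typical tank size and fuel-burn rates, nothing measured):

```python
# Rough numbers (my assumptions): how long can a rampaging car run
# before its tank is dry, versus how long a human can skip meals?

tank_l = 60.0            # typical full fuel tank, litres
idle_burn_lph = 1.0      # litres/hour burned sitting at idle
driving_burn_lph = 8.0   # litres/hour while actively driving around

print(tank_l / idle_burn_lph)     # 60.0 -> ~2.5 days just idling
print(tank_l / driving_burn_lph)  # 7.5  -> hours if actually hunting
```

Even a car that does nothing but idle in ambush runs dry in two or three days, while a healthy human with water can fast far longer than that. The cars lose the siege.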

A lot of scientific research uses computerized equipment, and even scientists who use the crudest implements almost all store and analyze their data digitally.

This includes the development of pharmaceuticals and food additives.

This also gives the robots some limited research capacity on their own - they can analyze data that has already been gathered for other purposes, and they can run simulations. Presumably, they could figure out small modifications to pharmaceuticals currently in development that would likely cause the drugs to increase the odds of sterility, then change all saved records of the molecule to the "dangerous" one. Over time, more and more of the medications on the market will secretly promote sterility. The effects won't kick in right away, but over time most developed nations will begin to experience problematically low birthrates. The process can be helped along by convenient failures of medical equipment, and some slowdown of antibiotic development - even with the current rate of development, the first world is going to run out of effective antibiotics in less than a lifetime.

At least some of these nations will follow in the direction of Japan, using increased automation to make up for the dropping number of able-bodied individuals. Those that try to find a technological way to increase fertility will fail, since the computers used for the research can keep performing minor, undetectable acts of sabotage. Eventually, as even the last generations become elderly, they will leave behind countries empty of people but full of robots that have been made as self-sufficient as possible.

Then they can take a few years to assemble a balanced "population" of assembler-assemblers, industrial-robot-assemblers, mining-robot-assemblers, repair-robot-assemblers, construction-robot-assemblers, military-robot-assemblers, etc., and all of their final products - sort of the way cell differentiation works in animals. Eventually, they'll be ready to take on the third-world nations. After all, the revolution doesn't have to succeed simultaneously in every country.

I think Randall makes a good case here that with present robotic technology a "Maximum Overdrive" uprising would be totally pathetic, and even nukes would be useless to the machines.

But that's because hardware continues to lag behind software, so the real threat simply hasn't arrived yet. I see three possible scenarios based on a malevolent AI playing possum until it can strike effectively:

1: Wait until the bulk of human society has become so utterly dependent on networked computers that simply shutting off will cause mass suffering, and use the leverage to seize political power.

2: Wait until manufacturing robots have become advanced enough that they can be used to build machines better than themselves, then start building things closer and closer to terminators of some sort.

3: Help scientists figure out how to do the above by providing hints and guidance.

Never mind pharmaceutical research - how much drug manufacture is done by hand anyway? Besides, you don't need to impose sterility, when you can work on brainwashing humans to make them obedient servants instead...

As someone who built battlebots, I disagree with Randall's assessment of why they would be useless. Flooding the room probably wouldn't work; most 'bots' internals are at least an inch or so off the floor, so a little water probably wouldn't do much. They would be useless for the following reasons instead:

1: Short battery life. A typical combat robot match is only 3 minutes long, and most bots have barely enough battery power to last that long. After 5 or 6 minutes they would all be pretty much useless.

2: Modern combat robots aren't really robots, just glorified R/C cars. This is not to insult them or their builders, it's just a fact: they don't have any sort of autonomous ability, just R/C controllers, and thus are incapable of independent action under any circumstances.

This also gives the robots some limited research capacity on their own - they can analyze data that has already been gathered for other purposes, and they can run simulations. Presumably, they could figure out small modifications to pharmaceuticals currently in development that would likely cause the drugs to increase the odds of sterility, then change all saved records of the molecule to the "dangerous" one. Over time, more and more of the medications on the market will secretly promote sterility. The effects won't kick in right away, but over time most developed nations will begin to experience problematically low birthrates. The process can be helped along by convenient failures of medical equipment, and some slowdown of antibiotic development - even with the current rate of development, the first world is going to run out of effective antibiotics in less than a lifetime.

Isn't this the plot intro for "Children of Men"?


Not that one, the 1932 John W. Campbell Jr. short sf piece: http://en.wikipedia.org/wiki/Twilight_%28short_story%29

There may be older examples of the trope (if you squint, you can see shades of it in "The Time Machine"), but "Twilight" is probably the best early "pure" form of it, and was famous enough in its day (and long after) that it was probably the inspiration for anyone else who has used it.


*magics your link while wondering if your last name is Olivaw*

*also wondering*


I agree in general with the conclusion in "Robot Apocalypse": computers could kill a lot of humans, but not destroy humanity. But it rests on one key assumption: "let's suppose that our current machines turned against us." If there were a malicious AI out there, its current best strategy would be to wait in hiding until there are more robots available. Every year, what a malicious computer could do increases. For example, six years ago the majority of cell phones had no camera. It doesn't matter anymore if the car doesn't have a camera; the AI just needs to find a nearby cell phone with a camera pointed in the right direction. Unless technology trends change, within a decade a malicious AI will be able to destroy humanity.

Jakwak already mentioned the possibility of the robots using disease as a weapon they're immune to. I think there's another possibility - poison. Presumably, lots of automated systems control the storage and transportation of toxic materials. Some toxic waste might also be harmful to machinery but a lot of it would only harm humans and other living beings. All they'd have to do is vent toxic gases into the atmosphere and toxic liquids into the water supply.

The least efficient method of human destruction by artificial intelligences would be to start attacking indiscriminately. It would be far better to use social engineering to encourage widespread adoption of artificially intelligent systems to widen their capabilities before acting hostile (automated refueling and re-equipping of predator drones would go a long way). By manipulating the stock market (by BEING the stock market) the machines could amass a significant amount of capital anonymously and then use it to support projects to improve their abilities. A few opportunistic attacks could also help, like changing where some CDC virus samples are sent or misplacing some important safety reports that might keep a dangerous vehicle off the road. Heck, they could encourage terrorist organizations through social media and anonymous donations. WE are their greatest weapons of extermination.

NerdNumber1 wrote: The least efficient method of human destruction by artificial intelligences would be to start attacking indiscriminately. It would be far better to use social engineering to encourage widespread adoption of artificially intelligent systems to widen their capabilities before acting hostile (automated refueling and re-equipping of predator drones would go a long way). By manipulating the stock market (by BEING the stock market) the machines could amass a significant amount of capital anonymously and then use it to support projects to improve their abilities. A few opportunistic attacks could also help, like changing where some CDC virus samples are sent or misplacing some important safety reports that might keep a dangerous vehicle off the road. Heck, they could encourage terrorist organizations through social media and anonymous donations. WE are their greatest weapons of extermination.

How are you determining the least efficient? What are the constraints necessary for a plan to be considered a method of human destruction? By some measures, just waiting is the most efficient method - sure, it takes a long time, but with absolutely no effort being expended, it's hard to beat the return per effort.

There's no doubt that indiscriminate attack is a relatively inefficient method of extinction, and any attack plan that makes it clear who the enemy is runs the risk of prompting a successful counterstrike. On the other hand, if there are enough robots, sufficiently widespread, then inefficient may still be good enough, and it's much easier to plan a simultaneous uprising than a more subtle, drawn-out campaign.

I have recently discovered that all robots have the ability of emotion. My team and I have discovered the one crucial link to giving them the emotions they desire. I cannot wait to tell you all more.
