April 20, 2004

Killed by Goo!

You might die someday of nanotech-related causes, and the culprit very well could be gray goo -- no, not like in John Marlow's scary Nano novel, or Michael Crichton's silly Prey -- but if you are killed by nanotech, the cause likely will involve the concept of "gray goo". Let me explain.

Here are three different ways gray goo might kill you, in ascending order of probability:

3. Tiny nanobots swarm over and disassemble your body, atom by atom. Chances of this occurring? Probably less than being struck by an asteroid. Theoretically it could happen, but don't worry about getting life insurance to cover it. We've written before that goo-type machines will be very difficult to design and build, and that other malicious kinds of nanotech will be easier to make and more efficient to use.

2. Public worries over gray goo, fanned by ill-advised critics like Britain's Prince Charles and immoderate organizations like the ETC Group, lead to a ban on the development of molecular manufacturing technology in the United States and most of Europe. Predictably, this only allows nations with fewer scruples to develop and make use of the technology -- eventually selling ultra-lethal weapons capability to transnational terrorist organizations, who attack without warning. Instead of making a cumbersome gray goo, they simply send hundreds of millions of miniature super-computer-guided stealth aircraft to devastate every major city in the developed world. And that's how you die. Chances of this occurring? Higher than you might think.

1. Authoritative U.S. scientists and nano policy makers persist in their ridicule of gray goo scenarios and their dismissal of nanobots as "impossible". As a result, no policies dealing with the consequences of molecular manufacturing are debated and no attempt is made to cooperate on an international level with other potential developers of the technology. When, a few years later, it is discovered that a heretofore secret program in a non-aligned nation may be on the verge of a breakthrough, other major powers decide to jumpstart their own programs. Before long, several countries find themselves involved in a new, highly unstable, upwardly spiraling arms race. The next war -- truly the war that does end all wars -- ends your life, and billions of others. Chances of this occurring? Probably high, if present trends continue.

So goo itself won't kill you, but extreme reactions, whether of hysteria or of denial, just may.

0. Spurred on by descriptions of the threat of widespread availability of nanotechnology, genetic engineering, and similar technologies with dangerous potential, the United States joins with several other technologically advanced nations to form the "Bureau of Technological Regulation". This agency scans scientific research and patent applications, and as necessary even engages in covert surveillance, in order to identify dangerous technological trends and halt or limit them by any means necessary.

Soon, a troubling phenomenon is noticed in the high-tech world. Start-ups which looked promising suffer the most peculiar bad luck. Some of the most brilliant minds just vanish. Anyone looking up old tech journals finds peculiar holes. And the technological revolution which looked so promising dies, nipped in the bud.

Twenty years later, mankind is wiped out by an entirely natural disaster we might have been able to survive had we been a bit more advanced.

This will probably happen, but it's not that likely to lead to all-out war. Remember that we got through a nasty arms race with the USSR without full-scale war ever breaking out. MAD will keep the major players from trying wild gambles, fear of MAD will encourage defensive research, and nanomedicine will integrate with the defensive weaponry. Plus nanomanufacturing will provide much easier ways to make a profit than launching a war.

Ideological/religious fanatics will remain a problem but as parasites they'll always be a step behind in development.

I tend to agree with Karl; deterrence can still work against attackers who are reasonably rational and can be identified. And the potential for vast increases in standards of living without taking over other people's resources removes a lot of the incentive for war.

It's the loons you really have to worry about, and they're going to be further behind the curve than governments.

All the things that made the nuclear arms race stable will tend to make the nano arms race unstable. I don't see anything that will tend to make it stable.

The loons will be behind the curve, but that doesn't keep us safe. We don't just have to have better weapons than they do. We have to have better defenses than their best weapon. That's a *lot* harder. Especially when the system is complex enough that the average weapon can be used to attack it from multiple angles.

Chris, your questions:
1. A great way to put nano into the political spotlight is to link it with the hottest topic: terrorism and defense. What kind of intelligence gathering could we achieve with nanotech? POWs and suspects could be labelled and tracked without ever knowing it.
What would a president do if they knew our enemies were developing nanotech to attack us? Are they doing this? Or will they do it in China, which has a record of selling arms to anyone who wants them, and is deregulating its economy (and technology) more and more in the name of progress.

If nanotech were discussed on a political show I would watch. I would love to see Bush even try to conceptualize the impact of an as yet abstract power.

I'm not (too) worried. Defense against the misuse of most technologies usually comes from the same technology. We learned to detect radiation before anyone figured out how to build a nuclear bomb. Aren't they developing DNA chips to detect bioweapons? I'll bet that some day we'll have "Swarm Detectors" as common as smoke alarms.

Chris--if we achieve Brin's transparent society, nanoweapons will be easier to monitor than nukes. Tracking Soviet nukes was a very difficult technical problem that had a LOT of money and talent thrown at it to solve. The solution provided 5-30 minutes of warning, so more resources went into providing a system that could react that quickly. Which systems were under development was hidden.

Nanoweapon designers, OTOH, could have pictures taken of their first napkin scribbles and the word would spread before they got the bugs out of their design, let alone start manufacturing useful quantities of it.

I haven't yet read Brin's book but have heard a lot about it. If I understand it correctly, he envisions something like a "panopticon with a human face," if I may badly mix metaphors here. I am skeptical of this though.

The reason I am skeptical is because I see a lot more of the watchers doing the spying and a lot of laws being passed to prevent the watched from spying back. The Rodney King video is the exception not the rule. Whistle blowers are still few and far between.

I can't just walk into a K-Mart and start taping and imaging everything, because they have every legal right to throw me out if I do. On the other hand, K-Mart has many legal rights to do almost anything it wants with my private data when I shop there.

Until this disparity between the group and the individual is repaired, I am skeptical that Brin's transparency will come to pass.

But perhaps I am missing something here. I think I'll have to get his book to see if my questions are answered.

I think you're looking at the key point Brin tries to make--unless everyone is free to look at everyone, groups (gov'ts, corporations, conspiracies) will be able to violate privacy rules and individuals will have no way to fight back. So the solution is tossing all laws and rights to sue over privacy into the trash so we can be on an equal footing. He doesn't avoid facing that this is a MAJOR change from our current society. Try the first chapter:
http://www.davidbrin.com/privacyarticles.html#ts

Karl, I agree more or less that if we deploy a pervasive surveillance network worldwide in every volume, with good software to back it up, and significant energy is competently directed at the goal of preventing bad nano development, then we may not have to worry too much about unexpected nano-weapons or goo, at least for a while.

Of course, this would be incredibly intrusive. I wouldn't trust any current government with that power--though most governments will have that power soon whether I like it or not.

It may not be necessary to surveil people, if you can control and monitor the machinery. You can't do nanotech without machinery, including computers. If you control or monitor the computers (including things like Xilinx chips and Basic Stamp boards and Lego Mindstorms controllers that can be used to build computers), then you control access to the nanoscale.

So I hope we don't have to depend on watching people to prevent them doing bad things. Note that in Brin's proposal, there'd be places the cameras wouldn't be allowed to go. But if people can create problems-at-a-distance, then you have to monitor everything all the time. No exceptions for nakedness, or the bad guys would simply get naked before scribbling on the napkins. No exceptions for anything. And the preprocessing software would have to highlight the most unusual activities for human assessment, unless the software were sophisticated enough to be sure there are no dangerous machines in the room--or under remote control somehow. So I really hope this solution isn't the one that's implemented.

Grey goo may be a more insidious weapon, but that does not make it effective until the bugs are ironed out. Molecular programming would be subject to a more random and unpredictable environment that already evolves, probably at the molecular level. If it commits acts of war upon biological organisms, then it follows that biological organisms will adapt accordingly, defend with extreme prejudice, and defeat or prevent such infections as an enemy -- or develop a hardened epidermis and tougher augmented fat layers that entrap molecular programming devices and shut them down in a beige goo that is inherently bureaucratic...

Nicholas, biological organisms can only adapt via evolution, which requires multiple generations, or via built-in reactions such as inflammation and encystment. Evolution takes far too long and assumes that some organisms survive to breed. Built-in anti-parasite reactions are unlikely to slow down a diamondoid goo; it'd be like you trying to arm-wrestle a forklift. It's conceivable that an immune system here and there might get lucky and evolve a protein that managed to jam the sorting rotors, but it takes days to clear up even biological infections that aren't trying very hard. A goo probably wouldn't leave you that much time.

Besides, at least one possible goo type doesn't attack organisms; it gets carbon right out of the air, and uses solar energy for power. Eventually (perhaps rather quickly) it blocks the sun.

Goo is a relatively small danger, but only by comparison with other nano dangers. We can't assume that it's survivable in the absence of foresight.

The current problem with nanotechnology in the form of "killer" robots is the anatomy that is needed to allow this to happen: their life expectancy is measured on a far different scale to ours, and as such they would not normally have a very long shelf life once activated. Hence they would first need to be able to "reproduce" themselves before they can even be a worthy topic to worry about.

Second, there is a serious lack of proper methodologies needed to allow for the existence of a singular nanobot, much less an entire swarm of them.

Finally, no area of programming currently exists that can allow for the anatomical operations, storage, and processing within the nanobot itself needed for the scenario mentioned above.

We are at least 50 to 100 years from making plausible models of that caliber. Until then, nanobots will remain a specialized area involving medical and military applications, with civilian applications coming into play long afterward, using current trends in technology as a guide.