Globally existential threats due to ‘overpopulation momentum’ together with a top-heavy age structure leave, by now, no alternative to radical technological adaptation for anything that wants to survive ‘long term’. It is strictly too late to ‘go green’ except via a novel take on what constitutes ‘green’, including synthetic biology. Hyped for a long time, nanoscience is still largely in its pioneering phase. However, it is maturing as we speak, and soon, as it becomes true nanotechnology, it will leave the hype far behind.

Is nanotech harmless? Mother Nature has adapted via nanotechnology all along, namely with nanometer-sized replicators and catalysts. The latter we call “enzymes”, but catalysts they are nonetheless. Humans are nanotech robots that nature made, complete with what may equally count as “artificial intelligence”. Thus, one is tempted to brush fears about dangers away, much as naturally occurring cosmic-ray interactions in the atmosphere and natural genetic exchange among plants refute the alleged existential dangers of the Large Hadron Collider (LHC) and of Genetically Modified Organisms (GMO), respectively. However, nano being basically bio, this equivalence implies that nanotech is also at least as dangerous as biotech. But we would miss the real dangers if we stopped here and merely recited the ever-longer list of bad news about, for example, nano-junk accumulating in surprising locations and being highly biologically active in unexpected ways.

Nanotech is intrinsically much more dangerous than biotech. For example, although organisms can easily be evolved to synthesize metallic or at any rate highly complex compound-material nanostructures (just look at your teeth), nature has never touched metallic crystals as bio-catalysts, and for good reason: they are so reactive that nature could never before handle them.

Nature’s co-evolution of all systems with their environment will be sped up, which is natural and no danger to nature itself in any meaningful sense; however, the pace will be greater than human biology (even if techno-enhanced) and society can handle. Basically, this is yet another way, next to the production of AI/robotics, in which we ensure our own demise. The nanotech researchers’ engineering mindset and its enthusiasm for pushing self-assembly and evolutionary methods render nanotechnology dangerous beyond sporadic health, safety, and environmental concerns. Say hello to an exciting new, globally existential risk.

Additionally and ironically (given its especially hazardous status), nanotechnology is afflicted by relatively low ethical standards and an almost unscientific, alchemist culture. Bio-research is close to medicine, and an ethically caring culture dominates that research community; the high status of advanced statistics to avoid bias and concern about unintended consequences are central to such fields. Compared with sciences like high energy physics (where everything is checked and criticized as a rule), chemistry, or medicine, nanotechnology comes in far below even on such basic measures as regard for the scientific method, say reproducibility. There is an ‘only-good-news’ attitude in nanotech peer review, where critical work is seen as worthless or treason rather than as necessary for good science.

There are ethics committees and conferences on governance and oversight and all the usual nonsense that is primarily there for show and for the careers of those involved, but none of it can have any substantially directing effect on future evolution, which will go on integrating the technological substrate into the biosphere. Human participation, previously vital for advanced social and technological evolution, is now turning from being an already overabundant resource, shaped and discarded much like any other cheap material, into something that rather resembles a polluting byproduct. In a large fraction of futures, it will stay around abundantly for a while, like oxygen did, but there is no consistent future in which humans will not have been altered into basically something else entirely (relative to our current self-identification).

Comments

I tried to keep this beyond good and evil, but since it developed out of an extended abstract for a conference (rejected, of course, no surprise), some of the academically required anthropocentrism (danger, existential threat, blah blah) has survived. If one wanted to be more objective yet still squeeze it into ethical terms, surely it is positive, since all suggestions of improvement agree on the desire to get rid of humankind as it is now.

The closest it's come is its use of metallic ions, but there's a world of difference between metallic crystals and ions.

Human participation, previously vital for advanced social and technological evolution, is now turning from being an already overabundant resource, shaped and discarded much like any other cheap material, to something that rather resembles a polluting byproduct.

I guess I disagree with the idea that "nanotechnology is afflicted by relatively low ethical standards and an almost unscientific alchemist culture" and that "there is an ‘only-good-news’ attitude in nanotech peer-review". I also don't like that you've indicted oversight.

There's a fear element here that you are stoking just like those do for biotech. The fact is that this is precision technology. Some places where it works well (that I'm familiar with) are in food packaging and in delivery of compounds into plant materials. The materials can be formulated to break down in the environment. This can be good stuff.

We tried to publish a paper on polyhydroxylated furaneols a few years back and had lots of resistance. We showed (generally positive) effects on various creatures in aqueous environments (Gao et al., PLoS One, 2010). It was hard to get this published. We tried the big journals and nobody would look twice, even though it is interesting work.

As with any time we gain the power to create technology, we need to cultivate the wisdom to use it properly. Science is good stuff. Let's keep creating, using it properly, and learning how it can help the human experience.

I guess I disagree with the idea that "nanotechnology is afflicted by relatively low ethical standards and an almost unscientific alchemist culture" and that "there is an ‘only-good-news’ attitude in nanotech peer-review".

Well, I only put one link since I have my modest pants on today, but I have written pieces before precisely on these charges (look at Nanotechnology section here) and also discussed it for example in the archive version of my memristor paper. I have no interest in stoking fears.

Held in check by lack of local processing power for the time being. Next stage - a race between nanotech evolutionary methods and AI evolutionary methods. Then comes an unholy symbiosis where nanotech subsumes AI and destroys bio-intelligence before Gaia wakes up and decides that life isn't worth living anyway. Fermi's paradox solved.

I was hoping not to read that. So a self-assembling and evolving nanotech is completely beyond our control; once released, it will move everywhere very quickly. Sure, we could have a pre-prepared antidote, but if we are dealing with a self-assembling, evolving device, the antidote may be beaten by the evolving process and then we're stuffed. That's what viruses and bacteria do; essentially it is a numbers game, which means any antidote will itself have to be self-assembling and evolving. Out of the frying pan into the fire.

Why they would resort to an evolving type is beyond me, because evolution is extremely messy: the solutions obtained are typically adequate, not optimal; there is a very high error rate, which must come with an evolutionary process; and there will be too many unintended consequences to count. We can have no clear idea of the future implications if some forms of this technology are let loose in the ecosphere.

Why they would resort to an evolving type is beyond me because evolution is extremely messy, the solutions obtained are typically adequate not optimal, there is a very high error rate

Because the parameter spaces have so many dimensions that there is no other way to optimize efficiently. The solutions may not be optimal, but they are far superior to other designs. That is why I have now started on evolutionary methods myself. It also sells well, of course, and fits the characteristic laziness of physicists like me. Why should I think hard if I can instead just let it evolve and then rationalize the result as if I had known all along?
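The appeal of evolutionary search in high-dimensional parameter spaces can be illustrated with a minimal (1+λ) evolution strategy. This is only a sketch: the fitness function, dimensionality, and mutation rates below are hypothetical stand-ins, not anything from a real nanotech design pipeline.

```python
import random

def evolve(fitness, dim=50, generations=200, offspring=8, sigma=0.1):
    """Minimal (1+lambda) evolution strategy: mutate the parent, keep the best."""
    parent = [random.uniform(-1, 1) for _ in range(dim)]
    best = fitness(parent)
    for _ in range(generations):
        for _ in range(offspring):
            child = [x + random.gauss(0, sigma) for x in parent]
            f = fitness(child)
            if f > best:  # accept only improvements: adequate, not provably optimal
                parent, best = child, f
    return parent, best

# Hypothetical fitness: a smooth peak at the origin of a 50-dimensional space.
peak = lambda v: -sum(x * x for x in v)
solution, score = evolve(peak)
```

Nothing here "thinks" about the 50-dimensional geometry; random mutation plus selection climbs the landscape anyway, which is exactly the trade-off described above: no guarantee of optimality, but usable progress where exhaustive search is hopeless.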

This was one of Drexler's original concerns, and this being the forecasted decade of the assembler, it's likely close. I've been waiting 25 years for this, and his forecasting has been pretty good. My prediction was that we'd see medical and material applications first, and in various forms we have had those for a handful of years now.

But he did outline ways to contain the gray goo; are researchers ignoring the wisdom of requiring components that are not naturally available? I suppose evolved systems leave open the possibility of their jumping over the walls we erect to contain them. We must demand that researchers treat this technology like the radioactive dynamite it is. And like dynamite, we can use it to change the face of our planet, for good, bad, or some of both.

One assumption seems to be that if you want to make a lot of something, then self-replication is the way to go. Biology works that way; however, is that the best? We build factories that build stuff, and use stuff plus humans to build factories. Would robots building factories that make robots and also stuff be faster and easier than making self-replicating robots that also make stuff? It would seem easier to design, and it is what our technology is already set up for. It would, of course, also be easier to stop.

Being able to manufacture from the atomic scale up offers abilities that cannot be had any other way. It will, if we can do it, change the world: you'll be able to grow just about anything in a vat, out of cheap materials, with almost no human labor; it will have properties we can only dream of, and cost next to nothing. Things will be mostly IP, and, as we see in software, lots of people give their IP away for free.

Sascha might have a better list of references, but I suggest you start with Drexler's Engines of Creation.

Sounds like that's a book I should read. That car seems a bit far-fetched; you still have to obey the laws of thermodynamics, etc. I sure can see the business about automation making things cost next to nothing, with many potential resulting problems of course.

Thanks Sascha, I wasn't aware that there was so much potential conceptual space to explore that it is unrealistic to do so. I thought nanotech would have much more tightly constrained opportunities for design so I'm missing something. My original concern was about one of these self-assembling, evolving nanos escaping from the lab. I had in mind the incredible containment strategies that must be used in virology labs; I am still struck by the speed and pervasiveness with which molecules can scatter.

My evolving thought processes turned towards antibody production. Our bodies will produce billions of antibodies in the hope of finding a match. It is an incredible thing, a numerical race to produce an antibody and then instruct T cells to go a-hunting. But I stumbled upon something odd: how does the body know when it has produced the right antibody? What signal emerges when, out of all those billions, a few shapes identify a pathogen-relevant protein among all the others? When that antibody finds an appropriate shape to fit into, it must release a signal of some kind. Some studies suggest it generates nitric oxide, which can be an inflammatory trigger but diffuses too quickly for specificity. How do we know when an evolving something has efficiently evolved? Sorry, OT, and it's not like I need another problem to think about ... .

I wasn't aware that there was so much potential conceptual space to explore that it is unrealistic to do so

Thinking inside the box is one thing you could never accuse Sascha of :)

I thought nanotech would have much more tightly constrained opportunities for design so I'm missing something.

Well here's my two-penn'orth.

Evolutionary systems are Turing machines... in that anything that can be done at all can be done with a minimalist infrastructure given time and space. I agree that little gears and ratchets in small assemblies are unlikely to evolve into autonomous plagues any more than homogeneous chemical mixtures are. However, once those self-catalysing molecules start to stick together... well here we are. The same must be true for nanobots. Once they start breeding: watch out!

how does the body know when it has produced the right antibody?

I think the antibodies are expressed on the cell surface rather than simply poured out into the surroundings. So when there's a lock the cell knows and this triggers the immune cascade. So no deep mystery, just another awesome mechanism to unravel.

How do we know when an evolving something has efficiently evolved?

The fact that it is still around is a clue :) This is probably the most difficult thing to discuss with anti-evolutionists. In a loose sort of way, organisms which survive can be represented by peaks of fitness on a landscape of phenotype-space. It is not immediately obvious how an evolving organism crosses a valley between the mountains. Much is made of this by neo-creationists such as Dembski with his pseudo-mathematical calculation of the probability of such crossings. Of course the whole point about Darwinism is that mountains are actually connected by ridges - if they are not, then the evolutionary path is impossible.

So the answer to your question is that all viable systems have already evolved efficiently. They may not be optimized but they are good enough. Making systems evolve for a purpose other than natural survival generally involves sneaking survival in by the back door. You want a green rose? Cull the least green and breed from the most green of your stock. If the underlying infrastructure supports green roses, your action creates a ridge. It won't be long - maybe a few million years :) - before they appear in the florist's.
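The green-rose example can be sketched as a toy selection loop: cull the least green, breed from the greenest, and survival has indeed been sneaked in by the back door. The "greenness" trait, population sizes, and mutation noise are all hypothetical illustration values.

```python
import random

def breed(population, survivors=10, litter=10, noise=0.05):
    """Cull the least green, breed from the greenest: survival via the back door."""
    population.sort(reverse=True)      # greenest roses first
    parents = population[:survivors]   # the rest are culled
    # Each survivor yields a litter of mutated offspring, clipped to [0, 1].
    return [min(1.0, max(0.0, p + random.gauss(0, noise)))
            for p in parents for _ in range(litter)]

random.seed(1)
stock = [random.uniform(0.0, 0.2) for _ in range(100)]  # barely green to start
for generation in range(50):
    stock = breed(stock)
mean_greenness = sum(stock) / len(stock)  # climbs towards 1.0
```

Each round of culling is the breeder's "ridge": as long as the mutation noise can reach slightly greener variants, the population walks up it generation by generation, with no foresight required.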

p.s. A while back I commented in a blog here about the landscape model. I mention this purely as an excuse to inflict one of my pretty pictures on everyone again :)

Dembski went wrong from the very beginning by assuming that the fitness landscape was a deep black pit with just the occasional pillar sticking up above the level at which the species is viable. Thus he made macroevolution - hopping from isolated peak to isolated peak - impossible in principle. What he studiously ignored was the fact that there's always room for improvement - especially if the landscape is subject to earthquakes (changing conditions!). Thus what look like isolated peaks from which it is impossible to escape (left image) always have a few narrow ridges extending away from them, which, of course, eventually intersect with others, making the whole landscape a web of intersecting ridges (right image).

Since there is only one total landscape, do I have to thank god almighty for having made it a landscape full of ridges? In other words: this argument against creationists is so weak, and was always so prone to backfiring, that those scientistic angry atheists should perhaps think twice about employing it. To argue from a preexisting realism out there is never going to be a good argument, because it is wrong and hands victory to the Dembski types on a silver platter.

As I said: "This is probably the most difficult thing to discuss with anti-evolutionists." However, Dembski's blunders are very crude and are only accepted by naive creationists because he wraps them up in pseudo-mathematics. Of course I agree you should not try to defend evolution if you haven't a clue why so-called "intermediate forms" are just as viable as so-called "species".

And, since you ask, yes, you should thank God for it - even if it is tautological :)

Thanks Derek and @Sascha: Can you please expand more on the details of this existential threat, because I thought things like the grey goo possibility had been discredited. Soil bacteria, for example, fill many different niches, and there is no single solution, as far as I am aware, that is better than all of them. They are also part of a symbiosis, etc., so any newcomer that out-competes an existing organism and doesn't play nice will be self-limiting, because it will kill the organisms that make its intermediate products. Or do you mean something a bit bigger, i.e. some cellulose-munching, perhaps tiny flying creature that is a lot more efficient than existing bugs and resistant to all our available chemicals, proceeding to eat all plants?

I agree that our nanotech could evolve, but do you really expect us to just start out making a more efficient evolutionary system than billions of years of bio evolution? Any human tech system that I am aware of has a fitness landscape much like the one on the left: small changes destroy it. Take software: with respect to random change it is like the first example; however, it is like the second, with ridges, when humans purposely change the code. Any human tech system I am aware of looks like (2) only with respect to changes made by highly skilled operators.

It seems the "grey goo" versus green goo dichotomy shapes your interpretation. There is only one goo; evolution is color blind.

"do you really expect us to just start out making a more efficient evolutionary system than billions of years of bio evolution?"

You really think that you do not belong to evolution? When did god come down and give you that special glow? We are parts of the evolving goo, a way nature absorbs metals into the biosphere. You are stuck in the nature/culture dichotomy. Evolution cannot know such. "Existential risk" relates to the usual discourse between humans, what most are interested in, how they can perceive the text as relevant. I hoped the meaninglessness of "risk" would be obvious by the end of the text.

Not quite sure what the point of your article is then, with "superdangerous etc" in the title. No, I don't think that I am not part of evolution, but why is evolution continuing with nano-tech somehow special? I can quite easily see that "AI" means us changing beyond what we would identify with, which would be considered existential (and requires such control over matter as to be called "nanotech" as feature sizes shrink), but I fail to see what the ethical standards, or lack thereof, of nano researchers have to do with anything. They aren't driving the tech that will make the AI we are all afraid of, as far as I can see; Intel etc. are. (Those tech companies probably sift through the "science" results the same way pharma companies sift through natural remedies looking for potentially useful substances.) You seem to be specifically talking about something other than the usual "AI" scenario; what exactly, I am not sure. You specifically mention self-assembly, and I took it to mean something other than the usual robot factories making robots smarter than us. Yes, such will require nanotech, but if it just required micron feature sizes to outdo our brain in every way, then it would still be the same AI scenario.

The point is to seem as if making a point while making the opposite point. This has always been done in such ways with the unwelcome points.

why is evolution continuing with nano-tech somehow special?

Via tech, stuff that was not used before, for example metals, becomes more actively employed. Humans find special what they happen to be involved in. It is interesting to figure out what is special from more objective perspectives.

what the ethical standards, or lack thereof, of nano researchers have to do with anything.

IEET and universities and so on spend lots on conferences about such. In the big picture, such details mean little, but humans evolved to care, that is how social structure acts in the physical world. "Ethical standards" are interesting emergent phenomena.

They aren't driving tech that will make the AI we are all afraid of as far as I can see

I am not sure what AI you mean that we are all afraid of that will be made (a narrative that deceives about the present state of information technological integration). Reserving the label "AI" until a terminator knocks at our door is part of our smooth assimilation. Why should advanced structures be less evolved and integrated than usual social structures, which usually cull us by us killing each other? How many futures will have nanorobots 'very directly' involved in assimilating/removing us is a detail.

You specifically mention self-assembly

Everything made by us is self-assembly in the biosphere, but more localized self-assembly is more independent of the involvement of specific systems like humans, so humans fear it. Again, fundamentally, there is no point if you do not already believe that there is one.

Again, fundamentally, there is no point if you do not already believe that there is one.

It's not entirely clear what the point is even if one does!

I hope you simply mean that nanotech is just another substrate for ubiquitous evolution and that discussing it in anthropocentric terms like "hazards" is a distraction from seeing it objectively. To go further and claim some sort of inevitability in the final outcome ...

but there is no consistent future in which humans will not have been altered to be basically something else entirely

... would be the height of hubris: we do not know the equation of state for evolution, we certainly haven't solved it, we cannot possibly know what attractors there are.

We don't know what the attractors are, but we have a pretty good idea about some things. Whether it is "I" desiring to make myself smarter and my brain go faster, or evolution doing it and "I" kidding myself that we are in control, something sure is pushing this. Faster, smarter, and more control over my feelings/desires seems like what humans or similar are heading towards. Not so sure about humans being something else entirely. If I desire and achieve the ability to change my own source code, surely the result of me doing so is still me, even if it may not look human to an outsider. It gets a bit strange if you try to say that evolution has caused me to change my own source code and that, in spite of feeling myself make the decisions, I am not the same "I" anymore.

I don't see how we can know how much of an attractor rationality or consistency is without actually going down that path. After all, I can choose to make myself not care about rationality; then what? It gets a bit bizarre and paradoxical as far as I can see. What if I desire to always feel X about Y but know that in the vast majority of paths where I get more intelligent, rational, and logical, I won't? I then make virtual, more intelligent copies of myself with slight modifications, run them in virtual worlds (perhaps just have them think for a while), and see what they desire at the end, when they have finished self-modifying. Some of them may end up at different attractors (e.g. I feel X about Y, desire to feel X about Y even with unlimited resources, and, knowing that system B feels Z about Y, I still don't care and choose to keep feeling X about Y; likewise, system B still feels Z about Y even with complete information about me).

I then choose to make myself the more intelligent version of myself that still feels X about Y, in spite of the fact that most more intelligent "me"s will feel differently, clustered around the bigger attractor that feels Z about Y. Is this identity-preserving?

Seems to me the problem of what a self-modifying system will do to itself is intractable, but Sascha doesn't think so. Wish Gödel was still alive!

To go further and claim some sort of inevitability in the final outcome ... would be the height of hubris: we do not know the equation of state for evolution

Of course it depends on what counts as a basically unchanged human and on the timescale (the future, but not just two years from now), but looking at the present, you will agree that this is not, speaking in terms of punctuated equilibrium, one of those phases where nothing much happens for a long time. We are so very unadapted to the environment as it suddenly presents itself to us (relative to biological timescales) that there is just no way this is not basically the end of us. Strictly wrong (not just hubris) would be to claim to know details about whether and how, because there are many parallel futures.

This has to be the point at which I say "have you read _Last and First Men_ by Olaf Stapledon?" I mention it because the story has humanity swinging through any number of variations largely because each time the race notices that its existence is threatened, it simply designs a new version based on the old. You could argue that this is the evolution of anti-evolution. Mind you, in the novel it was helped along by planet-sized brains to work out the details. I think one of them was called Sascha :)

I think the antibodies are expressed on the cell surface rather than simply poured out into the surroundings. So when there's a lock the cell knows and this triggers the immune cascade. So no deep mystery, just another awesome mechanism to unravel.

Thanks Derek. The cell body, or various elements of a virus, can act as epitopes. Then there are pathogen-associated molecular patterns, which require no antibodies; rather, the innate immune response has evolved to identify certain molecular patterns specific to pathogens. That I can understand, but the recognition of protein fragments presented by MHC 1 and MHC 2 on the surfaces of our cells is bloody amazing. I suspect it is a misunderstanding that these protein fragments, and they can be very small fragments, are released by these receptors when an immune cell glides over the cell surface. Other modes of recognition are the common findings that some intrinsic proteins, especially some heat shock proteins like hsp60, grp78, hsp90, and hsp70, and even cytoskeletal molecules like actin, can act as danger signals when present in the extracellular matrix. Actin and other cytoskeletal molecules are usually all joined up, so I think it must be breakages that initiate the danger signal. It might be the case, though, that it is not these proteins themselves but rather the proteins that become attached to them that initiate the danger signal. However, hsp60 in particular, which is expressed in huge quantities under cell stress and is strongly associated with autoimmunity, does appear in and of itself to be a danger signal.