Aren't transhumanists committing the Jurassic Park fallacy?

Given that even the smallest disruption or perturbation in a complex system can be amplified, and given that so many important aspects of the mind-body interaction in human medicine remain poorly understood, it seems that moving forward with the technological enhancement of human beings—ranging from putting computers inside us to putting us inside computers—is to court the same kind of disasters we always get when we tinker with things we don't yet understand.

Apr 23 2011:
Complex systems abound. We make changes every day, and many of the consequences are unforeseeable. Nevertheless, most systems rock on... e.g., we alter our body chemistry when we eat, and it usually turns out OK.

Complex systems have emergent properties that often act to help the system adapt to and exploit changes. It's true that you can sometimes throw a system out of equilibrium and/or cause a phase change, but fortunately for us all, that doesn't happen often. More typically, changes help the system grow and evolve.

It is inherently impossible to know all the ways any complex system will change as a result of some alteration. Paradoxically, the lack of change also has consequences. Not making a change could turn out to be a kind of sin of omission.

In theory, we could "leave well enough alone" but, realistically, that just isn't going to happen. Short of global catastrophe, technological innovation will continue apace. In fact, it will continue to accelerate. Failing to adapt socially to this reality poses one of the greatest dangers. We need innovation to cope with the changes we've already made.

Fear of the unknown is natural (at least for the time being), but it is not inherently wise. Caution isn't always advisable. Sometimes, what appear to be risky behaviors are in fact the very things that lead to growth, happiness, and prosperity.

We must continue to do what we've always done... make educated guesses and take precautions that mitigate anticipated negative outcomes. However, that must occur in parallel with the process of experimentation and discovery. It cannot all be done in advance, since we simply can't anticipate every threat.

Apr 23 2011:
So on your reading, it sounds as if there are no lines too dangerous to be crossed, and never a point at which it is too soon to cross one. My argument is primarily about the latter, with a weighting of variables based on the former.

Yes, sometimes what appear to be risky behaviors turn out well; the point is that sometimes they turn out to be absurdly destructive.

May 8 2011:
However, if we allow ourselves complacency in matters of progress, we would soon be significantly outmatched by those who, at the moment, have very little to lose and are willing to perform high-gain research. If you don't follow progress, you will be left behind.

Apr 22 2011:
I think this worry could be applied to any scientific or technological progress of any kind - and given the amazing success of SCIENCE and TECH as they get more and more complicated, your worries are completely unfounded.

Computers, more complicated over time? Yes. But we learn how to work with them, create them, and fix them at the same pace, so they have become a crucial part of everyday life and have taken over many tasks that people like you probably fretted about to begin with.

Here are some amazingly important tasks done by technology that has gotten "more and more complicated" over time, off the top of my head:
Commercial Jet Autopilot
Power Plant Management systems
Guided missiles
Computer Security

Is your concern that introducing the biological aspect will create this "less than 100%" control? By the time they figure out HOW this technology works and how to implement it (it won't be that soon) they will be a lot closer to 100% by necessity - in other words for the Singularity to work we will certainly have to be much closer to 100% knowledge of both the biological and technical aspect.

As they get closer to mapping the brain, the proteome etc I think the first opportunities for transhumanism will show themselves as the simplest places that we understand the most and we will work from there.

Apr 22 2011:
Besides "guided missiles" (which Einstein forewarned would be the result of science applied to war, and which is not what science, even computer science, should be meant for)... everything else is nicely put!

Apr 22 2011:
Yes, like guided missiles, some genies can never be put back in the bottle. How do you propose we stop governments from using your superhero technology (mentioned below) to build killing machines on a scale the likes of which have never been seen? Will our idealism about what science is 'supposed' to be used for save us?

Apr 22 2011:
Interesting line of thought, but none of your examples reflect the manipulation of existing, permutable, adaptable organic systems. The organic-systems aspect is exactly my point, and the rampant debacle of GMOs is, by contrast, a relevant sample case. Once we start attempting things like elective genetic selection/tinkering and efforts at cybernetic or nanotechnological enhancement, we are talking about a whole different scale.

Be clear, though, I'm not arguing in essence about whether we should eventually consider implementing some of those adaptations or enhancements. I am arguing that we are nowhere near that point yet.

The current contamination of the region around the nuclear power plant in Japan is another interesting case in point. We can 'think' we have adequate safety nets in place all we want. Sure, humongous earthquakes and tsunamis are rare, and lots of nuclear plants are online and NOT causing disasters. The point holds that in a sufficiently complex system, we can never account for all the factors, and if we get it wrong in an organic system, we could all be screwed. At least the nuclear plant is still a mechanical system.

Shall we talk about how badly things will go for the "successful" world of science and technological infrastructure if something that even "acts" like an organic system—in this example the Stuxnet virus—gets loose and starts to spread, mutate, and cross-pollinate with other computer code?

Apr 26 2011:
During Harvey's talk my eyes grew wide. For example, I am pro stem-cell research for regrowing organs and whatnot, but to alter DNA to "improve" (not the right word) the human body is, simply put, ignorant.

I feel that the people with the ability to create such a change in biology cannot fully comprehend what human growth means.
I think that we are nearing an era in which technology can profitably help us regain our view of life, and from there our position in it.
I believe that what we are lacking, now and in the future, is already within our (idle) capacity. We should not forge or force, via unnatural means, growth that we are not willing to undertake ourselves.

I wonder how long we, human animals, will keep desperately seeking a scientific/technological solution to the fundamental global challenge: we are way too many, we consume way too much, and we are way too ignorant to sustain a future for all species on this planet.

This said, I truly doubt that living longer and healthier will show us a way out of the apocalyptic equation.

May 8 2011:
I think the original point is whether technology will outrun our wisdom, and what would happen then. Scientists are blinded by the question of how to create generations of transhumanists, but fail to see how that society, full of gaps (including a modern 'genetic gap'), would function. Who's in charge? The problem is, first, that the brave new world undermines society's sense of value: talents, intelligence, and creativity would no longer be acquired by self-training, yet the masses are not Deltas, and we are not conditioned to accept that disparity, which threatens the social structure. Back to Richardson's point: this scary future will not happen if, instead of rushing to create cyborgs, we upgrade the general standards of human beings first.

Second, in accelerating evolution we are building a society without history, without trials and errors of its own, whose collapse would be disastrous. By compressing the course of evolution into a handful of years, I doubt we can jump safely out of the current equipoise.

Time saving equipment allows us to improve ourselves in other ways. It is highly unlikely that everybody would end up with equal abilities, mainly because people have different aspirations and would focus on different things.

May 3 2011:
I think transhumanists are misled people. It is a kind of religion, because one places his or her faith in a potential future that is not actualized through fact. It denies the facticity of our current existence and replaces it with fabricated ambitions.

If such a future indeed comes to pass, that would be great; maybe technology can solve all our problems. But maybe we need to work on actualizing such a future now, as opposed to placing our faith in it!

Although they are misled, you would find it great if those ideals came to pass?

Calling transhumanism a religion is like calling humanitarianism a religion. They are systems in which you can put faith but in no way are they misguided. Our current existence is what transhumanism wants to improve through technology and understanding. How can we get to point B from A without understanding B first?

The actual work to get us to point B is vast; one must consider what is preventing it before proceeding. Perhaps if these methods were more publicized, they would gain more support and interest. Faith is not a bad thing; it keeps you on a course. Faith is only bad if the source it comes from is poor.

Putting transhumanism on the same plane as an Abrahamic religion is ridiculous.

Eastern religions accept science as something that only benefits their faith systems, so their faith systems inspire science. These are not counterpoints or opponents; they have nothing to do with one another. Faith can drive great things if focused properly; when it isn't focused properly, the result is fundamentalism, not acceptance of change.

I think you need to read more into transhumanism, because it is going to be one of the things that truly gets us to point B in the world.

May 8 2011:
How can you have a fabricated ambition? Ambition is by definition something you want to get done in the future. Besides, our current faculties are acknowledged, but are not considered the end state. Is self-improvement wrong because you're not perfect at the moment?

None of us are perfect, but if we become better, is that not an improvement? And didn't this discussion start by assuming improvements are wrong because we have an imperfect understanding of potential risks and benefits involved? That sounds like an implicit perfect solution fallacy.

May 10 2011:
Yes, the future is a fabricated idea. It is not something that is factual. Self-improvement is not right or wrong; I don't want to make any moral judgements on this topic. I'm just saying that ideas about our future can be authentic or misleading.

If we currently think and behave like humans, it's not authentic to posit that one day we will somehow surpass our own bodies and minds. Nicholas talked about immortality: how would immortals survive in a world with limited resources? This is our facticity, which limits our capacity to make certain futures plausible.

May 3 2011:
Wondering whether Hippocrates was considered what you call a transhumanist of his time.

To my feeling, Eric, there has always been a big difference between the speed of "scientific innovation" and the "moral, ethical" evolutionary development of society, so we are always in a dilemma between these two.

Scientific innovation many times brought revolutionary change, while

"moral & ethical" standards always went through a slow-paced, evolutionary kind of acceptance of the revolutionary changes that innovation brought.

The ultimate literature surrounding transhumanism is indeed the idea of immortality: through science and technology we can understand the means of living forever. Living inside computers is complex, but it is not the main focus of the ideology at all, merely a suggestive philosophy. However, I find the philosophy around understanding the cell and developing the means to manipulate it (not just copy it) to benefit humanity great, and I cannot see too many negatives in that!

Let's put it this way: instead of giving you a robotic arm as a replacement, we grow you a new arm and give you robotic eyes that allow night vision. That is more transhumanism than cyberspace theories. Transhumanists want only to give benefits to the world through intelligence.

Indeed, there are trials and errors in everything that involves discovery! Also, if we could really bring back dinosaurs—which I think we would if we could—I do not think it would be purely in the hands of some rich tycoon alone; I am sure governments would be involved.

I like this thread because 1. it illustrates how science fiction is becoming more of a reality each day, and 2. it will give those who consider themselves humanitarians and technology supporters a new bunch of philosophies to consider!

You're right about there always being dangers. I guess my concern is that our tendency as a science-enabled species is to get ahead of ourselves and roll out new life- and world-altering tech before we have the safety nets in place. Our repeated failures to predict and contain genetically modified plant species should serve as (but haven't) a good recent example.

Some of the biggest disasters in history have taught some of the most valuable lessons we learn, or should learn.

I find it fascinating that, with all our advances, the most basic of building structures is still an open subject. Imagine the potential of manipulating a cell into becoming whatever we wanted. People would be the superheroes that art has depicted throughout time.

In any system complex enough to be capable of permutation, variation, and recombination, predictability and containment are both less than 100%, and their distance from 100% is approximated by multiplying the ignorance factors of the system's elements.

The Ignorance Factor would be (100% minus how completely we understand an item/entity/mechanism/operation). If we understand only about 40% of how neuroplasticity allows the brain to re-wire for adaptations, and we understand implant rejection only to about 85%, then the effects on the system of a computerized brain implant would be a function of multiplying those factors (and a number of others).
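Taking the comment's formula literally, the model can be sketched as a toy calculation. The function names and the structure below are illustrative assumptions, and the numbers are the comment's own rough guesses, not real estimates:

```python
def ignorance(understood):
    """Ignorance Factor: 100% minus how completely we understand a component."""
    return 1.0 - understood

def combined_shortfall(understoods):
    """The comment's rule: the system's distance from 100% predictability
    is approximated by multiplying the ignorance factors of each element."""
    result = 1.0
    for u in understoods:
        result *= ignorance(u)
    return result

# The comment's illustrative numbers: ~40% understanding of neuroplasticity,
# ~85% understanding of implant rejection.
shortfall = combined_shortfall([0.40, 0.85])
print(f"{shortfall:.2%}")  # prints "9.00%" (0.60 x 0.15)
```

Note that under this literal reading, adding a well-understood component shrinks the product; a more pessimistic model might instead multiply the components' predictabilities and take the shortfall from that, but the sketch follows the comment as written.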

My point is that we are a long way from 100% understanding of an awful lot of elements of the human system, so starting to plug in technological enhancements carries an inordinate risk that only hubris would let us think we could predict or contain.