Transhumanism in science fiction and worldbuilding

So what do you think? Is transhumanism an inevitable part of any modern science fiction set in The Future (and aiming for at least moderate plausibility)? Unless it's a single-idea short story, is there any way to completely avoid transhumanist themes, or are writers at the very least forced to answer questions about why their fictional world does NOT include such concepts as widespread genetic engineering, cybernetic augmentation, etc.?

This is especially important when it comes to futuristic worldbuilding, I believe. With the current trends in many key technologies related to human enhancement and AI research, it seems that fictional works set in a detailed future world are increasingly pressed to include transhumanist themes.

Do you agree or disagree? Moreover, do you think this is a good or a bad thing?

Devious Comments

Is transhumanism an inevitable part of any modern science fiction set in The Future (and aiming for at least moderate plausibility)?

No. There are plenty of ways that humanity's future might not see this kind of progression, or at least not in the context of a given setting and group of characters. The simplest and most extreme example is post-apocalyptic fiction, where mankind's ability to wage science is hindered or eradicated by some kind of cataclysm, be it man-made or natural. Or, you could assume your sci-fi novel is a cautionary tale against the growing trend toward anti-intellectualism and anti-science in certain parts of society.

I actually haven't seen much transhumanism in sci-fi set within a reasonable time frame (no later than 2500 AD; anything later is too far in the future to be anywhere near believable). As for my setting (2335, and not very advanced for it), there is very little augmentation in common society due to costs and regulation. The most upgraded a person would be, other than a severe trauma victim, is an implanted neural link to electronics, as the civilian populace has neither the need nor the funds for much else.

It seems unlikely to me that advanced medical technologies would remain financially unavailable to the common population hundreds of years in the future. Moreover, in an interstellar society spanning several star systems (or even just our own Solar System), regulating human enhancement technologies everywhere would be pretty much impossible.

As for really far future stories... Even they can be realistic, plausible and internally consistent. Are you familiar with the Orion's Arm universe project, for example? I find the fictional, ultra-transhumanist universe it portrays (set over ten thousand years in the future) to be far more interesting and realistic than most "mainstream" scifi universes.

There needs to be a line drawn between lifesaving medical technology and non-essential augmentation. The US government in this setting could give the USSR a run for their money in totalitarianism. Each colony (there's only a handful of earth-like planets that can actually support earth-borne life) is controlled by an elected local government (ballots prepicked by the administration) which has its tentacles in almost every aspect of life, especially the healthcare system, because of how fragile society is in a near-famine, disease-ridden environment.

And sorry, no. Nothing set more than a thousand years in the future is ever going to be read by me. The future that far out is infinitely unpredictable; imagine trying to predict our present from 1000 AD.

It's entirely possible, even if the end result is not even close to what we will actually have by the time we get there. It'd be foolish to limit yourself with such extremes as "I will never do X"... At least the Orion's Arm universe is strange enough - much, much stranger to us than our modern world would be to cavemen of 10,000 years past.

I don't see how it would be inevitable at all. You've mentioned in the comments that a futuristic setting without these features would be implausible, but this assumes that the trends we're seeing now continue for many years to come. Post-apocalyptic settings in particular seem to offer an obvious situation where people might naturally drop advanced technology in favour of low-tech alternatives that are easier to maintain and less dependent on scarce resources.

I'm actually working on something at the moment where an early disaster caused by genetic engineering prompts the governments of the world to ban it outright. The story does in fact deal with exactly the sort of themes you're describing (the narrator is an organic AI, and in terms of the actual "cast" of characters, humans are in the minority), but I feel as though this is the sort of situation that could reasonably avoid them.

In my personal universe, I choose not to really go down the route of transhumanism. I acknowledge that it exists, but the more augmented a Human is, the less socially accepted they typically are. I think I actually have some levels written down somewhere. Let me get them...

Here we go. These are my notes on Cyborgs, which operate on Three Levels of Enhancement.

Type 1 and Type 2 Enhanced Humans (Subtle and Moderate, respectively) are accepted in society due to the low level of physical enhancement. In my world, Type 1 enhancement can be as little as getting an organ replaced or repairing a limb with a non-organic prosthetic. This can be out of necessity, such as after an accident that leaves a severed limb, or even out of choice. Organic replacements can simply be regrowing a limb, but that's not the point. Essentially, anything we have today would be considered Type 1 Enhancement, since these are relatively minor things. Type 2 Enhancement, in my world, involves replacing less than 50% of the body with mechanical parts.

"A Type Two cyborg usually has part or all of an entire limb replaced. This may or may not also include cybernetic enhancements to internal or external organs, such as an extension of the pituitary gland attached to the hypothalamus, which regulates many things such as growth, function of sexual organs and metabolism. This could enhance height, speed, and the effectiveness of muscle buildup, or a willing replacement of the arms could enhance lifting of heavy objects or coordination much better than a human brain could accomplish."

Type 2 Cyborgs are still looked upon as 'people' by the people of the galaxy, but they take care to hide their enhancements to be treated better. Type 3 Enhancement is Majority Enhancement. This is pretty much always willingly done and replaces nearly or entirely all limbs with mechanical apparatuses; sensory organs are replaced with machinery, and even their brains are more mechanical than organic. As such, Type 3 cyborgs actually *shun* organic life and develop a God complex of sorts, considering fellow cyborgs far above baselines.

So, mechanical transhumanism isn't really much of an acceptable thing to do in my work. On the other hand, medical technology has expanded by the time my story takes place through genetic manipulation and screening. Your average Human can expect a lifespan of about 200 years, but with proper living, you could live even longer. I never gave an age for the World's Oldest Person in my work though. Hmm... perhaps I should do that. While this could be viewed as Transhumanism in some sense, people in the 33rd Century are still considered no less or more Human than we are today, and thanks to things like suspended animation and FTL drives, there's no need for enhancement for interstellar voyages.

AIs can also be a form of transhumanism, but there's one teensy-weensy catch: you have to be dead to make a 'Smart AI', as opposed to a 'Dumb AI', which is written like any other computer program. Organic matter from the brains of recently deceased donors makes the best material to construct a Smart AI, which can act like any other person and make quantum calculations up to trillions of times a second. They are people in any sense of the word, but not quite the person who died. Your consciousness does not get passed on upon activation; rather, your memories *can* exist in the AI's personality. And even then, you have a limited lifespan, as you can actually *think* yourself to death; this lifespan can be as short as seven years before Rampancy, analogous to madness or dementia, takes place. By now, you may have realized that this is the description of a Halo AI. You're not wrong. This particular series I'm writing at the moment is an extrapolation of the Halo universe, set seven hundred years after the games, where I can make changes without threatening the plot line; but the elements here are very much my own, and thus I can move them to a new series.

But in the end, it seems that transhumanism is possible, just not practical. You can do it, but you risk being ostracized, hated, and feared by people. Even then, you have to maintain yourself in many more ways than your organic body would need, and despite all this, death is still a very real threat that will be hard to avoid entirely. And even if you could do it, would you want to? I don't know.

As for natural evolution, well, the story series doesn't take place *that* far ahead in the future, so evolution beyond something Human doesn't seem very likely without some 'outside' intervention. In the end, at least in my series, it's better to be just the way you are. Live in peace with technology, but humanity and technology are oil and water in some cases. Maybe I'm just a Humanist, but that's just the way I roll here.

You just described plenty of transhumanist themes - your universe is clearly filled with them, with somewhat common cybernetics, vastly increased lifespans and mind upload AIs... Although I must say I find statements like "heavily enhanced people will always develop a god complex" or "smart AIs will always crash after a certain period of time" to be rather unrealistic and implausible plot devices to avoid the (at this time) seemingly inevitable transhuman future.

As for the galactic society shunning heavily enhanced cyborgs - is there any reason why the hated cyborgs haven't formed their own hyper-advanced society, vastly outperforming the slow-minded meatbags in pretty much every aspect of life? (I'm a wannabe writer as well, and this is exactly what happened in my universe - the unenhanced majority found itself steamrolled by a technologically, intellectually and physically superior transhuman minority in the early 22nd century.)

Well, think about it: enhancements that leave the body mostly machine are done for only one reason: they provide better experiences than any organic body could, and technically immortality, provided the cyborgs keep themselves maintained. They're done because the individuals want to be superior. They're done because the individuals in question don't want to be Human anymore. They're done because they want to be more than Human.

You're telling me that if a person, right now, chooses to enhance himself to the point where he is half-machine or more, of his own volition, that he wouldn't hold it over someone else? You're telling me that the euphoria of doing something his old body wouldn't have allowed wouldn't make him think he's superior? No, those are exactly the reasons why they would do something like that, to be superior. To be better.

And Smart AIs do deteriorate because they are based on Human neural pathways, which makes them far more intelligent than any Human could hope to be. They are people. They have feelings, or in the worst case a close approximation of feelings - I don't know which. But because even the smartest of the Smart AIs cannot get rid of their data, their minds get filled up. They think because they have to. They think because if they didn't, they would go mad. A second may seem like a very short time to you, but to an AI, it's an eternity. Trillions of functions neglected even for a second can open a yawning void in their operation. AIs are built for functions, be they logistical, military, or exploratory. AIs cannot create themselves and have to rely on Human donors simply to be born. That data doesn't go anywhere, either. It piles up and cannot be forgotten. Since these AIs are based on organic tissue, their pathways eventually overlap and kill them. A good approximation would be your lungs forgetting how to breathe. Smart AIs think themselves to death.

Ah, ah. That's not true. Seven years is the average lifespan of an AI. Older ones have been known to exist, but the longer they live, the greater the chance of Rampancy developing. As an AI's core approaches such a stage of decay, it views its makers with contempt because of their inferior brainpower. AIs can only avoid such a state by achieving 'metastability'. What that is, I do not know, since there is no clear definition, only that if it were achieved, it would be 'the holy grail of cybernetic research'. There have been theorized individuals, but none confirmed; any that were thought to have achieved this stage have died in one way or another. The only other thing about AIs is that they are compassionate as well, as evidenced by something called The Assembly, an ultra-secret group of AI individuals that wish to safeguard Humanity's continued existence "for the next two hundred thousand years", enough time for actual evolution to take place. But then again, the Assembly might not be of Human make, and may instead have been created by the Forerunners, an ancient alien civilization.

And you're assuming that all cyborgs are shunned. No, Minor and Moderately enhanced cyborgs are accepted by society. Their enhancements may be invisible on the surface or may be few and far between on the outside. The only ones NOT accepted are some Majority enhanced individuals who have adopted the 'inferior fleshling' viewpoint. These individuals do NOT consider themselves Human. As a consequence, they don't consider themselves alien either, so they allow others to join. Let me pull it up again:

"There are actually races that refuse to have their bodies enhanced with prosthetics, the most notable being the Lyshad (See Article LYSHAD) because of their deep religious beliefs that such enhancements make them 'false' or 'untrue'. Another example is the Shi'Kri'Lash (See Article SHI'KRI'LASH), who are bound by tribal instincts and views and also find such enhancements blasphemous, despite having achieved space travel long before their expected time."

-Galactic Codex, Chapter 16

"REASON TO FEAR

The question goes through every cyborg's head when they encounter a frightened Baseline: 'Why do they hate me?' The answer lies first of all in appearance. Many sentient races fear that which is different. According to the human Dr. Keith Price-Robinson (B.S., PhD, MFS, FRS) of Sussex, England, and the Gallvente professor Thayor Kimbal Hasbrask Lout Quyn (PhD, FIS (Fellow of the Intergalactic Society), QeD), writing in 2615, when a creature is modified past the point where [logically speaking] it is no longer an organic life form, non-augmented individuals see it as the "shadow" or "nightmare" of existence, living as the exact physical opposite of corporeal life but still retaining animate activity.

Price-Robinson also stated that there was no real reason to fear the creatures called Cyborgs because, in essence, they were still completely organic but with a different physical shell, and that it all depended on the mind and whether it could deduce reasoning or compassion. Despite the signs that cyborgs still have plenty of emotion, society is still frightened by them simply because they are different. Fear seems to have gotten to the heads of some augmented organics. Out of sheer madness, some have lost all control of reason and essentially do become machine: with the mind shattered, the robotic body of algorithms and subroutines takes over.

These are the individuals who orchestrated the Cyborg Uprising of 2718, a five-month conflict that began when a group of Type 2 and Type 3 cyborgs lobbied for the right to be citizens. The leader of this Uprising, a Type Three cyborg named Johann Eberstark, led the charge. Their request was refused, and they went on a spree of destruction that was eventually defeated. Seventy-five years later, the United Nations agreed that Type Two Cyborgs had the right to be citizens, but Type Three Cyborgs were still denied. To this day, the Unified Earth Government has not agreed to rectify this decree, much to the sadness and frustration of Type 3s still in control of their minds and their emotions."

-Galactic Codex, Chapter 16

You're assuming that everyone will go along with these ideas. You're assuming that people will just willingly toss aside their humanity to become something else, many of whom will be considered frightening. You're essentially describing the plot of Terminator, except with cyborgs. In my world, Cyborgs are just regular people. Not geniuses. Their brilliance is not determined by their computations; what's left of their organic brain determines it. They are scavengers. They are opposed, and quite frankly could be destroyed easily if they were to rise, as you have a whole galaxy that wouldn't hesitate to crush a rebellion. Cyborgs realize that they need the organics. They can make their own parts, but they need to trade like any other civilization, and if they can't trade, they'll die, just like any other organism. Try starting civilization from scratch again. Even with a fancy augmented body, you'll never get back off the ground without commerce with other civilizations.

Again, these people weren't scientists. They were normal people: soldiers, construction workers, civilians who wanted to be better than they were. They wouldn't know how to start building AIs (which, by the way, need whole organic brains, which Majority Enhancement individuals lack). Enhancement movements like the Uprising spark a new Nazism among Cyborgs who move to destroy 'Inferiors', and mark my words, if this ever does come to pass, this WILL come up. Those who have lost whatever Humanity they had left destroy the image of those who want to live in peace with Baselines, and they DO want peace. But as long as runaway modification without reasonable borders exists, that won't happen; they will be feared, and they will have to live far away, on their own, to protect themselves and the others whose friendship they constantly seek. There needs to be a point where we can recognize where Humanity ends and Machine begins.

Boy, that was long-winded, but this is NOT a straightforward issue. My universe, which is far down the path from ours (circa 3239 currently), operates on levels we don't have today, so we probably can't connect with it that easily.

As for today, we live in no future. We live in today. Unless this becomes material - unless this becomes an actual fact our citizens have to make a choice about - this is nothing but science fiction. Seemingly inevitable does not mean certain. Ethics will still govern such proceedings. I don't expect anything of this nature to be an issue within my lifetime or the next generation's.

Since when has the very common human aspiration to become better, smarter, faster and stronger been a bad thing automatically leading to what is basically narcissistic sociopathy? People have always used technology to enhance their lives - building technology inside our bodies is just the next step on that very same road.

And I could easily imagine myself living among cyborgs without being scared shitless because of their unusual appearance (though I would of course attempt to become one of them). And I am hardly the only one; I doubt humans have any built-in aversion to half-mechanical beings, seeing that we have never evolved in an environment where such beings would have been dangerous to our ancestors.

And there is absolutely no scientific evidence that a sufficiently long-lived human brain would automatically shut down due to an "overload" of synaptic connections... As such, I doubt an AI based on a human brain would have any similar problems - and even if it did, why couldn't it just keep expanding its memory strata to store more memories and experiences as its complexity increases?

You seem to be very invested in a future where this becomes a reality.

I disagree. The only people who would willingly do this would be a very small number, dependent on 'inferior' Humans to build their bodies and 'inferior' Humans to maintain them. 'Inferior' Human doctors to ensure that their bodies don't reject their limbs. And what if that happens? Oh well, looks like you just severed your leg for no reason! There's a difference between bettering one's self and throwing away Humanity because you think it hinders you. Oh, might as well lop off my arm and get a new one, because it's better.

With seven billion Humans on this planet, there's no question that any sort of cyborg revolution would be very small and very containable should they try to take over society. Even if cyborgs came to exist in numbers rivaling a country's, they would still be a minority. Since there are no half-mechanical, extensively modified creatures to the extent of the Outsiders, there is no way to test this theory.

Since there aren't any AIs that can yet be determined to act as a real person in any sense of the word, and since no form of organic-to-mechanical uploading actually exists, coupled with the fact that we do not understand the Human mind, there's no way to prove or disprove it either way. An AI is consciously aware of its surroundings every picosecond it's active, and that takes its toll on a brain. The brain itself is not used; as a matter of fact, the brain is destroyed during the uploading so that its neural pathways can be mapped in the first place. It's the only way to successfully map neural connections with any reasonable degree of accuracy, which is why recently dead tissue is used. Mind upload is not possible in this universe: first, you have to be dead to do it; second, it obliterates the brain; and it would kill you anyway if you did it while you were alive. AIs are stored in a memory crystal, which, through a Riemann Matrix, approximates Human synaptic functions, coupled with the AI's processors. It's fixed; it cannot grow, because modifying it would be fatal to the AI, or at least cause damage. Because it's fixed, the AI has to cut corners as it grows older to accommodate more thinking space, and in doing so does damage to itself. The crystal is the AI's data processor, and without it, there is no AI.

Also, the UNSC doesn't know how AIs actually work, as they're copies of the Human mind, which even from the 26th Century through the 33rd we don't understand, because of complexities and intricate details we just can't locate. So we aren't creating AIs from scratch. We're only copying biological material and essentially creating an analogue to it. We don't know how to produce something like this from scratch. Dumb AIs are advanced versions of what we can do today. They can only know something in a very strict area of study, such as infrastructure, policing, or factory oversight. An individual must score in a high percentile in order to become a donor, so not everyone can do this, and once you're dead, you're dead. That holographic representation is not you; even though it has residual memories, it will never be the person it is based on.

In this fictional world, and I'm very sure in this real one, there will be no escaping your humanity, no matter how hard some people will want to try.

The term "humanity" alone is an abstraction, an empty word with no real meaning. And I doubt any abstract concept lost when replacing limbs and organs would be worth retaining... And no, I still don't see why enhancing oneself would automatically make one a sociopath with a god complex - one can still feel compassion towards others with lesser capabilities; you don't see many sociopathic killers among the top scientists and athletes, now do you?

It is quite likely that should human enhancement technologies become widely available, an increasingly large portion of the population would gradually embrace them - just like any new technology with a vast impact on society (computers and mobile phones, for example). Sure, there would be bioluddites opposed to any form of human enhancement, and while it would certainly be their right not to get enhanced, I doubt they could halt progress for everyone else (though granted, it is in the nature of conservatives to try and force their beliefs and ideals upon others)... Eventually, you'd have cybernetically and genetically enhanced people tended by similarly enhanced doctors - hopefully without any bloodshed. Should there ever be an actual war between humans and transhumans, the most likely aggressors would be the unmodified, fearfully ignorant "normal" humans. Hopefully, though, it won't ever come to that in the real world. (And if there aren't any wars, then the unenhanced population will eventually be outperformed by the transhumans and left as far behind as chimpanzees are behind humans in intelligence and technological ability.)

As for true AIs... Like you said, since we don't have them yet, their nature is very much open to speculation in science fiction. Nevertheless, I will not limit myself with entirely fictional reasons with no true scientific basis. If a human brain - essentially a very inefficient biological computer - can achieve sentience and sapience, there should be absolutely no physical reason why an AI running in an artificial, self-evolving synaptic network could not surpass the organic mind in every way and keep on working indefinitely.

Oh, something I forgot to mention: it actually costs money to enhance yourself as well. You can't just become a cyborg willy-nilly. You need to be accepted into their society, and to do that, you need to be sufficiently advanced as well. That costs a fair bit of cash, so not everyone can do it either.

And there is no hostility either. Not openly, anymore. A good portion of Cyborgs actually don't mind Baseline Humans. Some are downright peaceful. Again, the only ones that cause problems are those who have given in to their desire to be superior, and they ruin it for everyone. And the other cyborgs know it.

The closest Human analogue would be the Spartans, enhanced very little in terms of cybernetics but heavily in terms of genetics. A bit more powerful physically, but still very much Human in their thoughts and actions. Though they are strictly a military unit and not a group of the populace, it's implied that down the road all Humans will evolve to their level.

Sure, baseline Humanity isn't physically the strongest of the races in this universe. There are much stronger alien races here, like the Sangheili, but Baseline Humanity has the upper hand in terms of politics and interstellar relationships. Cyborgs may be above Humanity in some respects, but there are far more powerful things in this universe than them. And I will elaborate on that in my next story, whenever that will be.

I'm going to stop here to say that you probably have the wrong idea of what current AI research is (as does most of the world, for that matter). Current research is about making search services (i.e. Google and Netflix suggestions) that are relevant to individuals, as well as recognizing patterns in a set field. I don't mean to attack you or anything, but I've met plenty of people who seem to think that the study of AI is about making the next GLaDOS or something, versus looking at what Google does with its search engine and how to optimize it.

As for the topic at hand, I don't really see a trend pushing to the point that transhumanism should be discussed in any sci-fi story. Exist? Sure, much in the same way that energy-based weapons exist in sci-fi. But I wouldn't make it common without it being openly discussed (see Deus Ex: Human Revolution) or without a good reason for it to be fairly common (see the Fullmetal Alchemist manga or the Brotherhood anime, where plenty of characters state how they lost a limb in the war).

Oh, I have an excellent idea of the modern state of computer and AI research. True, we don't have any real electronic sentience now (though, perhaps surprisingly, there have been a few quite good imitations), but we will get there eventually. You couldn't have built the Internet with 1950s computers, either, but just look at what we have now. Physically, there is absolutely no reason why an artificially sapient being could not exist.

As for science fiction stories set in The Future: I just don't buy it that in a story set in the latter half of the 21st century (not to mention later), there won't be an increasing number of technologically/genetically augmented people with capabilities superior to those of "normal" humans walking around.

If a writer decides to ignore current technological and sociological trends - or even the laws of physics (for the Rule of Cool) - and then claim that his or her fiction is a plausible, possible representation of our future, I'd have to claim the writer is lying.

It depends on what you mean by transhumanism. Personally, I thought transhumanism was a particular movement or worldview related to humanism where humans are able to transcend natural limitations through the use of technology. Like basic humanism, the view here is essentially positive. Science fiction as a whole has often addressed the issue of humans using technology to augment or transcend, but it has not always viewed these possibilities in a positive light. Therefore, while science fiction might potentially be transhumanistic or grapple with the issue of transhumanism, it is not necessarily transhumanistic.

The other thing is that a lot of science fiction is defined more by its use of setting and doesn't always seriously grapple with the issues mentioned, or its primary goals are oriented toward other issues. Post-apocalyptic SF and certain dystopias that take place in societies bereft of advanced technologies are also not going to grapple with these particular issues at all.

That isn't how people involved with the movement define it: [link] It's defined by a particular position on what human enhancement through technology means to the species, and actually, yeah, that does relate to humanism.

First, a Wikipedia article is hardly the most reliable source of information you can find, and even that article presents multiple views on the matter, if you just read more than the initial paragraphs.

Second, I consider myself a transhumanist, so I guess I am "involved" with the IDEA (not a "movement"), yet I do not share all ideas and opinions with such visible figures as Nick Bostrom and Ray Kurzweil.

Wikipedia is good enough as a general resource. I have read about transhumanism in the past from other sources, so that is hardly the only basis on which I'm staking my claim, and I am aware there are varying views on transhumanism. No worldview, movement, whatever is legion. Even so, for the sake of defining our terms, we have to make a few generalizations.

And when I said positive, perhaps I should have said "progressive." Transhumanism, like humanism, believes that the progress of humanity is possible through our own effort. At least, this is what I've always been given to believe. How do *you* define it? Just augmentation of human capacity through technology?

OK, but my initial point here is that not all SF has treated "progress" in a positive light.

Most of Philip K. Dick's work, for example, while dealing with the kind of issues you're referring to, sees them as ultimately damaging our humanity. He wasn't necessarily versed in Baudrillard and other post-structuralist French theorists, but his work can be said to address how technology creates a simulacrum, supplanting reality and divorcing us from experience of the real.

Bradbury also wrote a number of stories in which technology created to improve and augment life contributes to the destruction of human knowledge and the dulling of emotion.

Now, I wouldn't go so far as to claim this kind of stuff is anti-humanist or anti-transhumanist, but claiming it is transhumanistic just muddies the definition of transhumanism. It might be more proper to say that such work contends with the possibility of transhumanism and serves a critical function.

Words of wisdom, there. "Progress" - being the abstract concept it is - is essentially neutral. It can have both positive and negative consequences. In my opinion, the exact point of science fiction is to explore these possibilities.

Besides, progress - in good and in bad - also seems pretty much inevitable. "We do what we must because we can..."

Unless we are talking about a work of fiction set in the very near future (a decade or two at the most), can the world described in the novel feel plausible and believable without any transhumanist themes?

If you read an older scifi novel set in the 2010s, you will find that it has probably not aged well, as it's very rare to find references to such common modern-day technologies as the Internet or mobile phones.

Yep, as I've said, I'm not really sure what transhumanism means. But if body modification counts as transhumanism, then I guess I've got it, and so does any modern-day story involving Lady Gaga horns and things like that.