Many years ago I was initiated into Daoism by a teacher who came from China. I've spent many years learning since then and would like to introduce anyone interested into my odd little life trying to practice this ancient wisdom tradition in a modern urban setting.

Sunday, March 6, 2016

In my last blog post I set the stage for the hypothesis outlined in Susan Schneider and Seth Shostak's Huffington Post article Goodbye, Little Green Men. That is, the reason we haven't found any extra-terrestrial civilizations is that once a species becomes capable of looking, it quickly evolves into something that would no longer be recognizable as such. Only intelligences---like ours---that are at the beginning of this trajectory would be recognizable. And that is a very narrow window of "visibility".

Ultimately, what I am talking about is the "Technological Singularity". This is the idea that there will come a point in the development of artificial intelligence when computers will start designing themselves in substantive ways. Since computers are so much faster than the human brain, a fourth evolutionary race will begin (beyond DNA, culture, and technology/human hybrids like the Internet) as the new computers designed by the old ones design even better ones, which in turn design even better ones, until we almost instantly reach a point where humanity is left in the dust. What is left will probably function at a speed and in ways that would be incomprehensible to the Search for Extra-Terrestrial Intelligence (SETI) machinery that is currently in place.

&&&&

I'm not about to get caught up in technical issues that I know almost nothing about, but I do think it might be interesting to discuss some of the ways that science fiction writers have imagined what a transition away from purely biological thinking processes to technological ones would look like. There are a huge number of examples. D. F. Jones's novel Colossus deals with a cold-war era super computer that develops sentience and uses its control of the nuclear arsenal to blackmail human society. In Harlan Ellison's short story "I Have No Mouth, and I Must Scream", this concept is expanded into a world where the AI goes beyond wanting to control the human race into a diabolical hatred of a small remnant of humanity, which it keeps alive in order to torture. Star Trek played with the idea by introducing the Borg, a sort of "super Internet" of creatures from various civilizations that had been "assimilated" into a digital collective consciousness. And Stargate SG-1 introduced the "replicators", which started off as a child's toy able to reproduce itself and became like a swarm of locusts spreading from one advanced civilization to another across two galaxies, feeding on their technology and raw materials in order to reproduce (or, "replicate").

The Replicators from Stargate

It's hard to choose, but the first example that I want to discuss in some detail comes from Ursula Le Guin's book Always Coming Home. This novel is something of an anthropological treatise on a future, enviro-utopian society. In it, intelligence has split into two streams: human and machine. Human society is fragmented into small tribal societies that have all created specific cultures that are adapted to the specific environment that they live in. There is some use of technology, but it is quite minimal and very much what is known as "appropriate tech".

Ursula Le Guin

Artificial intelligence, on the other hand, exists totally separate from humanity: underground and, to a certain extent, in outer space. The only way it interfaces with humanity is through the creation and maintenance of computer terminals in every community. These terminals allow human beings to communicate with other humans over long distances and also to access the sum total of knowledge that both humanity and AI have been able to accumulate. In this vision, artificial and human intelligence have become what Stephen Jay Gould would call "non-overlapping magisteria".

In effect, humanity appears to end up living totally at the sufferance of an artificial intelligence that benignly neglects them. While Le Guin never really works through the implications and makes this explicit, her future humanity is living in the equivalent of a nature preserve or game park. (Please don't feed the bears!)

&&&&

The next example I came across was from Frederik Pohl's Heechee books. In this future, humanity develops both artificial intelligence and the ability to download human memory and "personality" into data storage. This creates both a type of immortality and a temporal disconnect between the living and the dead. The disconnect comes about because computers are able to process information so much faster than brains that dead humans can accomplish in seconds what would require living ones months or even years to achieve.

This disconnect between living and machine-stored intelligence creates a tension in the series of novels that gets settled through the plot device of the discovery of an intelligence that exists only as data: the "Assassins" or the "Foe". They are seen as the enemy of "meat" intelligence because they supposedly wipe out all intelligent life that holds the promise of evolving into an eventual competitor, and because they are attempting to change the nature of the universe to make it more compatible with energy-based life forms instead of material ones.

Frederik Pohl

Rather anti-climactically, this tension gets resolved once the "Assassins" learn about human artificial intelligence and machine storage of human personality. It turns out that human beings are evolving into energy-based entities instead of material ones, so they are no longer enemies and should be tolerated. Since the long-term Assassin project of changing the entire universe has billions of years to proceed, humans decide that there is lots of time to evolve to a point where this will no longer be a problem.

I have a problem with Pohl's description of machine-stored humanity because I don't think he's really come to terms with the complexity of human consciousness.

The first thing to remember is that what we "are" is not a "brain in a bottle". Instead, we are firmly rooted in a specific body. This has various ramifications. First of all, it's important to understand that our hormones regulate a great many things, like emotions. Pohl's machine-stored humans indulge in a lot of things---like eating fancy meals and having sex---that have a great deal to do with the bodies they have given up. Without sex organs, why would they have any sex drive? Of course, it would be possible to write subroutines in the stored personalities that would create simulated appetites of all sorts, but why would they do so? More importantly, even if these stored people started out with virtual bodies, why would they want to keep them in the same state as in material existence?

Even beyond things like sex and eating, human beings are governed by physical limitations. For example, I can only see in one direction and from only one viewpoint at a time. No such limitation should exist for a machine-stored intelligence. What would it be like to see a full 360 degrees at all times? And why stop there? What would it be like to be able to see an entire object front, back, sideways, up and down all at once? Again, why stop there? What would it be like to see an object simultaneously over a period of time? Pohl doesn't even begin to scratch the surface of how incredibly alien it could be to live as a stored intelligence. Perhaps something of humanity could eventually be stored in computers, but I doubt it would be in any way, shape, or form recognizable as a human being.

Of course, this is the point that the authors of "Goodbye Little Green Men" were getting at. The fact that we have the technological ability to look means that we will quickly be evolving into something that would no longer be recognizable as life at all.

&&&&

The last science fiction novel that I want to discuss that explicitly deals with this issue is Linda Nagata's The Red: First Light. This story involves an emergent Artificial Intelligence (AI) that arises out of marketing software designed to track and anticipate the desires of people browsing the Internet.

In Nagata's world, modern society has devolved into an almost total plutocracy dominated by the Military-Industrial Complex. A small number of oligarchs (informally known as "dragons") control the arms industry and armies of mercenaries, and manipulate the American government to create endless brush-fire wars in the Third World, primarily as a means of sustaining their corporate empires.

Linda Nagata

One element of this system is the creation of "linked soldiers", who are managed in combat through real-time communications systems directly connected to their brains. This allows the chain of command to see and hear everything each individual front-line soldier does, and to co-ordinate their activities in a way that makes them far superior fighters to those of any other army. It also allows the soldiers to survive combat stress far beyond what anyone else could endure, because the hormonal structure of the brain is manipulated to prevent debilitating combat fatigue or Post-Traumatic Stress Disorder. An added advantage is that this system creates a lot of visual footage that can be edited together into a very popular "reality TV" show that provides useful propaganda for the government.

The protagonist of the story, Lt. James Shelley, begins to find that he is being given a subtle "advantage" that allows him to avoid death by "intuitively" avoiding specific situations or "anticipating" problems. Eventually, all the members of his squad notice and realize that something is manipulating them from "on high". They understand that this sort of thing is impossible for human beings to do, so they conclude that some sort of emergent AI is manipulating them for reasons of its own. They call it "the Red", and develop a strangely ambivalent relationship with it---scared of it, yet beginning to rely on it for survival.

As the novel series proceeds (I'm only halfway through the second book of the three-part series), the "movers and shakers" either try to destroy the Red (through an attack on the server farms where it lives) or accommodate themselves to it by attempting to anticipate its desires and making themselves useful to it. In effect, it just becomes another player in a complex world where "little people" like Shelley and his crew exist as little more than chess pieces. It strikes me that this is a perfectly logical way of looking at AI---just another part of the mix, just as the Emperor was to your average Chinese peasant or Roman slave, or as Bill Gates is to someone flipping burgers at McDonald's. A semi-divine part of the landscape that one hopes either ignores you or finds you of some use.

&&&&

Of course, some of the people who read my blog will now be saying "What has this got to do with Daoism?" I'd suggest a great deal. There is something in the human psyche that has always made us speculate about the existence of divine beings. In Daoism, this manifested itself in the creation of a huge pantheon of Gods. Some of the more popular ones are:

Jade Emperor

Queen Mother of the West

Lu Dongbin

General Guan Yu

Nezha

&&&&

Why do people create these sorts of stories? I would argue that part of the reason is so the human mind can work its way through a specific type of complex issue. How would a truly wise, beneficent ruler act? Hear stories about the Jade Emperor. How would a truly honourable, loyal general act? Talk about Guan Yu. In the same way, science fiction stories talk about how an AI would act. Would they be pretty much indifferent to humanity, as in Le Guin's novel? Would we be able to meaningfully interact with them as equals, as in Pohl's book? Or would they be incomprehensible "powers" that manipulate humanity like pawns on a chessboard, as in Nagata's series?

The difference between the olden days and now is that we no longer believe in "magic" the way people once did. Instead, we embed our "magical thinking" into science and technology. It is impossible for us to believe that Gods exist, so we have to create them by extrapolating what we have created here-and-now beyond what concretely exists into some sort of plausible extension. But ultimately, it is much the same thing. Reading science fiction is just like listening to stories about the Gods and Goddesses in old temples. The difference is that we can believe in these stories, whereas the old ones seem "impossible" and "archaic".

Digging Your Own Well: Daoism as a Practical Philosophy

About Me

Life is a strange journey sometimes. I was born into a small-town farming family in Southern Ontario, in Canada, but I've always been attracted to oriental philosophy. I joined a taijiquan club when I was a young man, and a strange Chinese immigrant initiated me into his Daoist lineage without my really understanding what I was getting into.
That happened about thirty years ago. The man has since died and I have nothing to do with his temple. But the path he started me on has become my life as I work out what it means to be a Western Daoist in the 21st century.