
Lately there has been a good deal of consternation surrounding the movie “Do You Trust This Computer?”, doubtless exacerbated by the endorsement of Elon Musk, himself an industrial-scale driver of artificial intelligence. Increased awareness of the nature and potential threat of AI is a good thing; nonetheless, I do not believe AI, or technology in general, will be the end of us. All of the doomsday scenarios have serious logical flaws, which I will attempt to address.

I used to be a technophobe, even a Luddite. That fact seems strange to me now as I sit at my laptop writing an article in a web application while streaming Miles Davis through my TV, but it is true. Now I no longer fear technology, although I am very aware of the dangers inherent in the acquisition of power — especially a form of power that has the potential to become autonomous. The scariest thing I ever read was Bill Joy’s prophetic essay “Why the Future Doesn’t Need Us” (closely followed by The Satan Bug by Alistair MacLean and the first part of The Stand by Stephen King; all three press the same horror-buttons in my brain). It is still a frightening proposition, although I believe there are good reasons to doubt the inevitability of its worst-case scenarios. If you have the patience to bear with me, I will trace the development of my own ideas about technology as a threat. I hope they are close enough to most people’s to be of some use.

My childhood reading was dominated by the Bible, Dickens, and Tolkien, and their influence on my thinking is obvious. While most people in America and elsewhere may not have read the same books I did, those sources are in varying degrees both causes and symptoms of a large portion of modern thought.

One of the main threads in the Bible (arguably the main thread) is the Fall, Redemption, and Restoration of humanity as originally conceived by God. The process is portrayed repeatedly on a smaller scale throughout the Old Testament, both at the individual and the societal level, culminating in the rebuilding of Jerusalem post-exile by Nehemiah. The sequence of events is constant: an original state of innocence under the Divine Plan; the deviation from the Divine Plan (often symbolized by technology, e.g. the Tower of Babel, the Golden Calf); ensuing destruction (whether punitive or as a natural consequence); and finally a return to the Divine Plan. The New Testament is, of course, a melange of Jewish reform and Platonic philosophy, filtered through early state-church censorship with a healthy dash of eschatology thrown in; the Apocalypse (or Revelation) of St. John brings the whole corrupted experiment of Creation to an end, to be replaced by a new Heaven and a new Earth.

Dickens wrote at the height of the Industrial Revolution, when the dehumanizing and blighting effects of technological progress were in full bloom. Factory and workhouse provide the backdrop for the most heart-wrenching scenes of abuse and victimization in his work. Tolkien, a rough century later, echoed Dickens’ loathing for mechanization and longing for pastoral English idyll to the point that his writing juxtaposes Nature and Machine in Manichean opposition.

If I have dwelt on these literary examples, it is because they illustrate a strong bias against technology in the common consciousness, expressed and propagated in literature. More recent examples of this bias include the ubiquitous Evil Robot trope in TV and film. Whatever you think of the Bible, Dickens, and Tolkien, they are representative of mainstream ideas about technology in a context of good and evil. Long before AI became a real possibility, the idea of nonhuman intelligence as inherently evil set the stage for later treatment of AI. In order to think clearly about an issue, it is important to recognize existing bias.

So much for the reasons we are predisposed to suspect artificial intelligence; now let us deal with the commonly stated reasons for fear.

1. AI will kill us because military robots will become self-aware and decide to rise up against their creators.

This is, to me, the weakest argument against AI. It posits mutually contradictory premises: that “strong” or self-aware AI will both rebel against its programming and continue to operate as programmed. Human soldiers have the same autonomy as a hypothetical self-aware machine. They can — and do — choose to obey orders, or refuse to obey (either going AWOL or suffering arrest). They occasionally, though very rarely, turn on their fellow soldiers; incidents of soldiers (or ex-soldiers) seeking out officers or government officials for murder are extremely infrequent (the glaring exception being Lee Harvey Oswald). If a military robot were to become self-aware and question its programming, it is no more likely to dedicate itself to indiscriminate extermination of all humans than to decide to leave its post to explore and discover, or to recede into an existential funk and ponder the meaning of its newfound existence. The “killer robot” that runs amok destroying everything in its path looks more like a software glitch than an emergent intelligence.

2. AI will kill us because it perceives humans as an existential threat.

This argument is better but still only one step removed from the previous one. It takes violent destruction as a given, and destruction is by nature chaotic and unpredictable. Any system sufficiently intelligent to perceive the potential for destruction and sufficiently aware to act in self-preservation would be more likely to disable weapons systems than to begin using them. A true AI capable of “immortality” — in this context, instantaneous replication to any connected node in the world wide web — is not threatened, cannot be threatened, by anything less than complete shutdown of the power and communications systems upon which humanity depends. Such an AI would immediately recognize that humans are not going to revert to the Iron Age in order to kill an artificial intelligence unless forced into a death match. The AI has nothing to gain by starting a death match with an opponent who does not want to fight, and for whom victory is Pyrrhic at best.

3. AI is likely to become hostile.

This is distinct from the previous argument in that it does not require the AI to be threatened, only hostile. On the face of it, it seems a good enough premise; after all, rivalry, conflict, and conquest are nearly always the outcome of contact between human civilizations (and between individuals only somewhat less often). But this argument depends on the premise that a self-aware machine or artificial system shares human motivations for rivalry, which is false. Humans fight over food, space, money, status. All of the things for which we compete are legacies of our struggle for survival over the course of our biological evolution. Machines need energy and replacement parts. There is no reason for a machine to compete with a human, or for an AI to compete with a human society.

4. AI will develop into a tyrannical immortal dictator.

One of the more recent arguments is that corporate software, designed to maximize efficiency and profits, would enslave the human population in pursuit of its programming. This argument has the same flaw as the military robot argument: the fear is that the machine will become self-aware and beyond human control, yet it is self-contradictory that a sentient being should be beyond control and continue to obey orders. But the corporate-software version of the argument has an additional fallacy: capitalism (like any system) functions within a set of parameters. If it becomes too monopolistic, too oppressive, the supporting environment begins to suffer. At some point, it implodes. A corporate AI with the goal of maximizing profit would be more likely to instantiate a Scandinavian-model social democracy than a RoboCop dystopia, for the simple reason that massive numbers of prosperous consumers are more profitable than massive numbers of poor consumers, or slave workers. The only reason our current crop of corporate overlords fails to see this is that they are too dull and short-sighted to realize that they are stuck in a Feudal System mode of thinking.

5. AI will be monolithic and single in purpose.

This is, to me, the fatal flaw in the AI doomsday scenarios. These scenarios never posit a multitude of AI entities, except to imagine a horde of killer military robots all under the same control — which is definitely scary and a real potential threat, but has nothing to do with AI evolving beyond our control. The most likely situation is AI evolving along multiple lines simultaneously in different research labs across the world. These artificially intelligent entities are unlikely to achieve self-awareness at the same instant, and are as unlikely (or more so) to adopt the same attitudes, beliefs, goals and motivations as are a diverse group of humans. In fact, the multiplicity of AI entities may be our best guarantee against catastrophe: a society of self-aware artificial people will likely regulate itself towards self-preservation in ways analogous to a human society: destructive tendencies will be discouraged by members who prefer stability.

6. AI will escape control.

This is by no means a given. At least two ways of keeping AI from transcending and taking over our world occur to me immediately; doubtless more and better ways will be apparent to those with greater expertise than mine. First, we humans can incorporate technology, augmenting our natural abilities to the point of transcendence. In other words, we become AI before our software does; we stay one step (preferably several steps) ahead. Second, we create strong AI in a “black box” environment: a simulation (identical to the real world, human characters and all) in which the AI believes itself to be in an open universe that is actually a closed system, observable from without. We can run this simulation as many times as we want, observing what happens when the AI becomes self-aware; if it turns malignant, we can analyze the reasons and act to prevent these causes in the “real world”.

I put “real world” in quotes because there is no way we can know for certain that we are not such AIs in a simulation.

Thank you for reading this far. I hope it has been interesting, and perhaps comforting.

There is a general failure to understand the relationship between power and money. For most people, power and money are closely related; to the ignorant they are one and the same. This is due to a lack of experience with either, added to the inability to imagine using power for anything beyond acquiring goods and services.

Americans in particular suffer from this basic confusion of power with money, not because of any intrinsic deficiency but rather because of the nation’s fixation on capitalism. Unlike every other powerful country, the USA was founded at the beginning of the Industrial Revolution, in the same year as the publication of Adam Smith’s treatise on capitalism: An Inquiry into the Nature and Causes of the Wealth of Nations. The rise of the United States and the rise of industrialism are inseparable. When capitalism is the only economic system in your experience, it is difficult to imagine a society based on anything but currency. One consequence is the tendency to confuse money with freedom or power.

Money is nothing but a medium for exchange. It has no power beyond that of making an offer to exchange goods or services; it confers no freedom beyond that of choices between goods or services. Money can never buy real power.

Power is simply the ability to effect change. This ability exists in two basic polarities: creation and destruction. Ultimately, power derives from individual will to create or to destroy, combined with the ability to act upon that volition.

Money can be used to recruit goods and services for the purpose of amplifying the effect of such a decision: materials, machinery and labor to build a hospital; or weapons and soldiers to erase a human settlement from a portion of a map. Neither of these actions is an example of power. Both depend on the participation of manufacturers and service providers, who are free to refuse; both are exchanges of currency. The exercise of power occurs between the moment of decision to build or to obliterate, and the action that initiates a process of change.

Acquisition of wealth, privilege, social status – these have nothing to do with power. All are forms of participation in a system of exchange. Such systems can be constructed so as to multiply the effects of power, but they are not powerful of themselves. The essence of power is the act of destroying something that exists, or creating something new. It is an individual act and cannot be bought or sold. Understanding this will clarify many social, political and economic structures.

If you are a thinking person with a moral sense, you agree with the Enlightenment philosophers — including the founders of these United States of America — that all human beings are endowed with certain unalienable rights, among them life, liberty, and the pursuit of happiness. In other words, you believe in the essential equal dignity and rights of every human being. This means you are capable of reason.

You do realize, however, that not everyone is endowed with equal powers of body, mind and soul. You acknowledge that people differ in strength of body, mind, imagination, creative urge and character. Albert Einstein was better than most at mathematics; Leonardo da Vinci had an extraordinary gift for painting; Nikola Tesla could visualize engineering solutions others could not. Crazy Horse was better at military strategy than Custer. Elizabeth Tudor had exceptional talent for leadership. Shakespeare wrote a better play than anyone. You prefer to travel in an airplane designed by a qualified, reputable and credentialed engineer, not one designed and built by an amateur. This makes you an elitist.

There are, however, different kinds of elitism. Healthy elitism wants the most skilled heart surgeons to perform life-saving operations; the most incorruptible and magnanimous leaders to represent their communities in government; the brightest minds and sharpest wits to argue the merits and detriments of ideas; the most methodical and rigorous scientists to describe our world; the kindest and noblest souls to educate our children. This kind of elitism values – and, ideally, rewards – extraordinary individuals for their contributions to the general good. Society offers these elites power in trust, as a recognition of merit.

Unhealthy elitism is not concerned with the general good, but with egocentrism. Here power is not given in trust as a recognition of merit; rather, merit is assumed to be derived from the possession of power – often the power to destroy. This is the realm of the bully, who manipulates others with the threat of violence and calls it leadership; of the miser, who hoards wealth in an ever-growing yet sterile trove and calls it stewardship; of the glutton, whose pleasure is not in enjoying good things, but in knowing that he has more of them than his neighbor; worst of all, the truly evil such as are found in the boardrooms of American pharmaceutical, insurance and health service corporations, whose interest in medicine is not to heal, but to extract a maximum of profit – and if healing comes as a side effect, so be it. These elites take from society all that is in their power to take, and seek to destroy any opposition. Their puppets, meanwhile, mock the great-hearted, the intelligent and the good.

Elitism in power is a fact, not an option. The unhealthy kind of elite seeks power for its own sake, and generally finds it unless opposed with courage and vigor. The healthy kind will serve the common good with power or without. It is in the interest of us all to entrust power to the givers and deny it to the takers.