Artificial & Machine Intelligence: Future Fact, or Fantasy?

While a warning about the application of AI and intelligent machines (IMs) in military applications might be appropriate, what if they're just a natural branch on the tree of natural evolution?

An eminent group of scientists -- Stephen Hawking, Stuart Russell (Berkeley), and Max Tegmark (MIT) -- perhaps stimulated by the film Transcendence, and possibly even the recent EE Times debate “Robot Apocalypse” led by Max Maxfield, has issued what might be considered a warning about the possible danger of robots and artificial intelligence (AI). Such a warning about the application of AI and its derivative intelligent machines (IMs), especially in the area of military application, might be appropriate. But what if IMs are really just a new branch on the tree of evolution that has led us from the original Protists to where we are today (see Figure 1 below)?

Figure 1. Are artificial and machine intelligence the next step in evolution for humans?
(Source: Ron Neale)

The danger may not be some sort of catastrophic accident that ends human existence, but rather the uncontrollable forces of evolution producing the same outcome. Fear not, because in my view, for IMs to come into existence requires a unique evolutionary key. In Figure 1, I have introduced the concept of Synergistic Evolution (SE) to describe that key, and it is that aspect of evolution that suggests why it might never occur.

Synergistic Evolution (SE) requires a species to be aided in its evolutionary process by another species. This is not the same as serving as a foodstuff, where an earlier species acts as the food or fuel that allows those higher up the chain to exist and evolve. Nor is it the case of species like dogs or horses that exist at the same time, on a different branch, and allow another species to obtain food more easily and so exist and evolve.

The nearest equivalent example of SE might be a species variation such as selective breeding (unnatural selection), where human intervention is used to provide a characteristic, such as additional meat or milk in cattle, or desirable traits in hunting animals, dogs, or horses.

In any flight of fancy, I think the three options illustrated in the next chart, from left to right, must be considered as possibilities: first, the evolution of some very clever tools, weapons, and body parts that become an integral part of the human species tree; second, as originally drawn in Figure 1, a new branch on the tree of evolution; or third, an extension of the human branch.

I have not attempted to provide a time scale for the vertical part of Figure 2, although I was very tempted to suggest that the horizontal scale from left to right might be considered as possibly a log scale of bovine excrement.

To be or not to be
As it will be the products and efforts of the electronics industry and its people that make possible this next step on the tree of evolution, if there is danger ahead, will they let it happen, or will they even be able to control it? Or will the artificial intelligence reach a level where it will understand the nature of human emotions and manipulate something like greed or desire to create an environment leading to the required IMs? Manipulation now plays a key role in politics and life, and it results from a misuse of some of the products of the electronics industry.

What will an IM species look like? Will it have a human-like form? Evolution has provided us humans with a pretty good engine, which consumes a variety of readily available food and oxygen. If the IMs copy that, then some of their parts might have human characteristics.

I think the important thing to keep in mind is that, until now, machines have been very limited in what they could do. Autonomous machines, however, are something we must address, because they will become a reality in the near future; we already have the technology to create autonomous attack drones (see http://news.sky.com/story/1259885/ban-killer-robots-before-they-even-exist).

What is clear is that ultimately machines will design machines. If computers can be designed to render themselves obsolete by designing their successors, the result will be recursive self-improvement: the computer will be able to make adjustments to its own capabilities without human intervention, producing ongoing improvements. In effect, each improvement could be more significant than the last, leading to a rate of evolution faster than anything possible in nature. It is this theory -- that each step will yield exponentially more improvement than the previous one -- that is the basis of the theory of the singularity.
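The compounding improvement described above can be sketched as a toy model. This is pure illustration: the growth rate, starting capability, and target are invented numbers, not claims about any real system.

```python
# Toy model of recursive self-improvement: each design generation improves
# capability by a rate proportional to current capability (compound growth).
# All numbers here are arbitrary illustrations.

def generations_to_reach(target, capability=1.0, gain=0.10):
    """Count design generations until `capability`, growing by `gain`
    per generation, reaches `target`."""
    n = 0
    while capability < target:
        capability *= 1.0 + gain
        n += 1
    return n

# Even a modest 10% gain per generation compounds quickly:
print(generations_to_reach(1000.0))  # 73 generations for a 1000x improvement
```

The point of the sketch is only that constant *proportional* gains produce exponential growth; whether machine design would actually sustain such gains is exactly what the singularity debate is about.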

Chrisw270 & Poppycock: My article was intended to raise the question rather than indicate our future. I think the key to the possibility of it happening in the extreme is the need for what I have called Synergistic Evolution. If you can find an example in natural evolution where one species has aided the evolution of another, then it might be possible for it to happen again. As you will note from my Fig 2, I suggested the use of the variable Bovine Excrement on a log scale from left to right to indicate the probability of particular events occurring. I hope that conveyed my sentiment that sophisticated tools, body parts, and weapons are the most likely near-term outcome.

You claim that machines only do what we want them to do. I think you must accept that it is possible, given an upset in hardware and complex software, for equipment not to do what is intended. Let me give you an example from my background in rad hardening. I will use the much-simplified example of launch-and-leave strategic weapons, where the target information is held in memory. Assume the memory is not radiation hard; then incident radiation from whatever source could change the memory and change the target. You can extend the example to drones with people-recognition equipment if you want. Yes, I know about triple redundancy and error detection and correction (although of late, in dealing with claims about the capability of FEC and memory, some personal doubts have crept in).
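The bit-flip scenario can be sketched in a few lines. The 16-bit "target code" and the bare parity check below are invented for illustration; real systems use stronger codes such as SECDED Hamming, which the same sketch motivates.

```python
# Sketch of a single-event upset: a radiation-induced bit flip silently
# corrupts a stored value unless some check detects it on readback.
# The target word and parity scheme here are invented illustrations.

def parity(word):
    """Even-parity bit over a binary word."""
    return bin(word).count("1") % 2

target = 0b0001001000110100          # stored target code
stored_parity = parity(target)       # recorded when the value was written

upset = target ^ (1 << 9)            # incident radiation flips bit 9

# Without any check, the corrupted value would be used as-is.
assert upset != target

# A single parity bit catches any single-bit flip -- but cannot correct it,
# and misses double flips, which is why SECDED-class codes exist.
assert parity(upset) != stored_parity
```

The design point: parity only *detects*; to keep a weapon or drone operating through an upset you need correction (Hamming/SECDED) or redundancy, and even those have limits -- hence the doubts mentioned above.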

I use the radiation-effects example because, along with other environmental effects, radiation may have played a part in modifying the human genetic structure to produce small species variants that had a better chance of surviving and helped our species on its way.

Living things self-replicate with variation, and this over time gives rise to evolution. Machines, on the other hand, are a result of cooperative assembly -- it takes many machines to make a machine. How can the whole system of machines evolve together rather than selfishly compete? Machines only exist because humans decide what machines to make and set everything up.

Also, why would a machine want to make more like itself? We are wired that way because we are the result of millions of years of evolution that favoured creatures that were good at reproducing. But machines don't have to be wired any particular way at all. Every time you imagine a rogue machine that wants to take over the world, just imagine another machine that wants to stop it. Machines can want whatever we tell them to want.

I had focused on AI in graduate school, but for the last 25+ years I've been doing processor design. Admittedly, I'm impressed by our ability to keep doubling the number of devices in the same area about every two years, but I just think it's total poppycock (you young engineers go look that up in a dictionary, um, or google it :-) to say we're going to have some kind of sentient AI that can best us.

Every time I see some press release out of the likes of "The Singularity" conference or Ray Kurzweil, etc., my eyeballs roll back into my head.

My first thought is: oh boy, they must need additional funding or something from DARPA, so here's the hype machine going again!

To me it's simple: look at the human brain. It has around 100 billion neurons, and most of those neurons have thousands of connections to neighboring neurons.

What's the best we have in a computer chip today -- 2.5 billion transistors? Most of those are grouped six at a time to form a bit cell. In logic gates, the typical fanout (number of connections) is around 2 or 3, not the thousands of connections an analog neuron has.
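The back-of-envelope comparison spans several orders of magnitude; the arithmetic below simply restates the figures quoted in the text (with a deliberately low "thousands" taken as 1,000 connections per neuron).

```python
# Rough neuron-vs-transistor comparison using the figures quoted above.

neurons = 100e9                  # ~100 billion neurons in a human brain
synapses_per_neuron = 1_000      # "thousands" of connections (low estimate)
synapses = neurons * synapses_per_neuron

transistors = 2.5e9              # a large chip of the era
bitcells = transistors / 6       # six transistors per SRAM bit cell

print(f"synapses:  {synapses:.1e}")    # 1.0e+14
print(f"bit cells: {bitcells:.1e}")    # 4.2e+08
print(f"ratio:     {synapses / bitcells:.0f}x")
```

Even counting every bit cell as a "connection," the brain comes out five to six orders of magnitude ahead on this crude measure, which is the commenter's point.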

While I oversimplify, I believe the comparison is fair in showing how very, very far we are from needing to worry about some cognizant machine tweaking with us.

Don't get me wrong, I hope we one day get to that point, I for one have a positive outlook on our new robot overlords keeping us in check!

In 1984, Scientific American published an article on a game called "core wars." The premise was that two or more programs, each of which was allowed to execute one instruction per turn, would try to eliminate all of the competitor programs. The game was won when only one program remained functional (or was the last program with instructions that could be executed). These programs ran in a "sandbox" allocated within the execution space, using an interpreted programming language. One of the first strategies developed was the use of replicating code.
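The replicating-code strategy can be sketched with the game's classic single-instruction "imp," which copies itself one cell forward each turn. The arena below is an invented toy simplification, not actual Redcode semantics.

```python
# Toy Core War-style arena: a circular memory of cells, one instruction
# executed per turn. The "imp" is one instruction, MOV 0, 1 -- copy the
# instruction at offset 0 (itself) to offset 1 (the next cell) -- so it
# marches through memory, overwriting whatever it meets.

CORE_SIZE = 8
core = [None] * CORE_SIZE    # None marks an empty cell

IMP = "MOV 0, 1"
core[0] = IMP
pc = 0                       # the imp process's program counter

for turn in range(5):        # one instruction per turn, as in the game
    core[(pc + 1) % CORE_SIZE] = core[pc]   # execute MOV 0, 1
    pc = (pc + 1) % CORE_SIZE               # advance into the fresh copy

print(core)  # the imp has overwritten cells 0 through 5
```

A single self-copying instruction is enough to steadily claim the whole core, which is why replication appeared so early as a winning strategy -- and why the next paragraph's jump to viruses and worms is such a short one.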

It was not long after this that the first virus and worm programs appeared. While written by humans, these pieces of code were designed to replicate themselves and spread, similar to early life. We have seen these evolve from purely destructive functions to stealthy infections which are designed to compromise data and security without user awareness. Much of today's malware collects information and passes it along to a server using the internet.

Programming itself has evolved a long way from the rudimentary single-task functions of early computing. We now have distributed programs which harness the power of many computers connected through the internet to accomplish truly herculean tasks. The programs which form the basis of our interface to computers now have hundreds of thousands to millions of lines of code, and many of the applications we use are the same.

All of this mimics the evolution of life, condensed from billions of years to only a few decades. Life evolved in large measure by the errors encoded into the programming of life. Will real computer awareness result from a similar mechanism? For now at least, we still control the "food" and replication means for machines.

The Terminator series has forever seared into us the idea of AIs turning into SkyNet, the military AI that decided people were a threat to its existence and had to be eliminated.

The Two Faces of Tomorrow is a story by James P. Hogan that brings up a similar problem, but basically one of unintended consequences. The story starts with surveyors on the moon asking the computer to estimate how long it would take to clear a mountain range for construction of a new linear-acceleration catapult. The computer answers, "15 minutes." Laughing, the surveyors tell the computer to execute the plan and are almost killed, because the computer has directed the existing catapult to start dumping its loads onto the mountain range to blast it away. The rest of the book is devoted to trying to teach the computers the consequences of their actions.

Fictional stories, but we may see instances where AIs reach unexpected solutions to the problems they are presented with -- the same way your child may produce a solution different from what you expect.

It's not going to be as easy as installing Asimov's Three Laws of Robotics. Besides, he spent most of his stories illustrating how inadequate they were. Jack Williamson's With Folded Hands demonstrated that those rules taken to the extreme were not good either.

I don't want to sound down on AIs -- I really like the idea -- but we're fooling ourselves if we think they are going to turn out to be exactly what we expect them to be.