This is essentially a follow-on to yesterday’s post about increasing intelligence (you might want to go back and read the comment by Michael A.). The main idea behind that essay was that intelligence consists of a varied lot of skills, which we’re building one at a time (or at least in separate efforts). When we build a formal, mechanical version of a given skill, we don’t save it to be part of a single huge AI system as if we were building the Forbin Project, but deploy it directly in the form of a software app or machine controller or accounting practice or whatever is appropriate. It gets hooked into the existing huge network of information-processing and feedback/control functions that forms civilization.

A century ago, that network consisted almost entirely of human brains and ink-on-paper records and messages. The telephone and telegraph were decades old, and just becoming integrated into society. Today, not only are most messages and memories handled by machine, but there are hugely many more of them than there were a century ago.

But the important point is that at no time was any single person (or machine) doing a significant part of the total cognitive work. Even Edison’s biggest invention was the research lab — where he put hundreds of people to work inventing.

A single human is not really an effective thinking machine. A feral child who manages to survive in the absence of language-speaking elders winds up not being able to learn language at all. Of all the ideas, concepts, thoughts, and so forth we use individually, only a tiny fraction are original. Almost everything we know, everything we are, is absorbed from the culture around us.

In a very strong sense, our minds are not our physical brains, but the cultural software running on them. Although there is an element of the personal in memories and aptitudes, by and large the same software would run as well on someone else’s physical brain. Or, when we figure out a bit more about it, on a computer.

Building an AI is hard (mental) work. We have been at it for the better part of a century (including cybernetics and the development of computers themselves). I’ve personally worked on it for 35 years. We’re getting tantalizingly close to being able to build a machine which can do the kind of thinking that an individual human can. But when we do build such a machine, it will account for one more human’s effort in the acceleration of the progress of AI. Only when we have as many such machines as we have AI researchers will they significantly increase the rate of progress.

Interestingly enough, most of the advantages a machine mind might have over a human as an AI researcher would be present to an even greater degree in a program that wasn’t structured to emulate a human mind at all — things like not being distracted by personal or ego concerns, and in particular not having to decide what it wanted to do next. It would be just the problem-solving parts, and it would get all of its motivations from outside. It would be plugged into the network of civilization directly, primarily at the call of human researchers.

In the meantime, other parts of the network currently handled by humans will continue to be transferred to machines. Take driving. There’s no need to make a chauffeur robot with a physical body wearing a uniform and a cap: just build it into the car (and take out the controls, for a more comfortable seat). Does it need an ego, a social life, a wife and kids and cat and dog and mortgage? No. What it does need is telepathic contact with the other chauffeur robots in its area, courtesy of wireless internet or whatever, and a semi-autonomous, semi-collective algorithm that steers all the cars, each knowing what all the other cars are going to do.
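The semi-autonomous, semi-collective idea can be made concrete with a toy simulation: each car plans its own route (semi-autonomous) but commits to it through a shared channel, reserving the road segments it intends to occupy next and yielding when another car has already claimed one (semi-collective). Everything here — the `Channel` and `Car` classes, the segment names — is a hypothetical illustration, not any real vehicle-coordination protocol.

```python
class Channel:
    """Shared 'telepathic' medium: maps road segment -> id of the car holding it."""
    def __init__(self):
        self.reservations = {}

    def try_reserve(self, car_id, segments):
        # Reject the whole request if any segment is held by a different car.
        if any(self.reservations.get(s, car_id) != car_id for s in segments):
            return False
        for s in segments:
            self.reservations[s] = car_id
        return True

    def release(self, car_id):
        # Drop all segments held by this car.
        self.reservations = {s: c for s, c in self.reservations.items()
                             if c != car_id}


class Car:
    def __init__(self, car_id, route):
        self.id = car_id
        self.route = route   # list of road segments to traverse, in order
        self.pos = 0         # index of the next segment to enter

    def step(self, channel, horizon=2):
        # Each tick: re-reserve the next few segments of our own plan.
        if self.pos >= len(self.route):
            channel.release(self.id)   # journey done: free everything
            return True
        ahead = self.route[self.pos:self.pos + horizon]
        channel.release(self.id)
        if channel.try_reserve(self.id, ahead):
            self.pos += 1              # reservation granted: advance one segment
            return True
        return False                   # conflict: wait this tick


channel = Channel()
a = Car("A", ["s1", "s2", "s3"])
b = Car("B", ["s4", "s2", "s5"])       # routes cross at segment s2

for _ in range(4):
    a.step(channel)
    b.step(channel)
# Car B waits while A holds s2, then proceeds once A has passed through.
```

A real system would need tie-breaking and conflict-ordering rules to avoid deadlock; the sketch only makes the basic idea — cars steering by knowing each other’s committed plans — concrete.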

Likewise most of the other AIs we build. They’ll be built for a purpose, and that purpose will almost always be better served by plugging them into the overall information-providing and goal-setting fabric of our increasingly interconnected civilization. And by the time there are enough AIs that their total thinking and inventive capacity rivals that of our civilization (e.g. enough to accelerate AI research and improve themselves), they’ll be our civilization.

5 Responses to “The Software of Civilization”

Building a generic AI “engine” requires the opposite of engineering. In engineering you start by gathering requirements, choosing technologies and available off-the-shelf components, and estimating the funding/resources you’ll need to get it built. If your estimates are right, and too high, the project doesn’t go ahead. If your estimates are low, the project may go ahead but fail. With an AI engine, the goal isn’t to make a system, it’s to make a component. Trying to make a component without clearly defined requirements (and that’s being generous: try defining “intelligent”) is just tinkering. Tinkering only works on the small scale, and AI is a huge-scale task.

Sure, I believe that everything we know is picked up from the world around us. Everything we are is partly influenced by the environment (particularly instruction and example from our family) and partly by hard-wired behavioural tendencies (human nature). The latest brain science is discovering behavioural differences between the sexes in newborns. Physical structural differences in brain tissue also exist between males and females. Someone who believes in ‘the mind’ or ‘the soul’ has no problem with this, but materialists are painted into a corner. How can physical organs which think function identically when they differ in structure?

Trouble is, we have a whole lot of people who cannot drive a nail or fix a window. Whole sections of cities are rubble, roamed by bands of armed thugs who cannot, or will not, speak standard English. Even armed police don’t go into entire neighborhoods alone. So, some of us get smarter; many others just get dangerous.