After years of trying, it looks like a chatbot has finally passed the Turing Test. Eugene Goostman, a computer program posing as a 13-year-old Ukrainian boy, managed to convince 33% of judges that he was a human after having a series of brief conversations with them. (Try the program yourself here.)

Most people misunderstand the Turing test, though. When Alan Turing wrote his famous paper on computing machinery and intelligence, the idea that machines could think in any way was totally alien to most people. Thinking – and hence intelligence – could only occur in human minds.

Turing’s point was that we do not need to think about what is inside a system to judge whether it behaves intelligently. In his paper he explores how broadly a clever interlocutor can test the mind on the other side of a conversation by talking about anything from maths to chess, politics to puns, Shakespeare’s poetry or childhood memories. In order to reliably imitate a human, the machine needs to be flexible and knowledgeable: for all practical purposes, intelligent.

The problem is that many people see the test as a measurement of a machine’s ability to think. They miss that Turing was treating the test as a thought experiment: actually doing it might not reveal very useful information, while philosophizing about it does tell us interesting things about intelligence and the way we see machines.

Fooling the Humans

Some practical test results have given us food for thought. Turing seems to have overestimated how good an intelligent judge would be at telling humans and machines apart.

Joseph Weizenbaum’s 1964 program ELIZA was a parody of a psychotherapist, bouncing responses back at the person it was talking to, interspersed with canned sentences like “I see. Please go on.” Weizenbaum was disturbed by how many people were willing to divulge personal feelings to what was little more than an echo chamber, even when fully aware that the program had no understanding or emotions. This Eliza effect, where we infer understanding and mental qualities from mere strings of symbols, is both a bane and a boon to artificial intelligence.
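The core of Weizenbaum's trick can be sketched in a few lines: pattern-match the user's input, reflect first-person words back as second-person ones, and fall back on a stock prompt when nothing matches. The patterns and responses below are invented for illustration, not Weizenbaum's originals:

```python
import random
import re

# Pronoun reflections so "I feel X about you" comes back as "you feel X about I/you".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

# A few invented pattern -> response templates in the spirit of ELIZA.
RULES = [
    (re.compile(r"\bi feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
]

# Canned fallbacks, like Weizenbaum's "I see. Please go on."
FALLBACKS = ["I see. Please go on.", "Can you elaborate on that?"]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a matched fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text: str) -> str:
    """Return a reflected template response, or a canned fallback."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(FALLBACKS)
```

There is no model of the conversation anywhere in this code, which is exactly the point: the appearance of a listener emerges entirely from string substitution.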

“Eugene Goostman” clearly exploits the Eliza effect by pretending to be a Ukrainian 13-year-old. Like most successful chatbots, Eugene manages the discussion so as to avoid certain topics: if asked about a historical event or a place he has no information about, he diverts the conversation onto something else.

A real 13-year-old could probably solve simple logic problems, while Eugene cannot, so if asked to solve one, the program refuses to participate. But since Eugene is posing as a teenager, it is perfectly plausible that he, too, might refuse if he were a real human, in a show of the recalcitrance typical of his age.
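The evasion strategy described above can be sketched as a simple filter in front of a chatbot: if the judge's question touches a topic the program cannot handle, it deflects in character instead of answering. The topic list and deflections here are invented for the example (Eugene's actual rules are not public):

```python
from typing import Optional

# Topics this toy bot has no real knowledge of, mapped to in-character dodges.
DEFLECTIONS = {
    "history": "Boring! I don't like school stuff. Do you have pets?",
    "logic": "Puzzles are for nerds. What music do you listen to?",
    "arithmetic": "Ugh, maths. Let's talk about something fun instead.",
}

def classify(question: str) -> Optional[str]:
    """Very crude keyword-based topic detection; a real bot would do more."""
    lowered = question.lower()
    for topic in DEFLECTIONS:
        if topic in lowered:
            return topic
    return None

def answer(question: str) -> str:
    """Deflect in character on risky topics; otherwise give a safe filler reply."""
    topic = classify(question)
    if topic is not None:
        return DEFLECTIONS[topic]
    return "Cool! Tell me more."
```

Note that the deflections do double duty: they dodge the question and steer the judge onto ground where canned small talk suffices.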

Falling for Machines

The real art here – and it is well worth recognizing that it takes skill to develop systems like this – lies in constructing the right kind of social interactions and responses that manipulate the judge into thinking and acting in certain ways. True intelligence could be helpful, but social skill is probably far more powerful. Eugene doesn’t need to know everything because a teenager wouldn’t know everything and can behave in a certain way without arousing suspicion. He’d probably have had a harder time convincing the judges if he had said he was a 50-year-old university professor.

Why do we fall for it so easily? It might simply be that we have evolved with an inbuilt folk psychology that makes us believe that agents think, are conscious, make moral decisions and have free will. Philosophers will happily argue that these things do not necessarily imply each other, but experiments show that people tend to think that if something is conscious it will be morally responsible (even if it is a deterministic robot).

It is hard to conceive of a human-like agent without consciousness but with moral agency, so we tend to ascribe agency and free will to anything that looks conscious. It might just be the presence of eyes, or an ability to talk back, or any other tricks of human-likeness.

So Eugene’s success in the Turing test may tell us more about how weak we humans are when it comes to detecting intelligence and agency in conversation than about how smart our machines are.

We spend much of our time behaving like chatbots anyway. We react habitually to our environment, and much of our conversation consists of canned responses or reflections of what the previous speaker said. The total number of genuinely intelligent decisions we make over a day is probably rather small. That is not necessarily bad: a smart being will minimize effort, because constantly thinking up entirely new solutions to problems is wasteful.

We should expect descendants of Eugene Goostman to show up in our social environment more and more. The real question is not whether they can think, but what other systems they are connected to. If we play the technological game well, we might create vast systems of software and people that are smarter than their components. Some doubt whether they could actually think, but if they act smart and we benefit from them, do we really care?
