Turing through the Looking Glass

I’m in the section about Turing. I have enormous respect for him and think the Turing Test was way ahead of its time. That said, I think it is flawed.

It was defined at a time when human intelligence was considered the pinnacle of intelligence. Therefore it tested whether the AI had reached that pinnacle. However, if the AI (or alien) is far smarter, and maybe far different, it might very well fail the Turing Test. I’m picturing an AI having to dumb down and/or play-act its answers just to pass the Turing Test, similar to the Cynthia Clay example in your book.

I wonder if anyone has come up with a Turing type test that maybe focuses on intelligence, consciousness, compassion (?) and ability to learn and not on being human like?

This is a question that is far more critical than may appear at first blush. There are several reasons why an AI might dumb down its answers, chief among them being self-preservation. I cite Martine Rothblatt in Crisis as pointing out that beings lacking human rights tend to get slaughtered (for instance, 100 million pigs a year in the USA). I think it more likely that at first AI intelligence will evolve along a path so alien to us that neither side will recognize that the other possesses consciousness for a considerable period.

I too have enormous respect for Turing; I am chuffed to have sat in his office at his desk. A Turing Test is by definition a yardstick determined by whether one side thinks the other is human, so to ask for a variant which doesn’t gauge humanness would be like asking for a virgin Electric Iced Tea; nothing left to get juiced on. But if we’re looking for a way to tell whether an AI is intelligent without the cultural baggage, this question takes me back *cough* years, to when I was in British Mensa, the society for those who have high IQs (…and want to be in such a society). Two of the first fellow members I met were a quirky couple who shared that the man had failed the entrance test the first time but got in when he took the “culture-free” test, one which doesn’t have questions about cricket scoring or Greek philosophers.

He was referring to the Culture Fair test, which uses visual puzzles instead of verbal questions. That might be the best way we currently have to test an AI’s intelligence; I wrote a few days ago about how the physical world permeates every element of our language. An AI that had never had a body or physical world experience would find just about every aspect of our language impenetrable. At some point an evolving artificial intellect would have problems assimilating human culture, but it might have to have scaled some impressive cognitive heights first.

But what really catches my eye about your question is whether we can measure the compassion of an AI without waiting for it to evolve to the Turing level. It sounds too touchy-feely to be relevant, but one tweak – substitute “ethical” for “compassionate” – and we’re in critical territory. Right now we have to take ethics in AI seriously. The Office of Naval Research has a contract to study how to imbue autonomous armed drones with ethics. Automated machine guns in the Korean DMZ have the ability to take a surrender from a human. And what about self-driving cars and the Trolley Problem? As soon as we create an AI that can make decisions in trolley-like situations that have not been explicitly programmed into it, it is making those decisions according to some metaprogram… ethics by any standard. And we need, right now, some means of assuring ourselves of the quality of those ethics.
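To make the “metaprogram” idea concrete, here is a minimal sketch (all rule names and harm estimates are invented for illustration, not drawn from any real autonomous-vehicle system): when no explicit rule covers a situation, the decision falls through to a general fallback policy, and that fallback is, in effect, the machine’s ethics.

```python
# Toy sketch: an agent whose explicit rules don't cover a situation
# falls back on a general "metaprogram" -- in effect, its ethics.
# All rules, situations, and harm scores here are hypothetical.

def explicit_rules(situation):
    """Hand-coded responses for anticipated situations."""
    rules = {
        "pedestrian_ahead": "brake",
        "clear_road": "proceed",
    }
    return rules.get(situation)  # None if the situation is unanticipated

def metaprogram(options):
    """Fallback policy: pick the option minimizing expected harm.
    This is the part that amounts to an ethical stance."""
    return min(options, key=lambda o: o["expected_harm"])

def decide(situation, options):
    action = explicit_rules(situation)
    if action is not None:
        return action
    # Unprogrammed, trolley-like situation: the metaprogram decides.
    return metaprogram(options)["action"]

# A dilemma never anticipated by the rule table:
choice = decide("swerve_dilemma", [
    {"action": "stay_course", "expected_harm": 5},
    {"action": "swerve", "expected_harm": 1},
])
print(choice)  # -> swerve
```

The point of the sketch is that the quality of the machine’s ethics lives entirely in `metaprogram` – the part nobody wrote a specific rule for – which is exactly the part we currently have no standard way to audit.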

You may notice that this doesn’t provide a final answer to your final question. As far as I know, there isn’t one yet. But we need one.

Posted by Peter Scott

Peter Scott’s résumé reads like a Monty Python punchline: half business coach, half information technology specialist, half teacher, three-quarters daddy. After receiving a master’s degree in Computer Science from Cambridge University, he has worked for NASA’s Jet Propulsion Laboratory as an employee and contractor for over thirty years, helping advance our exploration of the Solar System. Over the years, he branched out into writing technical books and training.
Yet at the same time, he developed a parallel career in “soft” fields of human development, getting certifications in NeuroLinguistic Programming from founder John Grinder and in coaching from the International Coaching Federation. In 2007 he co-created a convention honoring the centennial of the birth of author Robert Heinlein, attended by over 700 science fiction fans and aerospace experts, a unique fusion of the visionary with the concrete. Bridging these disparate worlds positions him to envisage a delicate solution to the existential crises facing humanity. He lives in the Pacific Northwest with his wife and two daughters, writing the Human Cusp blog on dealing with exponential change.


2 Comments

Ethics versus compassion is an interesting dilemma, as I’ve found both to be quite vague. I’ve never seen a really convincing definition of a universal ethics that most (within and between species) would agree is ‘right,’ despite great efforts by Sam Harris and others. I suspect that humans would be very concerned about the willingness of superintelligent AIs to be compassionate toward humanity. Great book, Peter!

Well, that’s part of our existential problem: there is no good definition yet of what ethics means even for humans. Good for some people but not for a consensus means not good overall. My own take is that ethics are “defensive programming,” rules to be followed in situations where there’s no specific guidance. The Three Laws of Robotics are kind of a start, but don’t help much in our current dilemma.
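The “defensive programming” analogy can be sketched in a few lines (the guidance table and the conservative default are hypothetical examples, not a proposal for any real system): follow specific guidance where it exists, and everywhere else apply a cautious default rather than improvise.

```python
# Toy sketch of "ethics as defensive programming": specific guidance
# where it exists, a conservative default everywhere else.
# The request names and the default action are invented illustrations.

GUIDANCE = {
    "request_medical_aid": "comply",
    "request_weapon_release": "refuse",
}

def act(request):
    # Specific guidance first; otherwise the defensive default:
    # decline to act autonomously and defer to a human.
    return GUIDANCE.get(request, "defer_to_human")

print(act("request_medical_aid"))  # -> comply
print(act("novel_situation"))      # -> defer_to_human
```

The interesting question, of course, is what the default should be when deferring to a human isn’t an option – which is where the Three Laws run out of road.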