Could the Turing test, as originally posed, be impossible for a machine?

When Alan Turing sat down to devise a test of whether a machine could be described as intelligent, he chose to describe a test based on the imitation game.

‘The new form of the problem can be described in terms of a game which we call the 'imitation game'. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game is to determine which of the other two is the man and which is the woman.’

If, as I believe, intelligence can be described as knowledge applied for a purpose, then this test poses an impossible challenge for any machine.

The real purpose of the imitation game is to test the interrogator's ability to identify which of A or B is a potential rival and which a potential mate.

While a machine could be programmed to take one of the roles in the game, it has no real stake in the ‘mate or rival’ question.

Closing Statement from Seamus McGrenery

Thank you to everyone who took part in the conversation.

The reason I asked the question was that I was struck by how, in refining the description of a test for machine intelligence, Alan Turing had used an example of a binary choice that is meaningless for machines - that of mate or rival.

Artificial is a word we humans often use to describe the things we make. The Latin root of the word relates to skill. We are, of course, impressed by the skill of our species in creating all manner of machines.

Humans are animals and, as far as I can see, what we do is ultimately motivated by the survival of ourselves, our families, our species.

In evolutionary terms we are living in a period of extraordinarily rapid change. In less than one hundred thousand years our species has become the dominant one on the planet. But the last hundred years have seen our numbers double, then double again. We have become very adept at making tools to promote our species. As animals we must be doing something right.

Maybe we should see all of our tools, including computers, first and foremost as things which are helping us to thrive.

We have had less than a century to get used to computers. In gaining an understanding of them it was natural to think in terms like 'electronic brain' or 'electronic mind'.

Yet if the computer is really an artificial replication of our brain, then why are we so poor at math? Maybe that is a different discussion.

Aug 16 2012:
I think it's possible, but not gonna happen any time soon. As of right now, there are a lot of differences between a human and a machine. However, there are some pretty stark similarities too. We both process information, have memory, etc.

Once we can understand enough of how the human mind works, and even emotions, we might be able to replicate it artificially (not via genetic engineering). However, when we get to this point, the question is no longer "could we" but "should we."

Aug 20 2012:
"I wonder though if such isolated thoughts and feelings, unconnected to a living body, could actually work in a meaningful way."

Imo, this is what humans already do. An arm or a leg is simply a machine, a hardware piece, being controlled by the brain, which has a special CPU for processing and sending/receiving data and a hard drive for storing memory. The only difference is that the brain also has thoughts and feelings to make decisions with.

If machines ever become as intelligent as humans and have the same sentimental feelings as we do, then the difference between a machine and a human would pretty much disappear.

We can only then treat the machines like humans - like a parent taking care of a child, or like best friends - for we would be at the mercy of that child in the future. We shouldn't view them as monsters, despite their capacity for immense negative impacts (just like humans and would-be criminals); we should view them as friends, as someone you care for. A machine and a human would become one and the same race of intelligent life.

Aug 22 2012:
I don't think the relationship itself between mind and body is that complex. A single muscle is really just a bunch of strings/fibers that can only pull and do nothing else. The body is just a complex machine, as in it's a machine that's made of a bunch of simple machines (pulleys, gears, wedges, etc.), just like a car or a computer. The brain just sends an electrical signal to a muscle, causing it to pull and create tension force.

In computers, they use a bunch of bits, the 0's and 1's, which are the same thing as an on/off light switch. In the simplest form, when the switch is on, the hardware will do a certain thing; if it's off, it does nothing.
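The on/off switch idea can be sketched in a few lines of Python. This is just a toy illustration of the analogy, not anything from the discussion itself - the function name and the "actions" are made up:

```python
# Toy model of the light-switch analogy: a single control bit
# decides whether the "hardware" does anything at all.

def hardware_step(bit: int) -> str:
    """Return what the 'hardware' does for a given control bit."""
    if bit == 1:
        return "do a certain thing"   # switch on: perform the action
    return "do nothing"               # switch off

print(hardware_step(1))  # -> do a certain thing
print(hardware_step(0))  # -> do nothing
```

Everything a computer does is ultimately built from huge numbers of such yes/no decisions wired together.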

I think the complexity is all in the mind. I have no idea how a thought or an emotion works. I know that hormones and drugs can manipulate someone's emotion, but is the emotion really that simple? Or why is it our inherent nature to want to live, and why would some people choose suicide?

Aug 24 2012:
Personally I do think that the relationship between mind and body is complex. Let me suggest two areas where some of that complexity comes from.

First our mental model of the world is built on the physical capabilities of our bodies. This is a vital necessity in all animals. All animals need to automatically know how fast they can run or how small a gap they can fit through if attacked by a predator.

Secondly, much of our thought process reuses ideas from our bodies. Our language is littered with examples like 'hunger for success'. This is a subtle and far from trivial link. For example, in experiments, interviewers who briefly held a warm cup before the interview were much more likely to hire than those who held a cold cup.

Aug 24 2012:
Hmm, well, it is definitely a more complex feeling to control your body directly, compared to controlling a pencil or a tool, which are extensions of the body. But the complexity still lies solely in the mind and how it interacts with the hardware it's been given, because it is the mind that's self-aware, not the body.

"All animals need to automatically know how fast they can run or how small a gap they can fit through if attacked by a predator."

I mean sure they do it innately because their brain would be in "panic-escape-button" mode, but it doesn't mean the animals themselves are self-aware of their own actions.

Aug 27 2012:
I think that AI can be used in strictly defined situations. In the Turing test I do not believe that AI would be a requirement. It is simply looking for indicators. Discriminators can be entered into most programs.
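As a rough illustration of "looking for indicators", a crude discriminator could just score a reply against a list of indicator phrases. This is my own hypothetical sketch - the function name and the phrase list are invented for the example, not something Turing or the commenter specified:

```python
# Hypothetical sketch: score a reply by counting how many
# "indicator" phrases it contains. A real discriminator would
# of course be far more sophisticated.

INDICATORS = ["I feel", "my childhood", "my family"]  # assumed example phrases

def indicator_score(reply: str) -> int:
    """Count how many indicator phrases appear in the reply."""
    reply_lower = reply.lower()
    return sum(1 for phrase in INDICATORS if phrase.lower() in reply_lower)

print(indicator_score("I feel that my family matters"))  # -> 2
print(indicator_score("The weather is nice"))            # -> 0
```

The point of the comment stands either way: such pattern matching needs no intelligence on the program's part.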

Intelligent machines must be limited to answering questions. Giving them any kind of internal purpose could result in loss of control. The moment such a machine becomes smarter than we are, it becomes completely unpredictable.

By the way, I would not consider the 'imitation game' as a new form of Turing's test. As you point out, the imitation game requires an internal purpose, whereas Turing's test did not. This is a very significant qualitative difference.

Aug 22 2012:
" If the link with spreading genes is severed from the trait of intelligence what is intelligence for? "
Your hands are developed for spreading genes too.
But they can also grab stuff.
A robot hand can grab stuff.

" And if technology has actually, in some way, started its own replication process how would artificially intelligence fit in with that? "

Aug 24 2012:
My own personal definition of intelligence is knowledge applied for a purpose.

It can be very easy for us to fall into the trap of believing that intelligence is capability in the things that we are good at. For example, some people think that only those with STEM degrees should be allowed to vote. This seems to me to ignore why intelligence evolved.

To me intelligence is a capacity that has developed to help enable biological organisms to spread.

Going back to your analogy;
My hands grab stuff, ultimately to help ensure the spread of my genes.
I can also use a robot hand to ultimately ensure the spread of my genes.
A robot does not have genes to spread, so its hands are ultimately used for someone else's purpose, not its own.

Aug 25 2012:
If intelligence is "knowledge applied for a purpose", then we already have A.I.

"A robot does not have genes to spread so its hands are ultimately used for someones purpose, not its own."

What's the difference? What is physically different about the grabbing in either case?
Take something easier: a needle on a plant. Its purpose may be to help spread genes, but there is nothing special about the needle that couldn't be perfectly imitated, even if it served another purpose. Right?