Posted
by
samzenpus
on Wednesday July 09, 2014 @08:59PM
from the why-did-you-program-me-to-feel-pain? dept.

meghan elizabeth writes If the Turing Test can be fooled by common trickery, it's time to consider whether we need a new standard. The Lovelace Test is designed to be more rigorous, testing for true machine cognition. An intelligent computer passes the Lovelace Test only if it originates a "program" that it was not engineered to produce. The new program—it could be an idea, a novel, a piece of music, anything—can't be a hardware fluke. The machine's designers must not be able to explain how their original code led to this new program. In short, to pass the Lovelace Test a computer has to create something original, all by itself.

Don't be, it's not nearly as difficult to get into as its reputation suggests (somewhat like Dark Souls in that respect, in fact). Just read some getting-started tutorials from the Dwarf Fortress wiki and play. The most-used keyboard shortcuts are no more difficult to learn than shortcuts in any other program, and DF always displays all available key commands anyway (although in some menus you might need to scroll). It's great fun even if you don't get all the complexities. It's somewhat like The Sims on steroids.

and there are quite a few human pairs for which one would not be able to convince the other that they were speaking intelligibly, either.

It is irrelevant; it is only necessary for one computer (however that's defined) to pass this test. I don't see how it's really any better than Turing's, though. It's a nice idea, but it seems even more vague than the Turing test.

I invented the soleless shoe. To fool the PHB and let me walk around the cube farm, almost barefoot.

My life's ambition (yet unfulfilled) is to invent a new crime. You'll hear about it and say: 'that has to be illegal'... Not as easy as it sounds. The damn 'Computer Fraud and Abuse Act' makes anything remotely related to a computer that a federal judge doesn't like a retroactive crime.

So the best I've got is giving dangerous advice on the 'net. BTW did you all know that you can make a miracle cleaning fluid...

Probably every day, BUT it does go to the point with this one. We're still trying to recreate an idealized human rather than actually focusing on what intelligence is.

My cat is undeniably intelligent, almost certainly sentient, although probably not particularly sapient. She works out things for herself and regularly astonishes me with the stuff she works out, and her absolute cunning when she's hunting mice. In fact, having recently worked out that I get unhappy when she brings mice from outside the house into my room, she now brings them into the back room and leaves them in her food bowl, despite me never having told her that that would be an acceptable place for her to put her snacks.

But has she created an original work? Well, no, other than perhaps artfully diabolical new ways to smash mice. But that's something she's programmed to do. She is, after all, a cat.

She'd fail the test, but she's probably vastly more intelligent, in the properly philosophical meaning of the term, than any machine devised to date.

You and I are constantly having original thoughts while walking and chewing gum at the same time; the thing is, they're not impressive enough to be called "original". The test in TFA just extends the psychologically comforting idea that intelligence is something unique to higher life forms. Yet when I was at school in the '60s, intelligence was generally considered to be unique to humans, and animals were generally considered to be instinctual automata, which likely explains why Turing defined AI as the ability to hold a conversation.

The most ridiculous part is "must not be able to explain how". That doesn't even make sense for humans! If you ask artists, they'll tell you what their influences are; if you ask critics, they'll tell you why this particular piece of art was made this way and not in a completely different manner.

Fun fact: any program with as-yet-unseen bugs that make its behavior totally unexplainable to its developers has passed the test. That gives you either an idea of the soundness of this crap, or a deep insight of w...

Well, no human alive today in any case. All so-called "original" works produced today are derivatives of older works (Shakespeare, folklore, etc.) or quirks produced by the artist's mental state. Among deceased artists, Van Gogh and Edgar Allan Poe are famous examples. Another reason why we should stop this "all rights reserved" nonsense of the traditional copyright system, where the artist is presumed to be a god who produces unique works out of nothing.

The machine's designers must not be able to explain how their original code led to this new program

That is a flatly ludicrous requirement, far in excess of any standard we would consider applying to determine whether even a human being is intelligent. Hell, if you were to apply that standard to human beings, ironically, many extremely intelligent people would fail that metric, because in hindsight you can very often identify precisely how a particular thought or idea came out of a person.

I do recall reading a while back about experiments done with AI in which programs compete for resources by generating programs to do tasks given to them (computing sums, etc.). Some programs generated code that was completely unexpected.

It raises the question of whether evolved programs are designed by the programmer, by the program, or by the process of evolution. And it also raises the philosophical question of whether we should be more humble and accept that the "creativity" we think makes humans intelligent could be nothing more than a process of the evolution of ideas (I hesitate to use the word meme) that we don't actually originate or control.

If we consider programs that can create things through evolution as "intelligent", that would ironically make natural selection intelligent, since DNA is a digital program that is evolved into complex things over time that can't be reduced to first principles.
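The sort of experiment described above can be sketched with a toy hill climber that evolves a short list of arithmetic instructions until it reproduces a target function. To be clear, everything below (the two-op instruction set, the target f(x) = 3x + 2, the mutation scheme) is an invented illustration, not the setup of any actual research:

```python
import random

# Each "program" is a list of (operation, constant) instructions
# applied in order to an input value.
OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def run(program, x):
    for op, k in program:
        x = OPS[op](x, k)
    return x

def fitness(program, cases):
    # Lower is better: total error over the test cases.
    return sum(abs(run(program, x) - y) for x, y in cases)

def mutate(program):
    # Replace one randomly chosen instruction.
    child = list(program)
    i = random.randrange(len(child))
    child[i] = (random.choice(list(OPS)), random.randint(-5, 5))
    return child

def evolve(cases, length=3, generations=2000, seed=0):
    random.seed(seed)
    best = [(random.choice(list(OPS)), random.randint(-5, 5))
            for _ in range(length)]
    for _ in range(generations):
        child = mutate(best)
        if fitness(child, cases) <= fitness(best, cases):
            best = child
    return best

# The target behaviour f(x) = 3x + 2 appears only as data, never as code.
cases = [(x, 3 * x + 2) for x in range(-5, 6)]
winner = evolve(cases)
print(winner, fitness(winner, cases))
```

The winning instruction list is frequently not the obvious encoding of 3x + 2 (there are many equivalent ones, e.g. multiply by 3, then add 1 twice), which is the weak sense in which evolved code can surprise the person who wrote the evolver.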

The machine's designers must not be able to explain how their original code led to this new program.

Whoa, whoa, whoa. I have a severe problem with this. This is like looking at obscurity and declaring it a soul. The measure of intelligence is that we can't understand it? Intelligence through obfuscation? There should be no way for a designer to be unable to figure out why their machine produced what it did, given enough debugging.

The way I interpret the test is that the output must not be something the machine was intentionally pre-programmed to produce. It's not that you couldn't debug it; making a machine truly undebuggable would obviously be impossible on anything short of a quantum computer.

On the other hand, I claim that if I train a neural network on some sheet music, it would be able to produce a new melody. And that melody would not be in any way pre-programmed (like a child learning from experience is not pre-programmed), and it will be original. Where can I collect my prize?
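For what it's worth, that claim doesn't even need a neural network: a first-order Markov chain trained on note sequences already spits out melodies that appear nowhere in its training data. The three-melody corpus here is made up for the sketch; real input would be parsed from MIDI files or scanned sheet music:

```python
import random
from collections import defaultdict

# Toy training data: melodies as lists of note names.
corpus = [
    ["C", "D", "E", "C", "E", "F", "G"],
    ["G", "E", "C", "D", "E", "F", "D", "C"],
    ["C", "E", "G", "E", "D", "C"],
]

def train(melodies):
    # First-order Markov model: for each note, record which notes followed it.
    transitions = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            transitions[a].append(b)
    return transitions

def generate(transitions, start="C", length=8, seed=42):
    # Walk the model, picking each next note among the observed successors.
    rng = random.Random(seed)
    melody = [start]
    while len(melody) < length:
        choices = transitions.get(melody[-1])
        if not choices:
            break
        melody.append(rng.choice(choices))
    return melody

model = train(corpus)
print(generate(model))
```

Whether a melody "learned" this way counts as original, or as pre-programmed with extra steps, is exactly the ambiguity in the Lovelace criterion.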

...then all the computer will have to do is string together a series of random English words till it puts together something that sounds like a short story written by a Hungarian first-grader for whom English is a second language.

I don't care what they call the test. It's useless if the grading rubric is rigged to allow any idiot to write something that passes. Now, if you'll excuse me, I'm going to go see if I can talk ELIZA into writing me something that would function as an epistolary novel.

It's actually happened a lot; it's called 'emergent behavior'. The paper is old, poorly thought out, and written by people who want other people to think they are smart, but aren't actually smart enough to do science, you know: philosophers.

remember kids: philosophers are to science what homeopaths are to medicine.

I know what emergent behavior is, I was merely making the point that it has already been observed in software systems and that it (at least from my POV) satisfies these requirements. (And what exactly is poorly thought out about Thompson's research?)

Without science, philosophy is useless. Philosophers have a bad habit of treating things as binary true or false, and statistical answers are not acceptable. No philosopher I know has made any sense of Quantum Mechanics or natural selection so far, and they are completely beholden to science in modern times. The only philosophy that's worth pursuing these days is the philosophy of science itself, but even that is hitting its limits. I've been in too many debates where philosophers try to label science as "logical...

Science was created because philosophy couldn't cut it. Galileo didn't bother trying to figure out the philosophical underpinnings of things rolling down planks or pendulum swings or the moons of Jupiter. He went straight to observations.

First, there is no "Scientific Method", with capital letters. There have been many philosophical attempts at trying to formally define science, but none are accurate and often fly in the face of how science is actually done.

If science didn't exist before attempts to formalize science, then you are saying that Galileo wasn't doing science. The practice came before the theory, and that is a recorded historical fact. You demonstrate precisely the problem with philosophers: the theory overrides...

Arguing whether science is a form of philosophy is like arguing whether the Game of Thrones TV show is an example of art. You don't necessarily have any disagreement about what science is (even though that's what everybody is focussing on); you have a disagreement on the definition of philosophy (which, like art, is notoriously hard to pin down).

You mischaracterize the debate. The debate is not about what either of those is, but whether science comes from philosophy, or developed as a complement/reaction to philosophy that has now far exceeded philosophy's capabilities. The corollary to that debate is the argument that if philosophy gave birth to science, whether philosophy is allowed to "pull rank" on science any time they hit a wall and claim credit for things as though science "owes" anything to philosophy for its existence. As though because th...

Except "human speech" can be anything. Complex language started out simple and people were experimenting then as well. Learning how to make fire and tools requires experimentation that would approach what we would call research.

Oddly, computer chess programs may already meet this criterion. The programs usually apply a weight or value to a move, and a weight or value to the consequences downstream of the move. But there are times when the consequences are of equal value at some event horizon, and random choices must be applied. As a consequence, sequences of moves may be made that no human has ever made and that the programmer could not really predict either. As machines have gotten more able, the event horizon has moved to a deeper level. But we might reach the point at which only the player playing white can ever hope to win, and the player with black may always lose. We are not in danger of a human ever being able to do that unless we alter his brain.
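The tie-breaking behaviour described above is easy to sketch; the move names and scores below are placeholders standing in for a real engine's move generator and search:

```python
import random

def pick_move(moves, score, seed=None):
    # Find the best evaluation, then break ties randomly. When several
    # lines evaluate identically at the search horizon, the choice is
    # genuinely unpredictable -- even to the programmer.
    rng = random.Random(seed)
    best = max(score(m) for m in moves)
    candidates = [m for m in moves if score(m) == best]
    return rng.choice(candidates)

# Hypothetical candidate moves and heuristic evaluations.
moves = ["Nf3", "e4", "d4", "c4"]
score = {"Nf3": 0.3, "e4": 0.5, "d4": 0.5, "c4": 0.2}.get
print(pick_move(moves, score, seed=1))  # either "e4" or "d4"
```

Run the selection over a long game and the engine produces move sequences nobody chose in advance, which is as close as conventional code gets to the "unexplainable output" the test demands.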

"The machine's designers must not be able to explain how their original code led to this new program". I know plenty of programmers who can't explain how the hell their code managed to produce certain results, and trust me, it has nothing to do with the servers mysteriously developing AI.

http://en.wikiquote.org/wiki/I... [wikiquote.org]
Detective Del Spooner: Human beings have dreams. Even dogs have dreams, but not you, you are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a... canvas into a beautiful masterpiece?
Sonny: [With genuine interest] Can you?

Just because someone sets some random people up for a five minute interview with a chatbot doesn't mean they're running a Turing Test.

Give people enough time to conduct a proper conversation, hell give them time to ask the chatbot for some original content. Do that and you'll be running a real Turing Test.

The reason you keep hearing about these simplified Turing Tests is those are the only tests people run because those are the only tests computers can pass. But passing a true Turing Test is still a great standard for detecting real AI, and something no one can even approach doing yet.

The great thing about the Turing test was that it was a black box. It did not depend on assumptions about what the designers knew, or what hardware was used, or the like. And so far the only test trials I have heard of have been carefully arranged one-on-one. Give us a dozen Ukrainian teenagers, and pick the one (or two) that are non-human - that's a better test run.

But, of course, the ultimate test of machine intelligence is when the computer can sue your ass off and win in the Supreme Court.

The Linda Lovelace test is when you make love to a lady and you can't tell if she's a human or a robot. I live in Thailand, and I have been involved in the Linda Lovelace test many times, including with my ex-wife.

A guy told me some 20 years ago that he read about an artificial life experiment in which a specially designed operating system was created to allow programs to execute code and, like computer viruses, reproduce themselves while competing for the resources to do so. He said the result was a program that copied itself very efficiently in a manner that the researchers found very hard to understand and was totally unexpected.

Sadly, he couldn't explain the details and didn't know the experiment, but if what...

This business of the developers not knowing how it works reminds me of the question: "How can God create a being that sins? Doesn't that make Him responsible?" One way to answer that is that God withdraws his authority within a locus that we call the "soul". What happens there isn't his action. This implies that while knowingly taking actions that lead to wrong is immoral, withdrawing your power from a particular locus and opening things up to potential wrongs is not immoral.

I've written music generators that produce "pleasant" music from scratch (by following time-tested harmonic, chord, and rhythm patterns and ratios). The music may pass the Lovelace test, but will probably never win any awards.
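A minimal version of such a rule-based generator might look like the sketch below, assuming nothing but a stock I-IV-V-I progression in C major and one chord tone per beat. The chord spellings are standard theory; every other choice is a toy assumption:

```python
import random

# Chords of a I-IV-V-I progression in C major.
PROGRESSION = {
    "I":  ["C", "E", "G"],
    "IV": ["F", "A", "C"],
    "V":  ["G", "B", "D"],
}

def generate_bar(chord, beats=4, rng=random):
    # One note per beat, each a chord tone: consonant by construction.
    return [rng.choice(PROGRESSION[chord]) for _ in range(beats)]

def generate_piece(seed=0):
    rng = random.Random(seed)
    piece = [generate_bar(chord, rng=rng) for chord in ["I", "IV", "V", "I"]]
    # End on the tonic so the phrase sounds resolved.
    piece[-1][-1] = "C"
    return piece

for bar in generate_piece():
    print(" ".join(bar))
```

Output like this is pleasant in roughly the way elevator music is pleasant, which is the point: it follows the rules it was given, and the Lovelace question is whether that counts as creation or just execution.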

The machine's designers must not be able to explain how their original code led to this new program.

So if we finally figure out how the human brain works, it will fail the Lovelace test just because we know how it works? A silly rule.

The Lovelace Test is not a great test if a machine has to create something original all by itself; a lot of real humans can't even do that, so a lot of humans wouldn't pass the Lovelace Test either.

It was passed as defined: 10 out of 30 judges (lay people) thought they were talking with a human when they were talking with a machine in 5 minute chat sessions. Whether passing this is any way significant is up for debate, but the test was passed.

The Turing Test was not passed, and the only people who claim it was are ignorant reporters looking for an easy story with a catchy headline and tech morons who also believe Kevin Warwick is a cyborg.

The test was rigged in every way possible:

- judges were told they were talking to a child
- who doesn't speak English as a primary language
- which was programmed with the express intent of misdirection
- and even then it only "fooled" 30% of the judges.

And, even after all that, Cleverbot [cleverbot.com] did a much better job back in 2011 with a 60% success rate.

This Eugene test outcome was a complete farce -- something to remind everyone that Warwick still exists and to separate the ignorant and sensational tech news trash rags from the more legitimate sources of information.

The Turing Test was not passed, and the only people who claim it was are ignorant reporters looking for an easy story with a catchy headline

Indeed. There's a lot of misinformation out there about what Turing originally specified. The test is NOT simply "Can a computer have a reasonable conversation with an unsuspecting human so that the human will not figure out that the computer is not human?" By that standard, ELIZA passed the Turing test many decades ago.

The test also doesn't have some sort of magical "fool 30%" threshold -- Turing simply speculated that by the year 2000, AI would have progressed enough that it could fool 30% of "interrogators" (more on that term below). The 30% is NOT a threshold for passing the test -- it was just a statement by Turing about how often AI would pass the test by the year 2000.

So what was the test?

The test involves three entities: an "interrogator," a computer, and a normal human responder. The interrogator is assumed to be well-educated and familiar with the nature of the test. The interrogator has five minutes to question both the computer and the normal human in order to determine which is the actual human. The interrogator is assumed to bring an intelligent skepticism to the test -- the standard is not just trying to have a normal conversation, but instead the interrogator would actively probe the intelligence of the AI and the human, designing queries which would find even small flaws or inconsistencies that would suggest the lack of complex cognitive understanding.

Turing's article actually gives an example of the type of dialogue the interrogator should try -- it involves a relatively high-level debate about a Shakespearean sonnet. The interrogator questions the AI about the meaning of the sonnet and tries to identify whether the AI can evaluate the interrogator's suggestions on substituting new words or phrases into the poem. The AI is supposed to detect various types of errors requiring considerable fluency in English and creativity -- like recognizing that a suggested change in the poem wouldn't fit the meter, or it wouldn't be idiomatic English, or the meaning would make an inappropriate metaphor in the context of the poem.

THAT'S the sort of "intelligence" Turing was envisioning. The "interrogator" would have these complex discussions with both the AI and the human, and then render a verdict.

Now, compare that to the situation in TFS where the claim is that the Turing test was "passed" by a chatbot fooling people. That's crap. The chatbot in question, as the parent noted, was not even fluent in the language of the interrogator; it was deliberately evasive and nonresponsive (instead of Turing's example of AIs and humans having willing debates with the interrogator); there was no human to compare the chatbot to; and the interrogators were apparently not asking probing questions to determine the nature of the "intelligence" (and it's not even clear whether the interrogators knew what their role was, the nature of the test, whether they might be chatting with AI, etc.).

Thus, Turing's test -- as originally described -- was nowhere close to "passed." Today's chatbots can't even carry on a normal small-talk discussion for 30 seconds with a probing interrogator without sounding stupid, evasive, non-responsive, mentally ill, and/or making incredibly ridiculous errors in common idiomatic English.

In contrast, Turing was predicting that interrogators would have to be debating artistic substitutions of idiomatic and metaphorical English usage in Shakespeare's sonnets to differentiate a computer from a real (presumably quite intelligent) human by the year 2000. In effect, Turing seemed to assume that he would talk to the AI in the way he might debate things with a rather intelligent peer or colleague.

Turing was wrong about his predictions. But that doesn't mean his test is invalid -- to the contrary, his standard was so ridiculously high that we are nowhere close to having AI that could pass it.

Turing was wrong about his predictions. But that doesn't mean his test is invalid

Imho it is. Suppose we manage to create a strong AI. It's fully conscious, fully aware, but for some quirk we cannot understand, it's 100% honest. Such an AI would never pass the Turing test, because it would never try to pass itself off as human, and any intelligent human could ask it questions that only a machine could answer in limited time.

Turing was wrong about his predictions. But that doesn't mean his test is invalid

Imho it is.
Suppose we manage to create a strong AI. It's fully conscious, fully aware, but for some quirk we cannot understand, it's 100% honest.
Such an AI would never pass the Turing test, because it would never try to pass off as human

That sounds like a legit point at first, but think about it for a sec. Programming a computer to lie and be evasive about its nature is easy, and many chatbots can already do that. Asking a strong AI "are you a computer?" or "what did you have for breakfast?" would not be useful for evaluating intelligence. Getting the AI to debate an intellectual topic, on the other hand, will be less likely to require deception but would be a better measure of intelligence. That's another fundamental point people miss: The point of the Turing test was to imitate human INTELLIGENCE, NOT to pretend to be a physical human.

A knowledgeable interrogator trying to evaluate intelligence would thus likely be more interested in asking intellectual questions, rather than queries just designed to test whether the computer can make up some nonsense about itself.

Good points. I would add one more: people lie. There is nothing to stop the human comparison from lying and saying he was a computer as well. If both say they are a computer, that should level the playing field so that they both need to be judged on the merits of the debate.

Exactly. That's part of my point. A lot of people are acting like the test was "passed" by an AI pretending to be a Ukrainian teenager conversing in his non-native language and acting like an evasive weirdo. Turing's standard for "intelligence" was obviously much higher. It sounds like his AI would probably be pitted against an adult human from the top 5-10% of intelligence in his test.

And isn't that a potential standard for evaluating when true AI has arrived? No one would have cared about Deep Blue...

I think Watson would be able to give its real age by finding the information rather than recalling it, although it might get confused by progressive versions. AI can also produce a picture of a generic rabbit, or a cat as the case may be [blogspot.com.au].

The thing that Watson (and AI in general) has difficulty with is imagination: it has no experience of the real world, so if you asked it something like what would happen if you ran down the street with a bucket of water, it would be stuck. Humans who have never run with a bucket of water will automatically visualise the situation and give the right answer, just as everyone who read the question has just done in their mind. OTOH a graphics engine can easily show you what would happen to the bucket of water, because it does have a limited knowledge of the physical world.

This is the problem with putting AI in a box labeled "Turing test", it (arrogantly) assumes that human conversation is the only definition of intelligence. I'm pretty sure Turing himself would vigorously dispute that assumption if he were alive today.

The question and answer method seems to be suitable for introducing almost any one of the fields of human endeavour that we wish to include. We do not wish to penalise the machine for its inability to shine in beauty competitions, nor to penalise a man for losing in a race against an aeroplane. The conditions of our game make these disabilities irrelevant.

By definition, one in three means it failed to convince the average layman; when it gets better than one in two, I will give it a pass.

Personally I think it's achievable today [youtube.com], but as much as I admire Turing, it's entirely irrelevant to the question of intelligence. It's mostly philosophical masturbation by people who misunderstand the modern definition of intelligent behaviour. For example, I can't get a sensible reply when asking an octopus about its garden, but there is no denying it's a remarkably intelligent...

They are shifting them again. This new test includes this requirement: The machine's designers must not be able to explain how their original code led to this new program. So now anything we understand is not intelligence??? So if someone figures out how the brain works, and is able to describe its function, then people will no longer be intelligent? Intelligence is a characteristic of behavior. If it behaves intelligently, then it is intelligent. The underlying mechanism should be irrelevant.

So if someone figures out how the brain works, and is able to describe its function, then people will no longer be intelligent? Intelligence is a characteristic of behavior. If it behaves intelligently, then it is intelligent. The underlying mechanism should be irrelevant.

No.

you describe "behaviorism" which is a thoroughly discredited and reductive theory

the ***whole conversation*** is about ***the underlying mechanism***

the "Lovelace Test" is more rigorous, but how it will affect computing I cannot say, b...

One of my friends is a philosophy post-doc, and he has told me many times that in philosophy the gold standard for intelligence is intelligent behaviour. Of course he has some footnotes to add, notably that intelligent things can appear to be bricks if you cut off all their actuators, but to say that this particular variant of 'behaviourism', as you call it, is discredited is disingenuous. In particular, if one could hypothetically replace someone's brain with a computer and not know the difference, the...

That's a reasonable statement to make, and if you're disagreeing with that statement, you need to say why. Converting it to a strawman and then making a bald claim that the strawman is "discredited" is a cheap rhetorical trick.

And then you go on to talk about free will, which has no direct relationship with intelligence anyway. OK, I get it, you want to turn the conversation around to being about free will, because that's your ax, but telling someone their perfectly reasonable statements are "simply wrong" is a shitty way to do it.

OP's point, which you're deliberately missing, is that whatever intelligence is, it is not an observer-relative thing which demands that the observer be unaware of the mechanism. If you want to engage in debate with him, try addressing that specific point, rather than a bunch of points he never made about a subject he's not discussing. And if you want to talk about free will and about how behaviourism is "discredited", maybe you could at least make a couple of points in favour of that argument, for those of us who might be interested anyway. Maybe then we can see how your belief relates to what is actually being said.

Anyway, what you're both missing is the practical issue with "The machine's designers must not be able to explain how their original code led to this new program." The machine's designers can lie, or be incapable of coming up with an explanation despite one existing, so this is a completely ill-defined criterion - which is what we're trying to get away from.

I heard a great anecdote about this from an MIT professor on youtube [youtube.com]. Back in the '80s the professor developed an AI program that could translate equations into the handful of standard forms required by calculus and solve them. A student heard about this and came calling to see the program in action. The professor spent an hour explaining the algorithm; when the student finally understood, he exclaimed, "That's not intelligent, it's doing calculus the same way I do".

One of Feynman's memoirs includes the haha-only-serious observation that mathematical theorems are either unproven or trivial, and this is simply a re-statement of the same principle.

And actually, there's a lot of speculation about whether colonies exhibit intelligence or consciousness (eg Hofstadter's Aunt Hillary, but also Jack Cohen & Ian Stewart's Heaven - they also did the Science of the Discworld series with pterry).

I don't think "a chatbot isn't AI and hasn't been since the 1960s when they were invented, whether you call it a doctor or a Ukrainian kid doesn't make any difference" counts as shifting the goalposts.

Furthermore, reproducible results are an important part of science. Let him release his source code, or explain his algorithm so we can reproduce it. Anything less is not science.

One of the things I love about programming is the moment you have to remind yourself that your program is simply executing algorithms that you told it. Depending on how clever the algorithms are it can appear as if the computer is thinking for itself. Programming allows you to encode intelligence in non-thinking machines.

No... programming does not encode intelligence in a machine. Intelligence indicates the ability to think for itself and come up with a creative answer that isn't part of its original programming. When you write a program, all you are doing is telling the computer what to do given a specific input. There is no intelligence involved.