
Part of the deal with Jeopardy! is that the 2010-2011 season will include a televised episode in which record-breaking champ Ken Jennings will play against Watson, with a to-be-named-later champion in the third slot. This has been in the works since 2009, but the show's staff finally thinks the system is ready for its televised match.

One key factor is how human behavior will change when prize money is at stake. Jennings has proven in numerous appearances on GSN that he's willing to play in any test of knowledge, and the fact that he knew he was Jeopardy!'s first millionaire in regular-season play didn't stop his long Jeopardy! run. He also studied for the show, particularly alcoholic beverages (which he doesn't drink), because he had seen the Potent Potables category on TV.

But what about that player-to-be-named-later? Will they know more than the grad students... and play the game not as if it's for points, but for real dollars?

H&R Block's mainframe system has computed that all of the offers on Deal or No Deal are bunk; you're always statistically better off sticking with your case through most of the game... but they're still unsure whether you should take Howie's offer to swap your case with the last one left in the hands of the models.

A tragedy, as the Stanford-built computer made to play Russian Roulette was not in the lead when time was called in the second round, and was dropped by

H&R Block's mainframe system has computed that all of the offers on Deal or No Deal are bunk; you're always statistically better off sticking with your case through most of the game... but they're still unsure whether you should take Howie's offer to swap your case with the last one left in the hands of the models.

That's interesting to me for a couple reasons.

I loathe that show. I'm not part of the anti-pop-TV brigade, I just find it incredibly boring. I tell my wife it's like watching someone throw dice against the wall for 42 minutes (DVR!), interspersed with crap dialogue.

Admittedly I'd never thought about the last-two-cases scenario (the one you started with, and a final model's case), but I would have thought that the probability maths behind this would be pretty simple... do you have a link for that H&R Block

I don't care for the show myself, but don't forget about the nonlinear utility function of money. As an illustration, given that I'm comfortable but not wealthy:

If given a choice between a guaranteed $400 and a 50-50 shot at $1,000, I'd choose the latter. The money wouldn't have a major impact on my life, so I'd go for the option with the best expected return.

If given a choice between a guaranteed $400,000 and a 50-50 shot at $1,000,000, I'd take the guaranteed $400,000, even though the expected return of the latter is $500,000. $400,000 would give me a substantial amount of freedom and security. An additional $600,000 beyond that would be nice but would provide relatively few benefits compared to the initial $400,000.
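To make the nonlinear utility point concrete, here's a quick sketch in Python. The log-utility function and the $100,000 starting wealth are my own assumptions, not anything from the parent post:

```python
import math

def log_utility(wealth):
    # "Happiness" as the log of total wealth (an assumed, common utility model)
    return math.log(wealth)

def prefers_gamble(wealth, sure, lo, hi, p=0.5):
    """True if a 50-50 shot at hi (else lo) beats the sure amount,
    judged by expected log-wealth rather than expected dollars."""
    sure_u = log_utility(wealth + sure)
    gamble_u = p * log_utility(wealth + hi) + (1 - p) * log_utility(wealth + lo)
    return gamble_u > sure_u

w = 100_000  # hypothetical "comfortable but not wealthy" starting wealth
print(prefers_gamble(w, 400, 0, 1_000))                   # True: small stakes, take the gamble
print(prefers_gamble(w, 400_000, 0, 1_000_000))           # False: life-changing, take the sure thing
print(prefers_gamble(10_000_000, 400_000, 0, 1_000_000))  # True again once you're already rich
```

Note that the model reproduces all three of the parent's choices, including the millionaire flipping back to the gamble.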

Now, if I were already a millionaire, I'd most likely choose the 50-50 shot at $1,000,000.

A good formula for "Expected Happiness", from the Wizard of Odds [wizardofodds.com] (4th question down):

When the prizes become life-changing amounts, the wise player should play conservatively at the expense of maximizing expected value. A good strategy should be to maximize expected happiness. A good function to measure happiness I think is the log of your total wealth. Let's take a person with existing wealth of $100,000 who is presented with two cases of $0.01 and $1,000,000. By taking "no deal" the expected happiness is
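Finishing that worked example in code (natural log here; the base doesn't affect the comparison, and the certainty-equivalent step at the end is my own addition, not the Wizard's):

```python
import math

wealth = 100_000           # existing wealth from the example
cases = [0.01, 1_000_000]  # the two briefcases left

# Expected happiness of "no deal": average log of final wealth over the cases
no_deal = sum(math.log(wealth + c) for c in cases) / len(cases)

# The banker offer that yields the same happiness (certainty equivalent)
ce_offer = math.exp(no_deal) - wealth
print(round(ce_offer))  # a sure offer above this (~$232K) beats holding out
```

So under log utility, a player in that spot should take any deal much above $232K, well below the $500K expected dollar value.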

That $400,000 is probably closer to $200,000, and it means you're going to need to fight off a simply incredible number of get-rich-quick artists.

Being comfortably well off is extremely desirable. Being quite rich has lots of good points. Winning a huge monetary prize is usually quite destructive. (I'm not sure, though, that $400,000 counts as huge. That's less than the cost of a good house.)

- You pick one of the [26] cases at the beginning. Instead of focusing on amounts, let's call one a "winning" case (the top prize) and all the others losers.

This is a bad approach, since the amounts have a lot to do with it. The amounts mean that there isn't a "winning case" and 25 "losers", but instead 26 cases of various degrees of "win".

Also, the offers are some kind of function of the average of the remaining cases (I haven't actually checked mathematically, but I'm sure it amounts to something like that), so which "losing" cases you eliminate changes your strategy drastically.
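The actual banker formula has never been published, so purely as an illustration, here's a hypothetical model where the offer is the mean of the remaining cases scaled by a factor that ramps up as the game goes on (both the 30% floor and the linear ramp are invented):

```python
# Hypothetical sketch: model the banker's offer as the mean of the remaining
# cases, scaled by a round-dependent factor (assumed ramp from 30% to 100%).
def banker_offer(remaining, round_num, max_rounds=9):
    mean = sum(remaining) / len(remaining)
    factor = 0.3 + 0.7 * (round_num / max_rounds)
    return factor * mean

cases = [0.01, 1, 5, 1_000, 50_000, 1_000_000]
print(banker_offer(cases, round_num=4))
```

With the factor reaching 1.0 in the final round, the last offer converges to the plain mean, which matches the "some function of the average" guess and explains why knocking out big cases tanks the offers.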

In short, the best strategy for the game is not to "pick the winning case", but

Don't the same forces at work in the Monty Hall Problem [wikipedia.org] make it not 50/50, or does the fact that the eliminated "losers" were randomly selected without foreknowledge that they're losers make it 50/50?
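It's the latter: random elimination without foreknowledge does make it 50/50. A quick simulation shows both cases (the trial setup here is mine, not from any show rules):

```python
import random

def trial(informed_host):
    doors = [0, 0, 1]           # one winner among three
    random.shuffle(doors)
    pick = 0                    # contestant always picks door 0
    others = [1, 2]
    if informed_host:
        # Monty Hall: host knowingly opens a losing door from the other two
        opened = next(d for d in others if doors[d] == 0)
    else:
        # Deal or No Deal style: a random door opens; discard trials that
        # happen to reveal the winner, since the game ends differently then
        opened = random.choice(others)
        if doors[opened] == 1:
            return None
    switch_to = (set(others) - {opened}).pop()
    return doors[switch_to] == 1  # did switching win?

def switch_win_rate(informed_host, n=100_000):
    results = [r for r in (trial(informed_host) for _ in range(n)) if r is not None]
    return sum(results) / len(results)

print(switch_win_rate(True))   # ~0.667: classic Monty Hall, switch
print(switch_win_rate(False))  # ~0.5: random elimination, no edge either way
```

The Monty Hall advantage comes entirely from the host's knowledge; when the "losers" were opened blindly, conditioning on not having seen the winner leaves the last two cases symmetric.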

and see students from the MIT Robotics Lab test their machine, which they say can avoid the Bankrupts and find the Million Dollar wedge on Wheel of Fortune

With a little empirical testing, it should be possible. Bankrupts are at known positions on the wheel, and you know the starting location. If you can model the physics of the wheel well enough, you can easily avoid them. (Unless the wheel has external influences - e.g., a brake and a motor that randomly add and remove energy from the wheel, making it
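As a toy version of that modelling, assume uniform frictional deceleration, so the total angle travelled from a push of angular speed omega is omega^2 / (2*alpha) from basic kinematics. The wedge count, Bankrupt positions, and friction constant below are all invented:

```python
import math

WEDGES = 24                      # assumed wedge count
WEDGE_ANGLE = 2 * math.pi / WEDGES

def landing_wedge(start_angle, omega, alpha=1.2):
    # Kinematics: with constant deceleration alpha, travel = omega^2 / (2*alpha)
    travel = omega ** 2 / (2 * alpha)
    final = (start_angle + travel) % (2 * math.pi)
    return int(final / WEDGE_ANGLE)

# Search push strengths for one that avoids hypothetical Bankrupt wedges 5 and 17
bankrupts = {5, 17}
omega = next(w / 10 for w in range(20, 80)
             if landing_wedge(0.0, w / 10) not in bankrupts)
print(omega, landing_wedge(0.0, omega))
```

Any brake or motor on the real wheel would of course invalidate the constant-deceleration assumption entirely.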

It was ridiculously easy to beat - so much so that I wondered if there was something wrong with the website. My final score was 59-11, and that was only because I typoed two of the answers (Watson is, I'm sure, a much better typist than me).

Even when I mentally tried to score it for "late" correct/incorrect answers (the website shows you Watson's answer even if you answer correctly), I'm pretty sure Watson would have ended up in the high-teens.

Well, yeah, it's easy to beat; it's not playing. This is just a canned Flash demo of answers (and possible answers) that it came up with. It gives you the opportunity to answer first every time, so it is never going to 'beat' you if you know the answer. Looking at the possible answers it considered is way more interesting than trying to beat this demo. For instance, one of the 'what me worry' answers was 'scratching' (which it did not get right), but one of the answers it considered (along with eczema) w

I killed it, too, but mainly because of the rules of the game in which I get first crack at the answer.

Still, I think the point is that it's impressive the number of questions it gets right. I really didn't miss very many. My mental tally had it getting around 70% or so, which is pretty damn good. I got around 80%, but again, I had first crack at the answer, so Watson could have only possibly scored around 20% of the answers at best. If it tallied your score and Watson's score without actually competing

Back to the drawing board my arse. You may have beaten the web page game easily, but consider the fact that you wouldn't get first shot at every single question in a real match. If you assume 50-50 on the ring-in time when both players think they have the right answer, I think Watson looks pretty good on this game (although they wouldn't have put an example of a particularly bad match on the web).

It also has a probability threshold below which it won't attempt a response. The real machine must be set up

If it has been, then the breed doesn't matter: from what I read, those birds that Hollywood stars clean for publicity only live a few days anyway, with all the oil they've ingested from trying to clean themselves.

I wonder if a website where people subscribed to artificial friends, shrinks, and lovers would be a viable business model if it were as good at mimicking these things in conversation. An Eliza frontend on this Jeopardy beast might work. Plus, Eliza was always giving questions as answers too!!
I'd rather talk to a computer program about certain things anyway... and this one *would* be connected to the internet and would home in on your tastes quickly.
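For flavor, the classic Eliza trick is just pattern matching plus pronoun reflection; here's a minimal sketch (the patterns and wording are invented, not Weizenbaum's originals):

```python
import re

# Swap first/second person so the user's words can be echoed back
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "am": "are", "you": "I", "your": "my"}

def reflect(text):
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(statement):
    # First matching pattern wins; the last one is a catch-all
    for pattern, template in [
        (r"i feel (.*)", "Why do you feel {}?"),
        (r"i am (.*)", "How long have you been {}?"),
        (r"(.*)", "Why do you say that {}?"),
    ]:
        m = re.match(pattern, statement.lower())
        if m:
            return template.format(reflect(m.group(1)))

print(respond("I am worried about my computer"))
# How long have you been worried about your computer?
```

True to form, every answer is a question.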

Anybody can write a quiz game that beats you every time. What is (or might be) impressive is how you are beaten. If I understand this, the impressive part is supposed to be that the system processes the questions as natural language, generates a number of matches, then chooses the one that it "thinks" matches the sense of the question most closely.

That's a pretty tall order, but still, as a demo, the app isn't impressive, because the app designer chooses the

Both Watson and Google rely on statistically analyzed word occurrences with a veneer of procedural/rule intelligence. Watson seeks the correct answer, while Google seeks the most popular document containing the answer. Neither incorporates "deep" understanding like CYC's rule ontology. But that may not be necessary if the statistics are large enough.
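As a toy illustration of the statistical approach (the three-line "corpus" and the scoring rule here are made up for the example, not how Watson actually works):

```python
from collections import Counter

# Tiny stand-in corpus; a real system would use millions of documents
corpus = [
    "watson was built by ibm to play jeopardy",
    "deep blue was built by ibm to play chess",
    "eliza was an early natural language program",
]

def score(candidate, keywords):
    # Count how often the candidate appears in lines containing question keywords
    return sum(Counter(line.split())[candidate]
               for line in corpus
               if any(k in line.split() for k in keywords))

keywords = ["jeopardy", "ibm"]
candidates = ["watson", "eliza"]
best = max(candidates, key=lambda c: score(c, keywords))
print(best)  # watson
```

No "understanding" anywhere, just co-occurrence counts, which is the point: scale the statistics up far enough and it starts to look like comprehension.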

It's possible, though, that the statistics can never be "large enough". I remember seeing an article here about natural language speech recognition (oh, here [posterous.com] it is) and about how many companies had hoped to continually feed more and more examples of language use into a computer and, through statistical analysis, be able to develop human-level speech recognition. The article indicated that these companies found a point after which additional examples didn't help. The statistical analysis (at least the met

...it's a "pre-recorded" session. I played the NYTimes game twice in a row. The first time, the computer beat me. The second time, given the exact same questions, I handily beat the computer. I learned from my mistakes and was able to apply that knowledge to a similar scenario improving the outcome on my end. It's definitely an interesting study.

Because, like Deep Blue in its time, it requires much more computing power than today's typical web site or PC. Chess has finally been cracked to the point that there are now effectively unbeatable AIs available to the average user, but Jeopardy! hasn't, which is why this is novel. It'll take several more years of computing power increases before we'll be playing this AI on our home video game systems.

This is a given, since there aren't even any AIs yet, so there could be no unbeatable AIs.

I admit that as a programmer this is a bit of a pet peeve of mine. But people: computers are NOT "intelligent". At all. They do what they do by performing the same specific instructions over and over and over again. That's what computers are good at. Granted, they do more complex things today, but that's just due to clever programming and the ability to do many more instructions many times faster than before (i.e., better hardware and software). But that's just quantity. What it lacks is quality. No matter how clever a combination of processors and program may seem at some specific task, it totally lacks the quality we call "intelligence". To call anything running on a computer today "intelligent" is to undermine the word itself. You might as well call a rock an airplane.

Researchers in this field 20 years ago would have been appalled at what people today refer to as "AI". Of course they would also be appalled at the lack of progress in that same field, but that's another matter.

I am not pointing fingers at the posters here. They are just using "AI" in the way it has become commonly used. But that is an erroneous use and I would be happy to see the practice stop. It gives people the wrong idea.

If we (erroneously) call what occurs today "intelligent", then if something ever really did become intelligent, what would we call it?

Intelligence isn't a discrete property, nor is sapience, which I think is what you actually mean. Anyway, it's called Artificial intelligence, i.e. not real, and to answer your final question I'm inclined towards the Synthetic Intelligence moniker.

"Artificial" in this context was not meant to mean "not real". It is used as in "created artificially". Look up the word "artifice".

Certainly, given research, you could call a dog or cat "intelligent" to a degree. There does seem to be a "quantity" element involved. But what I am saying is that no quantity of the quality of computing we are capable of today even remotely approaches intelligence.

I prefer the term "Artificial Stupidity" or "Artificial Stupidity in Software" (ASS), but I'm afraid much of Congress wouldn't even consider funding studies in this direction for fear that they'd finally have competition.

Joking aside, that's a pet peeve I don't fully understand, partly because we've already lost the war on defining "hacker"; the mainstream press has managed to twist and desecrate that term in spite of the campaigns on our side of the fence. There isn't much hope for AI "purists" in this

I prefer the term "Artificial Stupidity" or "Artificial Stupidity in Software" (ASS), but I'm afraid much of Congress wouldn't even consider funding studies in this direction for fear that they'd finally have competition.

There would be no competition. I highly doubt we could create an ASS that was as dumb as Congress.

To call anything running on a computer today "intelligent" is to undermine the word itself. You might as well call a rock an airplane.

I didn't realize "intelligent" had such a clear definition that you could really say anything meaningful about whether a machine was "intelligent" or not.

If we (erroneously) call what occurs today "intelligent", then if something ever really did become intelligent, what would we call it?

I don't know... perhaps we'd make a bit of progress and realize that "intelligence" isn't some nice single concept to just nail down, like mass, that we can all agree on what is and isn't. We might even come up with 10 very different words to describe things we now use the word "intelligence" for, since we might actually have a better grasp on what it actually is. If you ask me, intelligence is more about human ego than any real hard definitions. In many people's minds, computers can never be intelligent because it would bring our self-opinion down a notch or two. That's why many people were so upset about Kasparov getting schooled by Deep Blue 10+ years ago, and then made up a bunch of excuses why it wasn't fair.

Whether a machine "intelligently" plays Chess, or is "intelligent" is a stupid question. What's more interesting is how we might accomplish the same task in different ways than our brains might do so. 40 years ago nobody ever thought a computer could be programmed to play even a decent game of chess. These days it's surpassed us. I think that says more about what we think is "intelligent" than anything else.

I don't know... perhaps we'd make a bit of progress and realize that "intelligence" isn't some nice single concept to just nail down, like mass, that we can all agree on what is and isn't. We might even come up with 10 very different words to describe things we now use the word "intelligence" for, since we might actually have a better grasp on what it actually is. If you ask me, intelligence is more about human ego than any real hard definitions. In many people's minds, computers can never be intelli

How convenient! A theory about intelligence which means that we have actually already created AI!

Not paying attention. This isn't even anything approaching a theory. It's a critique of how people think about intelligence and what it is. A theory would have to entail a definition of what intelligence is. I'm saying we don't even have that, and that it's likely not something that's really possible.

Computers are not intelligent because they are unable to reason. They iterate until they achieve an optimal solution to a specific set of rules.

Could you define "reason"? The AI field worked for a while (until the '80s) to build machines that reason. There were some successes with expert systems and the like (which, given sufficient data, could "reason" according to the standard definition). The problem with them is that they need complete information, so they never really made it out of the lab. Current work has turned instead from "reasoning" systems to Bayesian inference engines, which use complicated statistical methods and approximations
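For reference, the core of those Bayesian engines is just Bayes' rule, updating a belief as evidence comes in; a minimal update with invented numbers:

```python
# Toy Bayesian update of the kind that replaced rule-based "reasoning".
# All probabilities here are invented for illustration.
prior = 0.01                  # P(hypothesis)
p_evidence_given_h = 0.9      # P(evidence | hypothesis)
p_evidence_given_not_h = 0.1  # P(evidence | not hypothesis)

# Total probability of seeing the evidence at all
evidence_prob = prior * p_evidence_given_h + (1 - prior) * p_evidence_given_not_h

# Bayes' rule: P(hypothesis | evidence)
posterior = prior * p_evidence_given_h / evidence_prob
print(round(posterior, 3))  # 0.083
```

The practical win over expert systems is exactly the point above: this works with partial, noisy information instead of requiring a complete rule base.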

If you ask me, intelligence is more about human ego than any real hard definitions. In many peoples minds computers can never be intelligent because it would bring our self opinions down a notch or two.

This is the typical rationalization that has come up in the AI world once it divided into strong AI and weak AI. Strong AI is hard, so some researchers started saying weak AI is good enough.

But there is a difference. We don't know everything about human intelligence, but we do know some things. Humans are creative; they come up with new and interesting things, and we can recognize when we've found interesting things. Our pattern recognition capability is unbeaten.

As humans, we do exactly what physics mandates we do. Unless you're purporting that the human brain uses some sort of hypercomputation, or that there's something special (i.e., outside of our current understanding of physics) about what neurons do, you're not being consistent.

That said, I understand where you're coming from; most AI research is in very narrow domains and has no intention or hope of solving the problem of achieving human-level intelligence (Watson is an example of narrow AI, as it clearly lacks a genuine understanding of the question or the English language). But the fact remains, that is how the term AI is used.

There's a growing separation between this "Narrow AI" and the kind of AI you seem to be hoping for, Artificial General Intelligence (AGI). There are some AGI projects out there, such as the open source opencog [opencog.org]. Since there's no hope of people stopping calling things like computer chess AI, I prefer to use the term AGI whenever referencing "real" AI.

That was exactly the argument of Roger Penrose, et al., with which I strongly disagree.

They went to great lengths to show that since quantum effects are deterministic, and since thought is ultimately dependent on quantum effects, that "free will" cannot exist.

But there is a fundamental problem with that argument: in actuality, many quantum effects are not deterministic, they are probabilistic. They are not predetermined. Further, it has been proven that the presence of an observer or measurement can a

Any computer with a hardware RNG device, such as one that keeps track of radioactive decay, can take advantage of non-deterministic quantum effects. Just as you claim the human brain is unpredictable, a computer can be made to be so.
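For example, on most systems you can already pull nondeterministic bits from the OS entropy pool, which mixes in physical noise sources (a stand-in here for a true radioactive-decay RNG; whether the entropy is "quantum" depends on the hardware):

```python
import secrets

def nondeterministic_coin_flips(n):
    # secrets draws from the OS entropy pool rather than a seeded PRNG,
    # so repeated runs are not reproducible
    return [secrets.randbits(1) for _ in range(n)]

flips = nondeterministic_coin_flips(10)
print(flips)
```

A program branching on those bits is, in the same sense as the brain argument above, unpredictable from its initial state.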

Furthermore, I hardly consider the influence of outside quantum effects to be a sufficiently satisfying explanation of "free will". To me "free will" means the ability to non-deterministically decide without outside influence (which quantum mechanics would qualify as in my boo

It's a common mistake that comes from a history of debating this question in a religious context, as opposed to a natural one.

"it has been proven that the presence of an observer or measurement can alter the probabilities."no, it hasn't. Measuring a quantum event causes collapse, but in no way effects the resuts; which seem to be purely random.

If we could make quantum events collapse in a predictable way, we could send information faster than light

I did not confuse lack of free will with predeterminacy. Penrose and his collaborator did. Not only did they mix the two, they insist that one necessarily dictates the other. I do not buy their argument.

No, it hasn't. Measuring a quantum event causes collapse, but in no way affects the results, which seem to be purely random.

Yes, it has. Among others, Heisenberg showed that it is not possible to measure some quantum states without altering those states (such as, for example, both the position and velocity of a photon). Depending on the method of measurement, the alteration need not be random at all. In fact it can be quite deli

"If we can make quantum events collapse in a predictable way, we could send information faster then light."

Yes, and it has been done. Look up the recent experiments, in which alteration of the spin of one half of an entangled pair instantly changed the spin on the other particle... which was a full meter away.

There may not be any practical use of this technique yet, but yes, it has been done.

To shorten your post: AIs will exist when a computer can write code for itself to run without interaction from a user. Given a problem, and having no prior knowledge of a solution, nor a way to arrive at the solution, an AI will be capable of creating a set of instructions to solve that problem.

To shorten your post: AIs will exist when a computer can write code for itself to run without interaction from a user. Given a problem, and having no prior knowledge of a solution, nor a way to arrive at the solution, an AI will be capable of creating a set of instructions to solve that problem.

That's a rather restrictive definition of intelligence, I think. How do humans solve problems we confront/are given? We certainly have prior knowledge of a solution, since we have a lifetime of experience which will no doubt include some things we think might be relevant, even if that's just how to move our arms and legs. We also have a definite way to arrive at a solution, which is to try some of those things and see what happens, then go from there. This will either make the problem unsolvable (eg. attemp

You really need to seriously think about your definition, either that, or notice a bit about how people operate in the world. (Or claim that "intelligent entities" is, and probably always will be, a null set.)

What exactly is so different between a brain and a computer? Brains might still have an edge in the parallel market right now, but don't expect that to last forever. Both are computers, and both should theoretically be capable of the same things. Who's to say that in the future we won't be emulating wetware on massive synthetic computation substrates?

The only possible reason for human intelligence to be some sort of un-reachable goal for computers is some sort of non-scientific concept of a "soul". If y

It's the quality of what the brain does, its ability to create inferences and free deductions (among other things), that distinguishes it. That's what I was saying: the human (or other evolved organic) brain does not just execute the same simple instructions over and over again. There is a fundamental difference in how they operate, and the attempts to simulate one using the other have so far fallen pretty flat. And that's understandable, because they just don't work the same way.

On a theoretical (read: computational theory) level, they are both the same. As Von Neumann stated: "Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin." Human brains, like computers, exist in the real world and have to obey the same set of rules. Barring the existence of the soul, they are both completely deterministic (with the exception of randomness from quantum mechanics, which both can take advantage of). Humans are not Turing Oracles; if anything

Changes in hardware, which do happen, can equally be described as changes in software. A Turing machine that is able to change its own hardware is no more powerful than one that cannot, since it is quite trivial to simulate a TM that modifies its own hardware on a TM that cannot.

Nobody is arguing that the human brain and a computer are the same styles of computer, just that they are both computers (and therefore not as different as many would like to think).

Strictly speaking, if the brain is a computer, then "what we are" would be the software. Our teaching, reprogramming, and going crazy are all, or could be all, software 'features'. Any strictly "hardware" features that might be present in the brain could be simulated in software just as easily.

Computers are stable because we program them to be, and there are certainly plenty of examples of software that is anything but stable:)

The real issue is that AI research has mainly discovered that intelligence is not well defined, and everything we thought was difficult (e.g., playing chess) is relatively easy, while everything we thought was easy (e.g., understanding human language) is horrifically difficult.

A human child can walk, see, interact with and understand its environment, and hold a conversation... none of these can be done even adequately by machines (walking is ahead of the rest so far), but a cheap chess computer can beat most

Well, essentially what has happened is that AI has become a term for any mechanical behavior that can adapt to different circumstances. It may be because of overuse of the term, but I've seen people apply the word "AI" to any kind of computerized decision-making that is clever enough that people can't immediately tell how it's done.

However, I agree that this isn't what I consider "AI". In my opinion, we haven't built real AI when we've designed a computer that can beat all human players in Jeopardy. W

It's bleaker than you describe. There will never be artificial consciousness. Computers and their software will get better and better at fooling us into believing they are intelligent or conscious, but computer science and the philosophy of mind have already proven that artificial consciousness is unattainable.

And that is my point in a nutshell: what we could really use is a precise definition of intelligence. But I am of the opinion that when one really thinks about it, what one means by "intelligence" is not what is being accomplished by these "AI" systems, even if the popular press calls them that.

Computer chess is merely at the point that if you haven't been on the cover of Chess Life, you're going to get trounced. Even if you have, you're going to lose more than you win. The current situation is that Deep Rybka 2010 [chesscentral.com] has an Elo rating around 3150. That's running on a 4-core AMD64 desktop machine. The all-time human record is 2851, which Garry Kasparov held in 1999-2000.
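For what it's worth, plugging those ratings into the standard Elo expectation formula (with the usual caveat that engine and human rating pools aren't really comparable):

```python
def expected_score(ra, rb):
    # Standard Elo expectation: E_a = 1 / (1 + 10^((Rb - Ra) / 400))
    return 1 / (1 + 10 ** ((rb - ra) / 400))

# Engine at 3150 vs. peak Kasparov at 2851
print(round(expected_score(3150, 2851), 2))  # 0.85
```

So even taking the numbers at face value, a 300-point gap means the engine scores roughly 85% against the best human ever.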

The Elo ratings of machines are not really directly comparable to those of humans, because machines play far more against each other than against humans. This is what the Wikipedia page has to say about the Elo ratings of machines:

These ratings, although calculated by using the Elo system (or similar rating methods), have no direct relation to FIDE Elo ratings or to other chess federation ratings of human players. Except for some man versus machine games which the SSDF had organized many years ago (which were far fro

It's impossible to fully enumerate all possible games of chess. The game-tree complexity is about 10^123, whereas the number of atoms in the universe is thought to be somewhere between 10^79 and 10^81. Thus, it's impossible to brute-force the game, since you can't store all the possible states.

If, however, we ignore this, then the answer to your question would be "it depends on how fast it could calculate the results." Some hypothetical computer with sufficient memory and a sufficiently fast processor would be unbeatable using a brute force algorithm by the definition of brute forcing. However, as already explained the "sufficient memory" part is pretty much impossible

It's impossible to fully enumerate all possible games of chess. The game-tree complexity is about 10^123, whereas the number of atoms in the universe is thought to be somewhere between 10^79 and 10^81. Thus, it's impossible to brute-force the game, since you can't store all the possible states.

While you're right that it's impossible, the reason you give is wrong: You wouldn't have to store all the possible states at once. After you've determined that you cannot win with a certain move, you don't need to store all thos

The real problem is time. Even if you could check one move per Planck time (the shortest meaningful time interval, ca. 5*10^-44 s), you'd still need about 5*10^79 seconds, or about 1.5*10^72 years. For comparison, the universe is about 1.4*10^10 years old.

If you could somehow convert all of the matter in the universe into a massively parallel computer running at that speed, with each CPU having a budget of ~40 million particles, then you'd have a computer that could play chess perfectly, providing each move in about 24 hours at first, and then probably speeding up a bit as the game progresses and there's less of the state space to check.
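Sanity-checking the arithmetic above (constants rounded):

```python
GAME_TREE = 10 ** 123     # Shannon's estimate of chess game-tree complexity
PLANCK_TIME = 5.4e-44     # seconds
SECONDS_PER_YEAR = 3.15e7

seconds = GAME_TREE * PLANCK_TIME
years = seconds / SECONDS_PER_YEAR
print(f"{seconds:.1e} s, {years:.1e} years")  # ~5.4e+79 s, ~1.7e+72 years
```

That's about 62 orders of magnitude longer than the age of the universe, so no amount of Moore's Law closes the gap for pure brute force.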

Because different problems require different ways of answering them. 'Watson' seems to be able to handle information-based questions by searching for certain pieces of information based on what it already knows (so no true 'problem solving' by the looks of it; still reading the article), but by the sounds of it, it can't handle 'visual' questions (like looking at a banana and an orange and telling the difference). That takes a different type of problem solving. 'Lucy' [radio-weblogs.com], on the other hand, was supposed to be able

If you had even read the first 3 paragraphs of the article, you would know that YES, it can answer questions posed like that. The whole point of Watson is that it has very advanced natural language processing, enough that it can even understand the puns and strange grammar of Jeopardy! questions.

Oy, you completely missed the point. The questions (whether computers can think or submarines can swim) are asking the wrong questions. They presuppose that the object (computer or submarine) can or will accomplish the task (thinking or swimming) in the same way that people or animals perform the task. They don't and they won't, so it's a bad question. The questions should be asked relative to the task (some goal) that the object can accomplish. Much of the discussion above is not 'can computers think'