destinyland writes "MIT's Media Lab is building 'a game that designs its own AI agents by observing the behavior of humans.' Their ultimate goal? 'Collective AI-driven agents that can interact and converse with humans without requiring programming or specialists to hand-craft behavior and dialogue.' With a similar project underway by a University of California professor, we may soon see radically different games that can 'react with human-like adaptability to whatever situation they're thrust into.'"

Yeah, things like this would happen, but also, how easy would it be for a small but dedicated group of pranksters to deliberately behave in odd, amusing or offensive ways to train the AIs?
AI09 says "I herd u leik tentacle pr0n"

This already happens. My wife plays Age of Empires II a lot, and the AI almost always resigns when it's clear she's going to win (even if it still has a fair amount of its forces intact).

exit() is a standard C-library function that ends the program, and control flow from the main program stops right there at the call. Hooks registered with atexit() will be run, and exit() takes care of cleanup like flushing stdio buffers before the OS reclaims the process's memory.

In nicer languages than C that have exceptions, you often also have try...finally blocks, where you can guarantee that your cleanup code will be called, even if you call some function which calls exit(). Essentially, it gives you nice atomic/transactional operations at every level.

In nicer languages than C that have exceptions, you often also have try...finally blocks, where you can guarantee that your cleanup code will be called, even if you call some function which calls exit().

No, I mentioned atexit, but that's only on program exit. try...finally can be used anywhere you want some work done atomically. The closest C has is setjmp and longjmp, but they're scary enough that I've always avoided them, even though I'm happy enough with assembly, whereas try...finally is very clear and usable.
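
For contrast with C, here is a minimal Python sketch of the guarantee being described (all names here are invented for illustration): in Python, sys.exit() raises SystemExit, so a finally block still gets to run its cleanup before the program dies.

```python
# Sketch: finally-based cleanup running even when a callee "exits".
# In Python, sys.exit() raises SystemExit rather than killing the
# process outright, so finally blocks on the way up still execute.
import sys

log = []

def callee():
    # Simulates a library call that decides to terminate the program.
    sys.exit(1)

def transactional_work():
    log.append("acquired")
    try:
        callee()
    finally:
        # Guaranteed to run before SystemExit propagates further.
        log.append("released")

try:
    transactional_work()
except SystemExit:
    pass

print(log)  # ['acquired', 'released']
```

In C, short of an atexit() handler, the "released" step would simply never happen once exit() is called.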

In the game you're an "animate" inspector: you judge robots disguised as humans to see if they pass the Turing test.
The whole game consists of you questioning and interacting with a character called Galatea, who may or may not be an animate.

So instead of taking advantage of the AI's known weaknesses to get ahead in the game, we will now have to "train" our digital opponents by using a consistent tactic until they evolve to counter it, then switching to an alternative tactic, and repeating the process at regular intervals.

Artificial intelligence came a step closer this weekend when an MIT computer game, which learnt from imitating humans on the Internet [today.com], came within five percent of passing the Turing test, which the computer passes if people cannot tell the difference between the computer and a human.

The human tester said he couldn't believe a computer could be so mind-numbingly stupid.

LOLBOT has since been released into the wild to post random abuse, hentai manga and titty shots to 4chan, after having been banned from YouTube for commenting in a perspicacious and on-topic manner.

LOLBOT was also preemptively banned from editing Wikipedia. "We don't consider this sort of thing a suitable use of the encyclopedia," sniffed administrator WikiFiddler451, who said it had nothing to do with his having been one of the human test subjects picked as a computer.

"This is a marvellous achievement, and shows great progress toward goals I've worked for all my life," said Professor Kevin Warwick of the University of Reading, confirming his status as a system failing the Turing test.

The idea of an AI that learns from the players sounds great when you're talking about a bot for Multiplayer Shooter 2010 developing tactics and strategies without explicit programming, or an NPC partner in a stealth game learning how not to bash its face into walls and then walk off a cliff into lava. Awesome, bring on the learned emergent behavior!

But dialogue? Oh lord no, please don't let the AIs learn how to "converse" from players. Because the last thing I need is to have AIs in games screaming "Shitcock!" or calling me a fag a thousand times in a row with computerized speed and efficiency.

I still think hand-tuned AI matters in games, since processing power is limited. The real problem is getting the AI to build models that let it understand what the opponent is doing. Right now, the most difficult AIs in genres like RTS get special cheats instead of using tactics, since "fair" AIs get whooped; game AIs usually have only reaction time, cheats, or outnumbering the player as their advantages.

I've been wondering about this. After all, the human brain is not much more than a glorified rules engine. We learn by imitation, and we improve through reasoning (calculation). Computers are obviously capable of the latter, but nobody's managed to get the former quite right.

This is because computers are very precise--or really, as precise as the floating point unit allows them to be. That is to say, they can perfectly duplicate information. This means that their observations are very precise. But they have

To be fair, that's a problem with their game design, not their bot-detection mechanism. Many times when I played the game I felt like I myself was a bot. They don't use the term "healbot" for nothing. If a bot can play your game really well (excepting aimbots), then your game probably isn't very fun.

<begin valley girl impression>Did anyone watch the Terminator TV show, I mean hello! Skynet started as like a chess game or something. OMG, are they like retarded or something? I for one don't want like a super smart computer thingy nuking me then sending its icky robots after me. Like eww!</valley girl impression>

I always thought it would be interesting to create a project like this with a chat engine. Take a major chat engine and add a "Submit to AI" option, where the AI parses the conversation between you and a friend, records questions and responses in an overlapping matrix of possibilities, and uses historical conversations of the same nature to calculate the probability of what the response should be. You should get impressive test results with a large enough set of data.
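
A toy sketch of that idea, with invented names: count how often each response has followed each prompt, then reply with the most frequent one. A real system would need text normalization, partial matching, and vastly more data.

```python
# Frequency-table chatbot: a crude version of the "overlapping matrix
# of possibilities" described above. Counts response frequencies per
# prompt and replies with the historically most probable response.
from collections import Counter, defaultdict

class ChatModel:
    def __init__(self):
        # prompt (lowercased) -> Counter of observed responses
        self.table = defaultdict(Counter)

    def observe(self, prompt, response):
        """Record one human prompt/response pair."""
        self.table[prompt.lower()][response] += 1

    def reply(self, prompt):
        """Return the most frequent historical response, or None."""
        counts = self.table.get(prompt.lower())
        if not counts:
            return None
        return counts.most_common(1)[0][0]

model = ChatModel()
model.observe("How are you?", "Fine, thanks.")
model.observe("How are you?", "Fine, thanks.")
model.observe("How are you?", "Terrible.")
print(model.reply("how are you?"))  # Fine, thanks.
```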

So this will be like a Wikipedia bot: it represents a modicum of intellect as learned from the Internet. It approaches the middle of the bell curve of intelligence, meaning it'll have an IQ of 100, which compared to any intelligent person is dumb as hell.
I wrote a bot that simulates internet users. It just yells "COCK!!!" at random intervals.

If the AI Agents are learning to mimic human behavior by observing how they play a game, then the game design clearly already exists. Therefore, what is described in the article is certainly not anything even remotely close to "games that design themselves."

What kind of lame joke is that? Having a lot of storage is now limited to the Microsoft crowd? Can Linux not handle 2TB? My computer at home has a 2TB RAID array. Is it necessary to work for Microsoft if you want to run a TB or more of storage? Most NAS devices are 1TB or more.

Hell, Seagate has a 1.5TB Barracuda drive for less than $150. So are you saying that you need to work for Microsoft in order to afford a $150 drive, or are you saying that only Windows is capable of using a drive that size? I'm confused where you think the humor is.

It was a joke about code bloat, of which Microsoft has been a leader for quite some time. But you are right that now I could say Mozilla, or many other places. And while size goes up, transfer speeds do not. That is why so many operating systems take so long to boot, and so many programs take so long to load. Your "space is cheap, use it all" thinking doesn't factor in the other costs, like speed, power use, and the fact that I may want to store other things too... Efficiency is a good thing.

To an IT professional (most of slashdot), $200 for this sort of technology is rather trivial, especially considering many of us have seen companies pay over a million dollars for the same sort of capacity a few years back.

If you earn $80k/year and you use the drive for 5 years, you're talking about spending 0.05% of your income on it. Trivial.

But it can't copy our illogical decisions, because our illogical decisions are just based on poor logic.

You can program a computer to make a mistake - but it's not the same.

What makes you think they would explicitly program in the rules of logic? Why couldn't the program be designed to find them out itself, through trial and error, just like a human does? In such a case, why couldn't the program develop poor logic?

Why? Because decades of AI research and countless "breakthroughs" have failed to deliver upon just that.

Oh crap, you're right. After "decades" of research trying to replicate the functionality of the most intricate and complex piece of machinery in the solar system, it's probably best if we just give up. After all, anything this hard couldn't be worthwhile.

Are there any examples of a living being which does not spend the majority of its life parroting or applying the behaviour of others?

I'd contend that watching and mimicking others is the most effective method of learning. In fact, it's the ability to take and apply this learned knowledge to other situations that separates the truly intelligent from the "average" in the world.

Because programming -IS- logic. If you tell the program to do something at random, it's not a very good AI. If you tell it to do the most strategically sound plan, it doesn't vary much at all.

You tell it to try to learn the rules, and make the best decision that it can.

Consider AI for chess. The best AI can beat any human because it can spend the processing power to look, say, 25 moves into the future. When the computer considers all possible moves, and for each one all possible replies, and so on for 25 turns, it can quantify which move it should make now to have the best chance at winning. When you download a chess game that lets you set the difficulty, the difficulty setting typically just limits how deep (or how long) that search is allowed to run.
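
The depth-as-difficulty idea can be sketched on a game much simpler than chess. This toy Nim variant (take 1-3 sticks; whoever takes the last stick wins) is invented for illustration; the point is only that a "harder" AI is the same search with a bigger depth.

```python
# Depth-limited minimax on a trivial Nim-like game.
# Score is from the maximizing player's point of view.

def minimax(sticks, depth, maximizing):
    if sticks == 0:
        # The previous mover took the last stick and won.
        return -1 if maximizing else 1
    if depth == 0:
        return 0  # Horizon reached: no information either way.
    moves = [minimax(sticks - t, depth - 1, not maximizing)
             for t in (1, 2, 3) if t <= sticks]
    return max(moves) if maximizing else min(moves)

def best_move(sticks, depth):
    # A "harder" AI simply searches deeper before choosing.
    return max((t for t in (1, 2, 3) if t <= sticks),
               key=lambda t: minimax(sticks - t, depth, False))

print(best_move(5, depth=4))  # 1 (leaves a losing position of 4 sticks)
```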

In fact, feeding bogus data to the AI is one of the realistic ways to limit, say, a racing game's agents - if they don't see the post in front of them because they aren't spending enough time per frame watching the road and are instead eyeballing their opponent, they're going to crash, just like any human. So you simulate that by using player proximity and the "erraticness" of the other opponents to model distraction and modulate the AI's awareness of dynamic obstacles and hazards.
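
That distraction model might look something like the following sketch; the weights and formula here are invented for illustration, not taken from any real racing engine.

```python
# Sketch: scale a racing agent's chance of noticing a hazard by how
# "distracting" the current situation is (rival proximity + erraticness).

def hazard_awareness(base_skill, rival_proximity, rival_erraticness):
    """Return probability in [0, 1] that the agent notices a hazard.

    base_skill:        0..1, how attentive this driver is in a vacuum
    rival_proximity:   0..1, where 1 = rival right alongside
    rival_erraticness: 0..1, where 1 = rival swerving wildly
    """
    distraction = 0.6 * rival_proximity + 0.4 * rival_erraticness
    return max(0.0, min(1.0, base_skill * (1.0 - distraction)))

# Alone on the track the agent almost always sees the post in front of it;
# dicing with an erratic rival, it often doesn't - and crashes like a human.
print(hazard_awareness(0.9, 0.0, 0.0))  # 0.9
print(hazard_awareness(0.9, 1.0, 1.0))  # ~0.0
```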

Yes and no. Back in the day when I was writing Quake bots, there were things you could do to always beat the AI. The AI can't pick out patterns that are luring it into a trap. We are a long, LONG way from having AI that can think about the situation and make a decision on its own...

"Player 4 has done this 4 times trying to lead me down that corridor, what the hell is he doing? I'm gonna sit and wait or try and circle around to see what is up."

That's not true. Look at the PROLOG language, or LISP. You don't need to program all possible decisions into an agent; you just need to give it the capacity to learn and to assign weights to the factors it thinks are important, so that it can quantify what the best decision is. With PROLOG specifically you can give an agent the ability to draw new conclusions from things it already knows (which it then adds to its list of things that it knows).
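
That Prolog-style loop of deriving new conclusions and adding them to the knowledge base can be sketched in a few lines of Python (the facts and rules below are invented examples):

```python
# Toy forward-chaining inference: repeatedly apply rules whose premises
# are all known, adding each new conclusion to the set of known facts.

facts = {"enemy_low_health", "enemy_retreating"}
rules = [
    # (set of premises, conclusion)
    ({"enemy_low_health", "enemy_retreating"}, "enemy_vulnerable"),
    ({"enemy_vulnerable"}, "should_attack"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # new conclusion joins the knowledge base
            changed = True

print(sorted(facts))
```

Note that "should_attack" is derived in two hops: the first rule fires, and its conclusion then satisfies the second rule's premise, just as described above.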

Definitely. The job of the AI designer is to come up with a set of default behaviors and reactions which make the AI appear to be doing so.

You may not be able to make an AI figure out intent, but you can train it to recognize erratic motion - players in a pure deathmatch game don't often stop or double back quickly without an obvious reason, so something like that could trigger the bot to go into "cautious mode" and fire, say, a grenade at the entrance of that corridor, then try to circle around. About 9
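
A crude version of that erratic-motion trigger might look like this; the thresholds and sample data are invented for illustration.

```python
# Flag a player as "erratic" when their heading reverses sharply
# between position samples - the double-back pattern described above.

def is_erratic(positions, reversal_threshold=2):
    """positions: sequence of (x, y) samples; count sharp direction flips."""
    reversals = 0
    prev_dx = None
    for (x0, _), (x1, _) in zip(positions, positions[1:]):
        dx = x1 - x0
        if prev_dx is not None and dx * prev_dx < 0:  # sign flip = double back
            reversals += 1
        prev_dx = dx
    return reversals >= reversal_threshold

steady  = [(i, 0) for i in range(6)]            # keeps moving in +x
darting = [(0, 0), (3, 0), (1, 0), (4, 0), (2, 0)]  # doubles back repeatedly
print(is_erratic(steady), is_erratic(darting))  # False True
```

A real bot would work on headings in 2D and time-weight the samples, but the "if this fires, switch to cautious mode" structure is the same.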

You might be surprised at how far AI has progressed. Some of the expert systems out there are remarkable. In the realm of games there are programs which can learn to play by playing thousands or even millions of rounds against themselves, learning each time which approaches work.

At the same time there are limitations, but rarely the limitations that people would expect: right now AIs cannot do strategy. They can do knowledge, they can do creativity (in a sense), and they can certainly do brute-force calculation.

But calculating all possible moves X turns into the future is not AI. Weighting each piece, rating certain situations as better than others, then giving the AI the option of adjusting those weights and finding and weighing new situations - that would be AI.

The AI should be able to record "a pawn is worth less than a rook by X," then play a game, sacrifice a pawn to a rook, see the outcome (win/lose), and adjust the worth accordingly. Of course this adjustment would have to go over all moves during the game.

KnightCap and ExChess were two such engines which did. They go even further and learn what a specific piece is worth on specific squares. Normally this is implemented as temporal-difference learning, which is exactly as you describe: try it, then update the weights.
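
A minimal sketch of that temporal-difference update (the numbers and features are invented for illustration; real engines like KnightCap use far richer evaluation functions):

```python
# Predict the game result from material weights, then nudge the weights
# toward the actual outcome - the "try it, then update weights" loop.
import math

weights = {"pawn": 1.0, "rook": 5.0}
ALPHA = 0.01  # learning rate

def predict(material_balance):
    """Squash the weighted material balance into a win probability."""
    score = sum(weights[p] * n for p, n in material_balance.items())
    return 1.0 / (1.0 + math.exp(-score))

def td_update(material_balance, outcome):
    """outcome: 1.0 for a win, 0.0 for a loss."""
    error = outcome - predict(material_balance)
    for piece, count in material_balance.items():
        weights[piece] += ALPHA * error * count

# We were a pawn up but a rook down, and lost: the rook's weight rises
# relative to the pawn's, since the rook deficit predicted the loss.
before = weights["rook"]
td_update({"pawn": 1, "rook": -1}, outcome=0.0)
print(weights["rook"] > before)  # True
```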

A group of neurons can be connected together to form a calculator. But, you can't multiply 20 digit numbers in your head. You don't have access to the "hardware" layer of your brain. Why would a sufficiently advanced AI be any different?

As such you generally tend to base it on the opponent you are playing. An AI cannot tell whether you are an aggressive or a passive person, or judge your strategic abilities or understanding of game mechanics, having never met you before playing the game.

I play online games against people I've never met before too. What magical ability do I have, that a computer could not?

The problem with AIs mimicking "human" actions has nothing to do with a failure of logic or the ability to display randomness.

It has to do with the fact that we've never really understood why we do certain things, because we hold the false notion that for the most part our actions are driven by logic, rather than the reality that our logic is driven by our actions. Thus, when something happens that doesn't fit our model, we ascribe it to randomness, despite the fact that it could probably be shown that the sam

We tend to behave illogically only in response to specific stimuli (fear, anger, hunger, lust) or when our system is under strain (fatigue, extreme hunger or thirst, neurological stress), nearly all of which can be simulated effectively enough for a game simulation.

So now we examine the character of our illogical behavior - we prioritize actions inappropriately, mistake one input for another of a similar kind, suffer from reduced reflexes or recognition

In a possibly-not-so-futuristic World at War where AI bots, in "Terminator" fashion, have essentially the same decision making processes as us, a world where we SHOULD be on a level playing field with our enemy, humans will always have the upper hand.

Not even that. Such an AI could observe only the outward symptoms of our (il)logic as expressed by our behavior. The best it could do would be to mimic behavior and develop logic built on that foundation. It would have no insight into the motivations, reasoning, and logic that lead us to that behavior ourselves.

Everything Peter does looks impressive while he stands by it. He's like a lesser-powered Steve Jobs. However, unlike Steve, Peter's glamour effect only lasts until the product is released. Should Milo ever actually hit the market, it will immediately revert to a simulation of an autistic Eliza with Tourette's syndrome and a tendency to stare at your crotch rather than your face.

Peter will then appear and explain that he knew Milo I was going to be this bad; that's why, for the past TWO decades, he's been working on Milo II, which is supposed to do everything he actually promised in Milo I and include a loveable dog character for you to interact with as well.