Some time ago, I was toying around with a new game, and it turned out that making the AI was a real challenge. I tried some simple approaches, but the results were pretty poor, and I feel there is a lot of potential in developing the AI. So I thought it would be even more interesting to focus only on the AI and make a contest out of it!

...As you can see, it is all very vague for now. On the other hand, I can't say more if the contest is to take place "correctly", unveiling the details to everyone at the same time. The only thing I can say now is that it has a lot of potential and can be implemented with a variety of techniques. ...And it's not that easy. ...And the AIs will compete against each other. ...It'll be fun, you'll see!

If there are enough people wanting to take part in such a contest, then I'll make a small framework to build the AIs on top of, and to visualize how they perform or to "quick-test" them. Then start dates and deadlines will be set and the details unveiled.

It's exactly what you meant: what I'll provide is a simple game skeleton where AI players compete against each other. However, I can't reveal the sort of game yet. No advanced libraries will be used, just plain standard Java2D, so there is no need to install anything and no compatibility issues. You'll then use some interfaces acting as a layer between the game and the AI.

They even have leagues set up and demo bots that you can test your code against. It seems silly to invent a new framework, and the Robocode stuff is already written in Java!

The hard part of any competition like this is the judging: how do you decide a winner? It's easy if your AI is tested on skills like fighting (the last bot standing wins), but how do you test other routines? If the test is pathfinding, then algorithms like A* have already been developed and are widely seen as being about as good as they can be for given situations.

sounds like fun, but robot wars have been around since the days of the Apple II. maybe we should shoot for something that looks at other AI application dynamics.

how about:

- each player gets to control n number of bots

- winner determined by the team that destroys the other team's base first.

- optional twist could be adding landscape factors where bots could get trapped, slowed, or damaged if not careful.

- another optional twist might be to allow the agents to modify the environment (blow up barriers, create barriers kinda thing.)

- bots should be resource bound. no bot should be able to fire an infinite amount of rounds, etc...

- scoring system would take into account the number of bots you had left when you won. it should also be a time-limited event, with a slight penalty for draws to prevent sandbagging strategies.

- implementation of the framework might best be done by providing a bot battleground server where your bot proxy is controlled by issuing bot commands. all queued-up bot commands that are received are played out, based on timestamp, within a given game-unit timeslice. controlling clients are free to send their bots commands as often as they like, but each command type should have an effective window of completion, and additional commands received while an uncompleted task is occupying a particular resource system of the bot are ignored. for example, if client A sends 100 fire commands in 5 ms and we deem firing to take 10 ms, then the bot will only fire once. but you could send a fire command and a move command 1 ms apart and have them both execute immediately, because they involve two separate resource systems of the bot (the locomotion system and the weapons system).
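That last point (per-resource-system command windows) could be sketched roughly as follows. Everything here is hypothetical: the command names, durations, and system names are invented for illustration, not part of any agreed framework.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: each bot tracks, per resource system, the game time
// at which that system becomes free again. Commands arriving while a system
// is busy are ignored, so 100 fire commands in 5 ms still fire only once,
// while a fire and a move command can run concurrently.
class BotCommandGate {
    // Assumed per-command durations in ms; values are illustrative only.
    private static final Map<String, Long> DURATION = Map.of(
            "fire", 10L,
            "move", 10L);
    // Which resource system each command occupies.
    private static final Map<String, String> SYSTEM = Map.of(
            "fire", "weapons",
            "move", "locomotion");

    // For each resource system, the timestamp (ms) when it is free again.
    private final Map<String, Long> busyUntil = new HashMap<>();

    /** Returns true if the command is accepted for execution at time nowMs. */
    boolean accept(String command, long nowMs) {
        String system = SYSTEM.get(command);
        if (system == null) return false;                 // unknown command
        long freeAt = busyUntil.getOrDefault(system, 0L);
        if (nowMs < freeAt) return false;                 // system still busy
        busyUntil.put(system, nowMs + DURATION.get(command));
        return true;
    }
}
```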

this has progressed from something manageable to something that would have to be done by a trillion developers. In other words, it doesn't need to be that complex.

I have created a nice little AISystem that sits on top of jME. All it does is fire events for each entity (or global events), set agent actions (for each entity), send messages to other entities (or globally too), and obtain entities from collisions. I think this is the ideal framework to get started on. It's manageable and very extendable. It's only meant to provide a consistent framework, not a complete solution to all AI problems. But I did write it, so I might have an advantage... Any other ideas?

OK, idea: has anyone played Worms? You know, the little worms that kill other worms using different weapons? Why not that, but with AI? It could be used just as a starter. No pathfinding is strictly necessary, just testing routines and effectiveness... last one surviving wins. What do you think?

I thought this was supposed to be a poll on having an AI contest. It seems now that you're having to write your own framework to run the contest before you've even started to design/build the agents that are to be compared.

That's why I suggested Robocode: the framework is already written, people can start writing agents straight away, and it's a level playing field, since no one here (I presume) wrote Robocode.

Is this a contest for developing AI agents, or to see who can write the best AI framework environment?

Keep it simple and more people will enter. If you want to do more than Robocode provides, extend Robocode; the source is available, so why start from scratch?

Andy.

PS: I'm in no way affiliated with Robocode. I just don't see the benefit in writing a new framework for this contest and then debugging that framework during the contest.

yeah, this discussion is going in every direction... To state it again: if enough people are motivated, I can work on and provide a simple framework, and the contest would be to write the best agent AIs... but I don't feel enough people will take part.

i think you are right. it seems we are all more excited about writing or proposing such a framework than providing the AI for it! maybe we should have an AI plug-in framework-writing contest? or maybe we need an AI agent contest with cash prizes? on a related frontier, there are some pokerbot writers who are doing pretty well these days.

The thing is, if I start coding an AI in a framework that I don't feel secure enough with, I will most definitely do badly, because I'm having to work around every limitation of that framework. But some people might feel comfortable with the framework, hence they get an advantage.

I suppose that will be true for whichever framework anyone sets out...

But I do think it's a good idea. I feel that this contest should be game related, rather than "let's see which snail gets the most leaves!"

-- You probably don't care since I know very very little about AI but...

If you make it game related, how's it going to be judged purely on the AI? Other factors will become prevalent.

Forcing your AIs to solve the same problem in the same environment is surely the only way to actually have a contest of AI programming.

I have a feeling plenty of people would take part if someone gave them a simple agreed upon framework in which to write their AI.

Whoever produces that framework is instantly ruled out of competing. If the framework is done correctly, it shouldn't actually matter what the rendering technology is; in fact, it might be a good task for the developer of the framework to produce renderers in other technologies while the contest goes on. So, if we were to take MisterX's idea...

He writes a framework to support the AI contest. The initial renderer is written in Java 2D. This means that every competitor can get up and running without needing to understand yet another rendering library. Then, while the competitors write their AI plugins, MX can continue to write any renderer that the competitors have mentioned (jME, just for dp).

The other thing that seems pretty important is to keep the framework closed source. Otherwise, unscrupulous types like me will just look at the source and work out a way to "fix" the contest, or simply to cheat.

There's a lot of work involved in preventing cheating. I suggest you look at MUD development and read up on all the tricks and tactics used to prevent malicious code from screwing things up, e.g. counting how long code has been inside a loop in order to decide whether it's hung, etc.
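One simple defence against hung bot code is to run each bot's turn in a worker thread with a time budget. This is only a sketch under that assumption (the method names are invented), and a timeout alone cannot stop all malicious code; a real contest server would need sandboxing as well.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical sketch: run a bot's think() call on a worker thread and
// abandon it if it exceeds a time budget, so one hung bot cannot stall
// the whole game loop.
class TimedBotRunner {
    private final ExecutorService pool = Executors.newSingleThreadExecutor();

    /** Runs the bot's turn; returns null if it overruns budgetMs or throws. */
    <T> T runWithBudget(Callable<T> botTurn, long budgetMs) {
        Future<T> future = pool.submit(botTurn);
        try {
            return future.get(budgetMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true);   // interrupt the hung bot's thread
            return null;
        } catch (Exception e) {
            return null;           // the bot threw: treat as a failed turn
        }
    }
}
```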

Also look at JRobots and ask them for advice / help.

NB: JRobots has had massive problems keeping more than a tiny handful of people interested enough to code regularly. Such competitions keep dying, then being resurrected, then dying again.

Part of the problem is that it's usually f****ing lots of work to get a bot that does reasonably well instead of being completely crapped on by the competition. IMHO that is largely due to the design of JRobots etc.: they have arenas that are too small, for a start.

With my genetic programming hat on, I'd say that they tend to be binary success: you win or you die. You need a game where success is much more analogue, i.e. you will tend to rack up a lot of points before dying; perhaps use a large arena and seed it with targets that can be shot at for points, or something like that: anything so that most entrants don't just get "you killed 0 enemies, for a score of 0".

PS: if it goes ahead, I'll do a GP plugin that plays it. A friend is just finishing his PhD and has released his distributed evolutionary framework (using RMI, but nobody's perfect) as open source, so I'm keen to put it into action.

a GP approach would be interesting (given that bots are programs in and of themselves), but I have yet to see any research that correlates the complexity of the problem space with the degree of structure necessary within the resultant GP, thus providing some hint about the number of generations and the computational feasibility of the approach.

With my genetic programming hat on, I'd say that they tend to be binary success: you win or you die. You need a game where success is much more analogue, i.e. you will tend to rack up a lot of points before dying; perhaps use a large arena and seed it with targets that can be shot at for points, or something like that: anything so that most entrants don't just get "you killed 0 enemies, for a score of 0".

good insight. every adaptive system requires good feedback in order to successfully evolve.

I've mentioned before that I think Robocode is the way forward for this contest, but it seems we're taking a different route, so I suggest the following.

Someone should come up with an interface: a set of commands that each bot can use to determine something about its environment.

getArenaWidth()
getArenaHeight()
getNumberOfBots()
etc.

along with all the useful methods like

getVectorToClosestBot()
moveLeft()
moveRight()
fire()
etc.

Then we, the contestants, can write our own arenas and bots; as long as the contract defined in the interface is maintained, I should be able to place my bot into anyone else's arena and have it work straight away. There's no possibility of cheating here, since I only know the published API for determining information about my environment.

If someone who has plenty of time on their hands wants to write a fancy arena, which renders the action in real-time using LWJGL then so be it.
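A minimal sketch of what that published API might look like, with a trivial bot implementing it. Every name here is illustrative, not an agreed spec:

```java
// Hypothetical published API: the bot sees only these interfaces, so any
// arena honouring the contract can host any bot.
interface Arena {
    int getArenaWidth();
    int getArenaHeight();
    int getNumberOfBots();
    double[] getVectorToClosestBot();  // e.g. {dx, dy} from this bot
}

interface BotControls {
    void moveLeft();
    void moveRight();
    void fire();
}

interface Bot {
    /** Called once per game tick; the bot reacts to its view of the world. */
    void takeTurn(Arena arena, BotControls controls);
}

// A trivial example bot: move toward the closest enemy, then fire.
class ChargerBot implements Bot {
    public void takeTurn(Arena arena, BotControls controls) {
        double[] v = arena.getVectorToClosestBot();
        if (v[0] < 0) controls.moveLeft(); else controls.moveRight();
        controls.fire();
    }
}
```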

So in summary:

1) Define the problem that we're trying to solve (last bot standing wins, or find all carrots in a field in the quickest time, etc.).
2) Start to define an interface that we can use to support this contest, so that each bot works on the same set of actions and information.
3) Give us dates for interface finalisation (we can all contribute ideas) and contest day!

I totally agree with that. If we have only certain methods that let us access the game itself in order to control our bots, then that makes the contest very fair as well. Then you know the contest is about making smart AI and not about "who can make the most complicated framework to outline and handle hordes of features".

So I was thinking some more about this contest; what we need is a few guidelines for submitting ideas on what aspects of AI the contest should be testing.

I came up with this list, that should be submitted with each idea.

1) A brief overview of the idea.
2) Describe the idea.
3) How can/does this apply to AI in games?
4) How do you rank/rate competing agents to determine the best?

Here is a simple example:

Idea 1
1) An agent that plays Minesweeper.
2) AI that determines the next best move in a random world where the next move could be death.
3) Choosing the next move in a world where death is possible and undesirable; moves that don't result in death are better.
4) Play 100 games of Minesweeper; the agent that wins the most games, or dies the least, is the winner.
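The ranking in point 4 can be sketched mechanically. In this sketch the Minesweeper game itself is stubbed out with a biased coin flip per agent, purely to show the tournament mechanics; the class and method names are invented for illustration.

```java
import java.util.Random;

// Hypothetical sketch: rank agents by wins over a fixed number of games.
// A real contest would replace the coin flip with an actual game round.
class Tournament {
    /** Plays `games` stubbed rounds and counts the wins. */
    static int countWins(double winProbability, int games, long seed) {
        Random rng = new Random(seed);   // fixed seed: reproducible ranking
        int wins = 0;
        for (int i = 0; i < games; i++) {
            if (rng.nextDouble() < winProbability) wins++;
        }
        return wins;
    }
}
```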

First, I would like to mention the APT tool, which comes with Java 1.5. It allows for preprocessing of source code marked with annotations. Instead of some custom security manager at runtime, I would think this could allow code violations to be checked at compile time.

I am curious whether you think there should be a cost for actual instructions. I'm new to this, but it seems fair that each agent should get equal CPU time slices; I'm just not sure how this is done accurately. Consider, though, that each function the game arena calls on the agent has a predetermined cost or energy usage, determined at compile time using APT. This way, a fine-tuning database could be constructed with different cost values for different library calls: Math trig calls might cost 4 points, + - / * might be worth 1 point each, any parenthesis might be a point, and a recursive call might be 10 points.
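A rough sketch of that cost idea: methods the agent may call are tagged with an assumed point cost, and the arena deducts that cost from a per-turn energy budget. The @Cost annotation, the point values, and the class names are all invented for illustration; a real contest would derive the costs from a tuning database, e.g. via an APT processor at compile time.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Hypothetical cost annotation: energy points charged per call.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Cost {
    int value();
}

// Example operations an agent might be allowed to call.
class AgentOps {
    @Cost(4) double trig(double x) { return Math.sin(x); }
    @Cost(1) double add(double a, double b) { return a + b; }

    // Convenience lookup so callers need not handle the checked exception.
    static Method op(String name, Class<?>... params) {
        try { return AgentOps.class.getDeclaredMethod(name, params); }
        catch (NoSuchMethodException e) { throw new RuntimeException(e); }
    }
}

class EnergyMeter {
    private int energy;
    EnergyMeter(int budget) { energy = budget; }

    /** Deducts the method's @Cost; returns false if the agent is out of energy. */
    boolean charge(Method m) {
        Cost cost = m.getAnnotation(Cost.class);
        int points = (cost == null) ? 1 : cost.value();
        if (energy < points) return false;
        energy -= points;
        return true;
    }

    int energyLeft() { return energy; }
}
```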
