
I have no idea how many other developers use genetic programming to build AI into their games, but there will probably be at least someone who can nod along to my little rant and feel my pain.

Now I know that right off the bat, people would want to ask a bunch of questions about the game this AI is meant for, but that isn't really the point of this post. Just throwing that out there since I know it will/would happen otherwise.

Usually I have great success with genetic programming, but for the past 4 days I have struggled greatly with what I am working on. Terminology used in genetic programming seems to be all over the place, with it being rare to run into 2 people who use the exact same phrases to describe the same things, so bear with me if I describe things in ways you aren't used to, hehe. The first brain framework was 512 nodes (degrees of freedom, attributes, neurons, or pick your preferred term). This was a little on the high end for me, since I tend to find ways to boil things down to a smaller number. In my experience it's a bit easier to work with the smaller number even if in theory the higher number can lead to more complex AI.

Well it ran for about 30,000 generations, with many stops to make tweaks to help things progress, but I got nowhere. I couldn't even get the AIs to master the basics of game play to keep themselves alive, without fighting even being a factor. A generic way to describe this without going into specifics of the game would be that these AIs never figured out how to even grow enough food to keep their people alive, so learning to advance and fight never even started to happen.

Grudgingly I decided my starting framework must be flawed, so I highlighted hundreds of lines of code and pressed delete. That always stings. I thought and thought, but couldn't come up with a way to boil down the framework into a smaller brain design, so I threw that idea out the window. The next framework wound up being over 133,000 nodes per AI, using a pretty different approach. I'm happy to say that this worked and got the AIs growing food, collecting supplies, and generally keeping themselves alive after only a few thousand generations. Success, or so it would seem.

After tens of thousands more generations, I realized that no AI who attacked other AIs ever survived long. They were all going for a defensive approach and just seeing who could last the longest (in game ticks) until lack of resources killed off the slowest. This was a big problem because combat is supposed to be a huge part of the game, and I need the AI to rely on it heavily.

Now years back I mentioned somewhere that I had grown 2 AIs who eventually decided to work cooperatively, and messed up a game I was making at the time. I held, and still hold, that as one of the neatest and most surprising experiences I've had when programming, because it defied all odds and I still struggle to wrap my mind around how it happened. That was a very different situation though, so while it may seem similar to people who remember me telling that story, this time is not remarkable.

I lowered the costs to attack and raised the destructive impact of attacking to extremes, but still no luck. AIs would develop who fiercely attacked the others, but after a few generations they'd always go back to being peaceful. I let the system run non-stop until it had grown through more than half a million generations, but still no luck. Clearly I had done something wrong again.

My next realization came when I thought about how I have the test set up. There are 3 AIs in the test together, and at the end the one who dies first is deleted, while the one who survived the longest is copied + mutated to serve as the new 3rd AI in the next round. In the past I've rarely done tests with such low numbers of AI players, but then again I normally don't have such high numbers of nodes in each AI. Running just 3 AIs in a test with this many nodes is just as processor intensive as running dozens of AIs in a different game where each has a low number of nodes. I figured this 3 player approach was my problem.

As it stood, eventually an AI would be mutated to attack more; for the sake of perspective let's say that's you. So you're about to play for the first time, and you are facing 2 opponents who are nearly identical. One will be a past winner, and the other will be his slightly mutated clone from 2 games back. Even if you happen to be superior to them with your newly mutated strategy, it is essentially 2 versus 1. You may use attacks to beat the crap out of one of them, but the other is still going to perform very well by just saving up supplies and lasting as long as possible. Most of the time all you are doing is fighting to be in 2nd place. The one who wins is cloned to be the new 3rd player, so each new round you're still battling 2 nearly identical opponents. Eventually your luck will run out and you'll be in 3rd place, thereby being bred out of the gene pool. It's only a theory, but I think this is why even my decent fighters keep getting bred out after several generations.

I'm rerunning everything with 8 AI players to hopefully get better results. With only the very worst player being copied over, this should give AIs with new strategies at least a handful of rounds to try to establish themselves before they find themselves at the bottom and are deleted. I really would prefer to run with larger numbers, but the tests cycle pretty slowly. It would also be nice to have the top 2 players breed instead of just the top 1, but with only 8 players I worry that would do more harm than good. It sucks enough needing to wait hours to see if a small change is having any effect, so I wouldn't like having to wait 3 days for the same thing. ROFL!
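For anyone who hasn't played with this style of evolution, the round structure I'm describing boils down to a very small loop. This is only a toy sketch (a flat list of numbers standing in for a brain, and made-up names like `play_round` and `mutate`), not the real system:

```python
import random

def mutate(genome, rate=0.02, scale=0.1):
    # Clone the parent's genome, nudging a small fraction of nodes.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def evolve(play_round, genomes, generations):
    # play_round(genomes) -> one survival time per AI (higher is better).
    for _ in range(generations):
        scores = play_round(genomes)
        ranked = sorted(range(len(genomes)), key=lambda i: scores[i])
        worst, best = ranked[0], ranked[-1]
        # The first AI to die is deleted; a mutated clone of the longest
        # survivor takes over its slot for the next round.
        genomes[worst] = mutate(genomes[best])
    return genomes
```

With only 3 genomes in that list you get exactly the 2-versus-1 trap described above; bumping it to 8 only changes the length of the list, not the loop.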

So yeah, this has all been frustrating the crap out of me lately, and it feels good to vent about it a bit. Welcome back AG.net forum, we missed having you around.

Some years ago some friends and I made an AI based on cellular automata for the game Ant Wars in the AI Challenge. We hit the exact same problem once we first got past the brain-dead state:

We introduced a decision tree and trained the different parts separately (e.g. exploring, collecting food, defending, and attacking), with an overall strategy AI to command which entities should apply which approach.
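In sketch form the split looked roughly like this (every name here is invented for illustration, and the real thing trained each role policy separately rather than hard-coding it):

```python
# Each role is a separately trained policy; here they're stubbed out
# as functions that return a canned order.
def explore(entity, state):  return "move_random"
def collect(entity, state):  return "move_to_food"
def defend(entity, state):   return "guard_hill"
def attack(entity, state):   return "move_to_enemy"

ROLES = {"explore": explore, "collect": collect,
         "defend": defend, "attack": attack}

def strategy(entity, state):
    # A hand-rolled decision tree standing in for the overall strategy AI
    # that decides which entities apply which approach.
    if state["enemies_near_hill"]:
        return "defend"
    if state["food_known"]:
        return "collect"
    if state["enemy_weak"]:
        return "attack"
    return "explore"

def act(entities, state):
    # The strategy layer assigns a role to each entity, then the role
    # policy produces the actual order.
    return {e: ROLES[strategy(e, state)](e, state) for e in entities}
```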

Now, it never really became very good, but it could do some fun battles at times - I'll try to dig it up from the somewhere box.

Do you think they know what happens to the one that gets deleted, and they both mutually come to the same decision not to fight, so that hopefully none of them will be deleted? I know that's freaky, and kind of deep, but that's me sometimes.

Ironcross, they are much too simple to be doing this on purpose. I guess "on purpose" doesn't really mean anything here, lol, but no they wouldn't be able to know that losing means being deleted to take that into consideration.

Chirpa, that sounds like it was a fun project! I've also taken that approach on other projects, where the AI is grown in pieces that are each intended for a specific role, then combined as needed in the final game. It's a great approach, but one I haven't been able to apply in this project just due to how it plays.

After my last post I did decide to switch things over so that the first place copies + mutates over the last place and the second to last place (of the 8 players). The second place copies + mutates over the 3rd from last place. And the remaining 3 middle-of-the-road players just carry over to the next round. After 20,000 generations I saw essentially no progress, so as I initially suspected, this was more harmful than helpful. I'm playing around with a few other arrangements. Basically I need to prevent the entire thing from being dragged down into a valley, but it's hard to do with so few able to breed at the end of each round.
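Written out as a replacement step, the arrangement I just described is tiny (this is only a sketch; `mutate` stands for whatever mutation operator the system already uses):

```python
def next_round(ranked, mutate):
    # ranked: the 8 genomes sorted from 1st place (index 0) to 8th (index 7).
    out = list(ranked)
    # 1st place copies + mutates over the last two places,
    # 2nd place copies + mutates over the 3rd from last,
    # and the three middle-of-the-road players carry over untouched.
    out[7] = mutate(ranked[0])
    out[6] = mutate(ranked[0])
    out[5] = mutate(ranked[1])
    return out
```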

This keeps making me think of the real world. Unless you want to add ideology as a war-starter, maybe diversifying scarcity / sources thereof would help, if you haven't thought of this already. It could be something ranging from each AI getting one resource in abundance that the others lack, to random disasters big enough to force one to attack the other (would they be more likely to attack if one is disproportionately affected, or if they're both affected?).

Using Castaways as an example, maybe there's a trade imbalance, such that one AI has lots of forests, another has lots of stone, and the other has lots of water and open land. You'd have a good trade triangle at work, until something ruins half the fish-and-farms AI's food supply. Since the other two wouldn't have invested so much in their own food, they'd either have to adapt with the stores they have, or raid the famine-stricken FFAI. If they build up their own food production, they don't need to trade anymore, and the FFAI loses access to building materials. The forest AI still has more sources of food than the stone AI, who are now buying everything they need from the foresters with rocks. If the stone AI just takes the weakened food AI's territory, it's all win for them, but the foresters get fewer bricks. Then a forest fire and overlogging wreck the forests, so now the price of wood shoots up, and the stone AI decides it's cheaper to raid than to buy.

If the AIs can produce enough resources to support each other in crisis, they'll get around this problem. So constrain how successful they can be. If ever they get too close to equality, drop an asteroid on one, or have one strike oil / gold, or have a plague that can spread more easily the more trade there is.

Hopefully there was something novel in there. At worst, you can have the simulation be run by a god who unequally intervenes when it detects too much cooperation.
Like that guy who stands on the rooftop and waits for a group to walk by, then drops money in front of them to watch them fight over it.

You may try introducing a tiny bit more randomness in the mutation step; it will force the new generations to try out something different than what strict selection/crossover pushes them toward (less deterministic).
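One cheap way to do that is to give the mutation operator a second, rarer mode that replaces a node outright instead of just nudging it. A sketch of the idea (the rates and ranges here are arbitrary, not tuned for anything):

```python
import random

def mutate(genome, nudge_rate=0.05, scale=0.1, reset_rate=0.005, span=1.0):
    # Mostly small nudges, but occasionally a node is replaced with a
    # fresh random value so a lineage can jump out of a local optimum.
    out = []
    for g in genome:
        r = random.random()
        if r < reset_rate:
            out.append(random.uniform(-span, span))   # rare hard reset
        elif r < reset_rate + nudge_rate:
            out.append(g + random.gauss(0, scale))    # common small nudge
        else:
            out.append(g)                             # usually untouched
    return out
```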

I've got the number of players up to 12, even though that's making the cycles count up relatively slowly. It's going to be a lot harder to see long-term effects at this pace, since it may now take an hour of processing time to see what I could have seen in 10 minutes before.

I totally agree about limiting resources, and the way this game is set up everyone has the same potential. Attacking an opponent lowers their potential income, sort of like everyone being on identical islands but you can sneak over and set fire to their forests. You're trying to damage them as much as possible so they starve and die out before you do.

Once I got things up to 12 players, I did a side-by-side comparison and noticed that I'm seeing a lot less progress-per-generation than I was even with 3. Some of that is to be expected, but this seems more like I've slammed on the brakes. I believe that my changes to attacking damage (when I massively increased their effects yesterday) are not working with the larger number of AI players.

I slowed down the simulation so I could watch a few games, and indeed the attacks look less like "I burn down your forest and you burn down my mineral mine" and more like the outbreak of World War 3. Of the 12 players, as soon as the first handful reach attack capability they are nuking everyone. With maybe 9 of the 12 able to attack at this stage, the damage is so great that everyone is left crippled and that's just where they die. I've rolled back the damages to how they were in the game's original design, and I'll see if this helps.

After running for a few more hours I'm still not having much luck. Getting them to fight is fine now, but what I get is a pattern that spirals into the dirt. If I start from scratch it's a slow crawl up to the point where they can keep themselves alive, and it turns into an endurance race. Eventually they're all so good they can survive forever (until the cycle times out), and things stay static for a while. Every now and then someone mutates and decides to attack, but he is almost always worse at the normal survival tactics and is outperformed back into extinction.

When enough time passes, I end up with someone who can still keep pace with the peace-loving hippies but can also throw out a few attacks, and this is when things make a very sudden turn. It becomes an arms race, leading to all of the AIs (I'm back to running this with 12, by the way) finding the fastest way to send out a volley of attacks. Every player then limps out of the aftermath, and the winner is just the one who can take one extra step further than his enemies before falling into a bloody heap. I'm being dramatic with it, but they basically nuke each other until no one can do anything.

This continues until the skills to actually survive long-term are bred out, and I end up with a group of AIs whose only skill is to rush out a quick set of attacks and then limp along the furthest. Any who mutate to send fewer attacks still get hit with the same apocalypse, so they don't really rise above the rest. This stage doesn't last as long as the first stage.

The final step ends up with all of the AIs being equally ill equipped to survive or attack. I've yet to catch this as it transitions so I can only try to make guesses about how it comes about. I get a group that resembles the early generations, where they basically just die off from "nature" early on and get nothing accomplished. I've left them at this stage for ages, assuming they'd repeat the cycle and evolve into master survivalists, but they don't seem to. I honestly can't figure out why this stage seems to stop and last forever. Forever of course is only a guess, but I have left them for as many generations as it usually takes to get survivalists, but there is no change.

I've never had this much trouble growing usable AIs before. I'm now like 95% sure it's either a framework issue with how the AI brains are structured, or a fundamental flaw in the game's design. It might simply be a mistake in the game itself, where no one would ever really use a combination of both attacking and defending as they played. Some games are unstable and lopsided like that, so it's possible this one is too. I think changing the overall game's design will be plan Z though, since that's not far from saying the game is scrapped and it's time to start fresh on a new game.

I wonder if it's the environment. I mean, if we all had everything equal on our sides, like as many resources as we wanted, maybe humans would do the same and nuke the crap out of one another. I'm guessing these AIs have no concept of politics and stuff, so do they have to contend with nature? Like natural disasters that fall hard on one side, so the other, seeing that, might capitalize on their downfall and launch attacks to finish them off.

Maybe those AIs, after so many generations, become aware that they're just test subjects? And they decide to be peaceful and civilised about this. A freaky possibility!

Deep in our hearts is a capacity for infinite good or infinite evil. What path we choose is up to us. In the end, we all have to meet the same fate: the oblivion of nonexistence, an event horizon from where not even thought can emerge...

Actually I had started out with a more utility based approach, but switched to genetic programming because I figured it would save me a lot of time. Haha, boy was I wrong! After wasting a lot of time fighting with that, I've rewritten a lot and am now back to more of a utility based idea. I'm still trying to grow some parts of the AI (glutton for punishment I suppose), but so far this is working out much better.

Perhaps I'm just losing my touch or something, lol. I've been growing AIs genetically for probably more than 10 years, and I've never fallen so flat on my face before. It's like suddenly tripping, but there was no stone, hole, or ledge to explain why I fell. I'm left confused, bruised, and embarrassed. ROFL!

The following is a recreation of events. The name of the developer has been changed for his protection...

Arpone has a brief moment when the AI seems to be correctly growing, but after running for several hours it is clear that this is a false alarm. The problems are still not completely solved. Some results show up, but they are sluggish AIs, almost like growing a deadly race of Orcish warriors who happen to move very slowly because they have blocks of cement instead of feet.

Another day of coding and recoding passes, and the AIs grown get progressively worse as they are pushed to perform better. While going over the code for the hundredth time, Arpone notices something...

"Hmm", he says to himself. "When the AI is first being grown, starting resources are filled in so it has something to work with. It goes through a series of specific tests to make sure the AI at least makes logical sense. For example, it can't want to upgrade buildings it doesn't have yet. Once it passes the logic tests, everything is reset so it can have it's performance tested."

Arpone nods along as he goes over each line of code, and everything still seems in order.

"So after having it's performance tested on a fresh new map, it is given a score based on how many of the different resources it saved up, how many times it conducted research, and of course how many times it sent out attacks on test-dummy enemies."

Arpone frowns, because at this stage basically every AI that is grown winds up with a terrible score that doesn't make the cut. The few that were grown the day before only passed because Arpone lowered the passing grade to include any AI that merely broke even with a score of zero! Now that the bar is raised, even a tiny bit, millions of AIs are grown but not one passes the test! "How is this possible?!"

"Oh wait!" Arpone exclaims. "After passing the logic tests the AI is started over on a new map so it can have it's performance tested, but it's never given starting resources." As it turns out, 3 lines of code were not copied and pasted when everything else was, so every single AI had to start over on a new world using only the unused resources from the previous set of tests. In most cases this would be absolutely No starting resources, or sometimes just enough to build a few things and then starve quickly afterwards.

So Aprone, er I mean Arpone, adds in those 3 missing lines of code and suddenly AIs are growing and flying out at an impressive rate. In the time it's taken to type this, 32 new AIs have generated that can pass the performance test. It's looking like Arpone will not only get what he wanted to grow for the past frik'in week, but he'll also get to raise the difficulty of the performance test and still get results.

With such a stupid mistake wasting SOO much time, I had to protect the real developer from shame and ridicule. ROFL!
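For anyone who wants the moral of the story in code form, the bug boils down to a reset that only half-resets between the logic tests and the performance test. A made-up miniature version (none of these names are from the real project):

```python
STARTING_RESOURCES = {"food": 100, "wood": 50, "gold": 25}

class AI:
    # Minimal stand-in for the real AI object.
    pass

def fresh_map():
    return [[0] * 8 for _ in range(8)]  # placeholder empty world

def start_performance_test(ai):
    ai.map = fresh_map()
    ai.score = 0
    # The three lines that went missing: without this, every AI began the
    # performance test with only the scraps left over from the logic tests.
    ai.resources = dict(STARTING_RESOURCES)
```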

@Aprone I've shied away from them precisely because it doesn't give you much to work with if it fails. To me, they're just black boxes, so you have to hope that whatever changes you make fixes the issue.

Besides this experience where it hasn't worked out as well initially, how did they fare in your previous projects?

In the past they have been invaluable. As you said, they are basically black boxes, and at times that can be a drawback.

I suppose if the frustrating experience I've had this time around had also been happening to me over the years, I probably wouldn't be turning to genetic programming very much. This time has been an absolute nightmare at every turn, and it has wasted so much time it's ridiculous.

In the past I've used genetic programming to grow very competent "brains" for games that would have been super challenging, or flat-out impossible, for me to build by hand. Sometimes there are just too many things to try to factor in to plan out a "good" strategy to win.

In the past I've used genetic programming to adjust the rules for games to make sure everything was balanced. It's almost a reversed approach to how it's normally used. Instead of building a complex set of rules (a game) first, and then growing an AI that can use those rules to win, I started with the strategy and grew the game rules around it. Actually now that I try to put this into words, I think I should give at least another few paragraphs to explain it. This might be helpful to someone who hasn't thought to do this yet.

One time I did this jumps to mind: a game where you built buildings and collected resources. Sort of a generic explanation that fits most games, it seems, haha. So I listed the generic things I'd want to do in such a game, based on how I imagined it would look to someone playing my finished game. As a side tip, when building a game it's usually helpful to imagine yourself playing the finished game, and slowly fill in all of the missing pieces that your brain can gloss over in imagination-land. It helps give you a defined goal to aim for.

So anyway, I ended up with a list of actions. I send 2 of my workers to the forest and 2 workers to the gold mine. I start another worker being trained. I start construction on a farm. Workers return with handfuls of gold, then head back. A new worker is produced. I start another worker being trained. Workers return with lumber, then head back. You get the idea; it was more like the generic instructions you might give someone if you were helping them learn how to play a game. I didn't care about how much things cost, I only tried to jot down when various events would happen as I played (in the order they occurred).

So the growing part came in after I had this generic, but rather long list of how I imagine myself playing. The basic framework of the game leaves most things as variables to be tweaked, and it grows sets of game rules trying to get the closest match to how I hypothetically played. Eventually it pops out some that would work, and I can further tweak my choices or choose one to be happy with. The grown game would have worked out a price for training a new worker, price for building a farm, my starting resources, time it takes for workers to gather gold, to cut trees, how much those resources go up each time a worker carries some back, and so on and so forth. Using the short example I gave would result in tens of thousands of combinations that would work, but as that list gets sufficiently long there would only be a small range of prices that would work for building a farm (for example), that would still let a player stick to the build order I supplied. The more complete the list, the more it is able to narrow down real numbers. If it was able to ever find a perfect match (eh, not always going to happen), then I could sit down to play the finished game after and follow exactly those same steps and I'd always have just enough resources to pull it off in that order.

The long, tedious process of figuring out all of those prices, times, and values is now grown in the same way we grow an AI when it would be a pain to figure out all of its variables. I hope this makes some sense.
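A toy version of the whole idea, in case the paragraphs above are too abstract: the play trace is a fixed list of actions, each candidate rule-set is just a dict of prices, and fitness is how far through the trace an imagined player can get before running out of resources. Everything here (the trace, the income rule, the numbers) is invented for the example:

```python
import random

TRACE = ["gather", "farm", "gather", "worker", "gather", "farm", "worker"]
INCOME = 10  # gold gained per "gather" step in this toy ruleset

def replay(prices, trace, start_gold):
    # Fitness: how many steps of the trace stay affordable under these rules.
    gold, steps = start_gold, 0
    for action in trace:
        if action == "gather":
            gold += INCOME
        elif gold >= prices[action]:
            gold -= prices[action]
        else:
            break  # the imagined player couldn't afford this step
        steps += 1
    return steps

def grow_rules(trace, generations=300, pop=20):
    rules = [{"farm": random.randint(1, 50), "worker": random.randint(1, 50)}
             for _ in range(pop)]
    for _ in range(generations):
        rules.sort(key=lambda r: replay(r, trace, start_gold=5), reverse=True)
        half = pop // 2
        # Keep the top half, refill with mutated copies of the winners.
        rules[half:] = [{k: max(1, v + random.randint(-3, 3))
                         for k, v in r.items()} for r in rules[:half]]
    return rules[0]
```

The longer the trace gets, the more tightly it pins the prices down, which is exactly the narrowing effect described above.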

Before I went off on that rant, I had a 3rd example of how I've used genetic programming and now it's completely slipped my mind. The first 2 ways are by far the most common ways I use this technique, but I know there was at least a 3rd approach that I thought was worth mentioning when I started this post. Arg, it's going to drive me nuts, haha.

Well now that I think about it, I've used genetic programming pretty extensively for OCR and other vision systems. In my youth I used to make a lot of bots for mainstream games, and I was a huge fan of using vision systems to get around 99% of the automated methods for identifying my bot as a non-human player. A vision-based bot would take screenshots and literally "see" the game similar to how a sighted person does, and it would understand the game's state and play based on that info alone. There were many, many times when genetic programming helped me work out the best patterns in the pixel data. I didn't really think to add that category in here at first, since it's not going to help much in this particular community.

I still can't remember the 3rd example I was going to give, so I guess I'll give up now. If it comes to me later I'll post it.

"On two occasions I have been asked [by members of Parliament!]: 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out ?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question." — Charles Babbage.

Oh, now I remember what that lesser-used third way was. It all depends on your definition of genetic programming, but at times I think best-fitting math equations to data would fall under genetic programming. What I mean by that is sort of what your graphing calculator does when you give it a list of points and it tries to plot those to a mathematical curve.

For complicated lists of data you've collected (like data-mined from websites or whatever), I've used genetic programming approaches to turn that data into easy to manage equations. You never seem to get data that truly fits with a nice math equation, so equations are grown that compete by scoring them on how close they come to all of the available data. I'm sure some people would not want to classify this as genetic programming, but I personally think it fits.
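A minimal sketch of what I mean, fitting a quadratic to data points by evolving its coefficients (nothing clever, and overkill for a curve you could solve directly, but the same loop works for any scoring function you can write down):

```python
import random

def fitness(coeffs, points):
    # Lower is better: squared error of y = a + b*x + c*x^2 over the data.
    a, b, c = coeffs
    return sum((a + b * x + c * x * x - y) ** 2 for x, y in points)

def fit_curve(points, generations=500, pop=30):
    cs = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(pop)]
    for _ in range(generations):
        cs.sort(key=lambda c: fitness(c, points))
        half = pop // 2
        # Keep the closest fits, refill with mutated copies of them.
        cs[half:] = [[g + random.gauss(0, 0.2) for g in c]
                     for c in cs[:half]]
    return cs[0]
```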

I'd love to hear about ways others have used genetic programming. It's one of those fields where someone can say how they used it and you get that "ah ha" moment! Something you hadn't thought of before, and suddenly you've got a whole new thing you can try down the road.

For those who want to learn more about AIs for gaming in particular, check out Funge, John, and Ian Millington. Artificial Intelligence for Games. 2nd ed., CRC Press, 2016. Safari Books Online, safaribooksonline.com/library/view/artificial-intelligence-for/9780123747310. Accessed 18 Sept. 2018.

From the preface:

In this second edition of the book John joins Ian as a co-author. We have both had long careers in the world of game AI, but two memories that stand out from Ian’s career provide the philosophical underpinnings for the book.

The first memory takes place in a dingy computer lab on the top floor of the computer science building at Birmingham University in the UK. Ian was halfway through the first year of his Artificial Intelligence degree, and he had only been in the department for a couple of weeks after transferring from a Mathematics major. Catching up on a semester of work was, unexpectedly, great fun, and a great bunch of fellow students was eager to help him learn about Expert Systems, Natural Language Processing, Philosophy of Mind, and the Prolog programming language.

One of his fellow students had written a simple text-based adventure game in Prolog. Ian was not new to game programming—he was part of the 8-bit bedroom coding scene through his teenage years, and by this time had written more than ten games himself. But this simple game completely captivated him. It was the first time he’d seen a finite state machine (FSM) in action. There was an Ogre, who could be asleep, dozing, distracted, or angry. And you could control his emotions through hiding, playing a flute, or stealing his dinner.

All thoughts of assignment deadlines were thrown to the wind, and a day later Ian had his own game in C written with this new technique. It was a mind-altering experience, taking him to an entirely new understanding of what was possible. The enemies he’d always coded were stuck following fixed paths or waited until the player came close before homing right in. In the FSM he saw the prospect of modeling complex emotional states, triggers, and behaviors. And he knew Game AI was what he wanted to do.

Ian’s second memory is more than ten years later. Using some technology developed to simulate military tactics, he had founded a company called Mindlathe, dedicated to providing artificial intelligence middleware to games and other real-time applications. It was more than two years into development, and the company was well into the process of converting prototypes and legacy code into a robust AI engine. Ian was working on the steering system, producing a formation motion plug-in.

On screen he had a team of eight robots wandering through a landscape of trees. Using techniques in this book, they stayed roughly in formation while avoiding collisions and taking the easiest route through more difficult terrain. The idea occurred to Ian to combine this with an existing demo they had of characters using safe tactical locations to hide in. With a few lines of code he had the formation locked to tactical locations. Rather than robots trying to stay in a V formation, they tried to stick to safe locations, moving forward only if they would otherwise get left behind. Immediately the result was striking: the robots dashed between cover points, moving one at a time, so the whole group made steady progress through the forest, but each individual stayed in cover as long as possible.

The memory persists, not because of that idea, but because it was the fastest and most striking example of something we will see many times in this book: that incredibly realistic results can be gained from intelligently combining very simple algorithms.

Both memories, along with our many years of experience, have taught us that, with a good toolbox of simple AI techniques, you can build stunningly realistic game characters—characters with behaviors that would take far longer to code directly and would be far less flexible to changing needs and player tactics.

This book is an outworking of our experience. It doesn’t tell you how to build a sophisticated AI from the ground up. It gives you a huge range of simple (and not so simple) AI techniques that can be endlessly combined, reused, and parameterized to generate almost any character behavior that you can conceive.

This is the way we, and most of the developers we know, build game AI. Those who do it long-hand each time are a dying breed. As development budgets soar, as companies get more risk averse, and as technology development costs need to be spread over more titles, having a reliable toolkit of tried-and-tested techniques is the only sane choice.

We hope you’ll find an inspiring palette of techniques in this book that will keep you in realistic characters for decades to come.

"On two occasions I have been asked [by members of Parliament!]: 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out ?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question." — Charles Babbage.