game-ai « WordPress.com Tag Feed – http://en.wordpress.com/tag/game-ai/
Feed of posts on WordPress.com tagged "game-ai". Sun, 02 Aug 2015 20:28:29 +0000
https://activebytes.wordpress.com/2015/07/24/coding-the-tic-tac-toe/
Fri, 24 Jul 2015 13:25:51 +0000, by prateekpandey1992
Taking the discussion further from the last post, let's see how we can use the concept of states to write board game AI efficiently. We'll discuss concepts like Minimax and alpha-beta pruning, touch briefly on hashing functions and how they can significantly reduce our state graph, and talk about bit-matrix board representation. Tic-Tac-Toe is the best candidate for explaining all of these methods. Once our basics are clear, we can apply the same ideas to more complex games like Go, Othello and Chess.

PS: I am not an expert in these topics but rather learning with you.

Board Representation: We'll represent the board as bit matrices, as they are efficient and easy to operate on. We'll use two integers, Wa and Wb, as the bit matrices for the two players. Each matrix is an integer comprising 9 bits: 1 for a marked position and 0 for unmarked. To find the empty positions we can simply compute 111111111 – (Wa + Wb), which works because the two players' bits never overlap.
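
A minimal Python sketch of this representation (the function names and bit layout are mine, for illustration):

```python
# Each cell 0..8 (row-major) maps to one bit of a 9-bit integer.
FULL = 0b111111111  # all nine cells marked

def mark(board: int, cell: int) -> int:
    """Return the board with the given cell's bit set."""
    return board | (1 << cell)

def empty_cells(wa: int, wb: int) -> int:
    """Bit mask of unmarked cells. Since Wa and Wb never overlap,
    FULL - (wa + wb) equals ~(wa | wb) & FULL."""
    return FULL - (wa + wb)

wa = mark(0, 0)  # X takes the top-left corner
wb = mark(0, 4)  # O takes the centre
print(bin(empty_cells(wa, wb)))  # 0b111101110
```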

Check winner: With a bit matrix, checking for a winner is very simple. We keep a list of all eight winning bit matrices (three rows, three columns, two diagonals). To check for a winner, simply AND each winning matrix with the player's matrix; if the result equals the winning matrix, we have a winner.
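
With the same row-major bit layout, this check might look as follows (the masks are the standard eight lines, written out by me rather than taken from the post):

```python
# The eight winning lines as 9-bit masks (bit i = cell i, row-major).
WINS = [
    0b000000111, 0b000111000, 0b111000000,  # rows
    0b001001001, 0b010010010, 0b100100100,  # columns
    0b100010001, 0b001010100,               # diagonals
]

def is_winner(player: int) -> bool:
    """AND the player's matrix with each winning mask; a full match wins."""
    return any(player & w == w for w in WINS)
```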

A board hashing function must be simple, efficient and collision-free. A good hashing function can also help prune the state tree further by accounting for equivalent board positions generated by board rotation and mirroring; all such positions must be evaluated identically. How can we achieve these goals? The solution I implemented uses transposition and mirroring to prune out equivalent board positions.
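
One way to realize this (a sketch of the idea, not necessarily the exact scheme used here) is to canonicalize each position to the smallest of its eight symmetric variants before hashing:

```python
def transform(board: int, mapping: list) -> int:
    """Move bit i of `board` to position mapping[i]."""
    out = 0
    for i in range(9):
        if board >> i & 1:
            out |= 1 << mapping[i]
    return out

# Cell i sits at row i//3, column i%3; these permutations realize the
# square's symmetries for a row-major 3x3 layout.
ROT90 = [3 * (i % 3) + (2 - i // 3) for i in range(9)]   # (r, c) -> (c, 2-r)
MIRROR = [3 * (i // 3) + (2 - i % 3) for i in range(9)]  # flip columns

def canonical(wa: int, wb: int) -> tuple:
    """Smallest (wa, wb) pair over all rotations and mirrors; symmetric
    positions hash to the same key, shrinking the state graph up to 8x."""
    best = (wa, wb)
    for flip in (False, True):
        a, b = (transform(wa, MIRROR), transform(wb, MIRROR)) if flip else (wa, wb)
        for _ in range(4):
            a, b = transform(a, ROT90), transform(b, ROT90)
            best = min(best, (a, b))
    return best
```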

Well, I got the general-case solution working, copied all of the code I'd already written for the entity behavior over to where it needs to be, and built it all, and it more or less runs. So I guess now I'm on to debugging and improving it, creating more animations, and creating the alternate versions of this entity. It's all a lot of work, but I knew it would be: the problem at this point is mostly just that it's been really hot and I've been having trouble keeping my motivation going.

But, you know, my motivation has flagged plenty of times during this project, and it still keeps on chugging along. As long as I get a bit of work in every day, it will continue to move forward. Yes, I’d prefer to get in more than a little, but sometimes it’s going to be hard.

That’s just how it is.

So, this week will be mostly creating new animations to flesh out this entity, implementing them, and fidgeting with the code to make the behavior better and more naturalistic. Hopefully productivity will pick up, but even if it doesn’t progress will get made, just perhaps very slowly.

Another week on programming this entity's behavior. A few days in, I was getting close to having it all working when I ran into the dread disease programmitis, bane of programmers everywhere. No, I'm not talking about carpal tunnel syndrome. No, I'm not talking about getting a sore butt from sitting in a computer chair all day! I'm talking about the general use case.

Well, to make a long story short, it was pretty easy to create a generalized state machine AI behavior, but it was a lot more difficult to break all of the state code out into individual files, requiring me to separate all the variables used out into each of those classes rather than keeping them all in one central place in a cluttered but easily comprehensible way. It was also a lot of work taking the most general functionality, such as tests to see whether one entity can see or hear another and the code to navigate a path, and extracting that into an EntityTools utility class that I can use for all future entity behaviors.

In other words, all the traditional ways that programmers get sidetracked and waste a ton of time.

Is this time going to be a waste? Dunno! Pretty sure I could have at least done this in a better order, such as finishing getting the entity working, then extracting the general-use code out into an EntityTools class, then generalizing the state machine AI into a reusable behavior. That would have been the smart way to do it, probably. Oh well!

As things stand, if I can focus it should still come together pretty fast and be working within a matter of a day or two. That’s a big ‘if’, though, with the temperatures this week dancing up around the high 90’s and 100’s of degrees and me stuck in a tiny room with no air conditioning. Well, I’ll try to make steady progress, and maybe if I get lucky I’ll even eventually start making fast progress too.

I started working on all of the behavioral programming stuff, then I got completely sidetracked for a couple of days when I realized that I didn’t have any centralized document with all of my story content in plain language. Up until now I’d gotten by pretty much keeping all that stuff in my head, and occasionally writing down bits and pieces of it, more often than not in the form of stories which were more metaphorical than accurate to the reality of what’s supposed to be going on in the story. This, I realized, was making it difficult to append to the story and to plan out how I was going to tell it, because I had no centralized resource to refer back to to make sure I wasn’t contradicting myself. It took me several hours to write it all down, and it ended up being more than 3,000 words, which is a pretty good sign I should have done it sooner. It’s still somewhat subject to revision, but revisions should be more along the lines of expanding and going into more detail about unclear concepts than changing the specifics (unless I come up with a really good idea, of course).

After that I went back to programming the behavior of these entities, and I found that both my production planning using Trello and the detailed story breakdown of the last week or so of work came in very handy, since I immediately found myself breaking down all of the behavioral specifics that had been confounding me into a set of relatively easy-to-manage behavioral states. Thus, rather than trying to think of the behavior of these enemy types as an impenetrable wall of if-then statements, I can parse this behavior much more readily as a set of simpler behaviors that switch to other simple behaviors based on the input. So, for instance, I can have a patrol state that does nothing except walk forward, and then have it switch to either an idle action state at random intervals, a turn around and patrol the other direction state if it hits a wall or the edge of its patrol radius, or a pursuit state if it sees the player. At that point, all I have to do is write the 5 or 6 lines of code that control each state and the entity should work.
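
As a rough sketch of that idea (the state names and entity fields here are illustrative, not the actual game code), each state can be a tiny class whose update returns the next state:

```python
import random

class Patrol:
    def update(self, entity):
        entity.x += entity.facing                 # walk forward
        if entity.sees_player:
            return Pursue()
        if entity.at_wall or abs(entity.x - entity.home) > entity.radius:
            return Turn()
        if random.random() < 0.01:                # occasional idle break
            return Idle(duration=30)
        return self

class Turn:
    def update(self, entity):
        entity.facing *= -1                       # face the other way
        return Patrol()

class Idle:
    def __init__(self, duration):
        self.timer = duration
    def update(self, entity):
        self.timer -= 1
        return self if self.timer > 0 else Patrol()

class Pursue:
    def update(self, entity):
        entity.facing = 1 if entity.player_x > entity.x else -1
        entity.x += entity.facing                 # chase the player
        return self if entity.sees_player else Patrol()
```

The game loop just calls `state = state.update(entity)` each tick, which keeps each state's logic to the handful of lines described above.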

I have the basic version almost up and running, but the code to handle when and how to attack is still a bit tricky since it deals with the specifics of positioning which vary from attack to attack. All of the movement stuff is pretty much there, though it will inevitably require some debugging, and I still need to generate the timing info for things which rely on a timer (attack recovery, patrol delays, alert time, etc.) I think I can finish up the basic behaviors tomorrow, at which point I go back and do all the prototype animations needed to fully animate those behaviors – after that, I go back in and add all the code in for another version of this enemy, such as the scout or rider variants, and then I make the animations for them – and so on, and so forth, until they’re all done.

Making future enemies should be quite a bit easier after this, since I think these will be by far the most complex of any non-boss enemy in the game. Even for cases as intricate as this, in the future I’ll have these guys as a template to work from, so I expect any future problems to be quite a bit more approachable.

Well, I’m back in the old country for the holidays, and boy do I feel like the smartest pickle in the jar for moving to California when it’s below zero here in Montreal (either scale).

There is one thing I miss about the old country: French Scrabble. I’ve spent an inordinate amount of time memorizing word lists and playing the game, and now it’s a pain to find a partner in good ol’ US of A.

So I decided to start studying English Scrabble, and I figured I could use some stats to prioritize my learning*. Quackle is a Scrabble solver with a (well-hidden) command line interface that can play games against itself. The AI agent, called ‘speedy player’, uses heuristics rather than Monte Carlo simulations to determine which move to play; it’s very fast, yet it has access to the full dictionary (TWL06, the North American tournament dictionary**) and plays a kick-ass game.

ZOEAE and UMIAQ are legit words, apparently

I let it run for 100,000 games and, with some Python glue (pandas mostly), compiled a list of the best words by various criteria: points per play, plays per game, and points per game (= points per play × plays per game). I grouped words by form (i.e. axe, axes, axed, etc. count as one root word) using a dictionary I found in Zyzzyva. The most useful word in Scrabble is…

Qi. Indeed, Qi is the only two-letter word that contains a Q, and one of only a few dozen words that contain a Q but not a U. Therefore, it is played very frequently, in 7 out of every 10 games. The next few are:

BE

RE – as in do, ré, mi…

ZA – short for pizza

ER – an expression of hesitation

The top 50 is in fact dominated by such two-letter words, which, while often not valuable in themselves, can be used to hook onto other, more useful words.

It is surprising how skewed the distribution of word values is. While knowing Qi leads to a whopping +22 points-per-game advantage, the 101st most common word, joe, gives an expected improvement of around a single point. Thus, the most useful words are by far the short two-letter words, followed by three-letter words containing high-paying letters, followed by a very long tail of infrequently used words.
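
The pandas glue described above might look roughly like this (the data below is a made-up toy sample; the real run aggregated 100,000 Quackle games):

```python
import pandas as pd

# One row per play: game id, root word, points scored by that play.
plays = pd.DataFrame({
    "game":   [1, 1, 2, 2, 2, 3],
    "word":   ["qi", "zax", "qi", "joe", "qi", "zax"],
    "points": [11, 19, 21, 10, 11, 28],
})
n_games = plays["game"].nunique()

stats = plays.groupby("word")["points"].agg(
    total="sum", plays="count", per_play="mean")
stats["plays_per_game"] = stats["plays"] / n_games
stats["points_per_game"] = stats["per_play"] * stats["plays_per_game"]
print(stats.sort_values("points_per_game", ascending=False))
```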

Verbs are an interesting subset of words, because they have a lot of alternate forms, and thus by learning a verb’s root you add several to your vocabulary. Here’s the top 20, which includes quite a few surprises:

What about high-paying words (per play) that are used rather frequently (more than 1 out of every 1000 games)? We have:

ANTIQUE

DISRATE – to lower in rank

STEARIN – the solid portion of a fat

RETSINA – a resin-flavored Greek wine

TERTIAN – a recurrent fever

This list shows a consistent pattern – bingos created with low-paying letters (modulo the Q in antique) – letters which are highly likely to co-occur if you manage your leaves well.

This brings us to the important subject of the optimal leave, that is, which combination of letters on the rack leads to the best plays. We can repeat the same exercise and compute the average number of points on a turn given a certain rack, etc. The top 10 in terms of total points per game are:

AEINRST

EINORST

AEEINRT

AEGINRT

EEINRST

AEILNRT

AEEIRST

AEILRST

AEINORT

AENORST

The best leaves in terms of total points are dominated by the letters EITRNASL, forming LATRINES. If we look at the most paying leave per play, however, we find leaves dominated by the blank tile ? and high-paying letters – these leaves are infrequent, but when they can be placed for a bingo, with high-paying letters and assorted double and triples, they’re worth a ton.

Fuzzy Logic is a technique where, instead of a simple boolean indicating the state of an object or AI character, the state is determined by a degree of membership, generally a float value between 0 and 1. This allows for a more granular assessment of, and reaction to, the environment. A fuzzy system takes literal input data and fuzzifies it, comparing it against possible ranges of values in order to determine the group, or membership, of the input.

Example of how fuzzy logic could be used to interpret temperatures. Rather than a boolean which says hot/cold, there is instead a temperature value with the state determined based on a range.

Demonstration of the difference between boolean logic and fuzzy logic.
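
As an illustration of that difference (the temperature ranges below are invented for the example), membership functions replace the hot/cold boolean:

```python
def trapezoid(x, a, b, c, d):
    """Membership rises over a..b, is full over b..c, falls over c..d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def fuzzify(temp_c):
    """Map a raw reading to degrees of membership in each group."""
    return {
        "cold": trapezoid(temp_c, -40, -39, 5, 15),
        "warm": trapezoid(temp_c, 5, 15, 22, 30),
        "hot":  trapezoid(temp_c, 22, 30, 50, 51),
    }

print(fuzzify(10))  # 50% cold, 50% warm: not a hard boolean
```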

References: XNA Fuzzy Logic code sample (there is a document discussing Fuzzy Logic included with the code sample) –
The document included with the code sample gives some practical examples for AI and implementation. The Sample Overview at the start of the document was the part I found most useful, as it gave practical examples of how Fuzzy Logic could be applied in games.

Fuzzy Logic for AI in Games & AI for Game Developers, chapter 10 (the site has unfortunately disappeared, so I am linking to archive.org) –
These documents explain more about how fuzzy logic is implemented conceptually, covering the key concepts of Membership, how much something belongs to a group, and Fuzzification, mapping literal values to degrees of membership.

For my tech demo I decided to implement the UI for a theoretical survival game using Unreal Engine 4. First, the reason I chose Unreal4 was that it has a built in UI system which could save me time over implementing my own (and I wanted to learn the Unreal UI system). Second, the reason I decided to create just the UI for a theoretical game was that I felt a survival game would be good for demonstrating the partial states that Fuzzy Logic is used for, but a full survival game would be massively out of scope. So instead I decided to create just the UI for it and manually manipulate the values to affect the state of the player character.

The way my tech demo works, there is a gauge for the player’s health and hunger, as well as the ambient temperature, wind direction, and wind speed. All of these values can be adjusted using sliders within the UI, and by checking the box next to the hunger meter, the player will become hungrier over time. Adjusting these values affects the player’s state, which can be seen in the bottom left corner. Their overall state is based on how hungry, cold, and healthy they are, with each factor affecting them individually.
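
A hypothetical sketch of how such a state readout could combine the gauges (the thresholds and the wind-chill coefficient are made up, not taken from the demo):

```python
def hunger_distress(hunger):
    """0 = full, 100 = starving; distress ramps up past 40."""
    return min(max((hunger - 40) / 60, 0.0), 1.0)

def cold_distress(temp_c, wind_speed):
    # Wind makes it feel colder; the 0.5 coefficient is illustrative.
    felt = temp_c - 0.5 * wind_speed
    return min(max((10 - felt) / 30, 0.0), 1.0)

def overall_state(hunger, temp_c, wind_speed, health):
    """Take the worst distress degree and map it to a coarse label."""
    distress = max(hunger_distress(hunger),
                   cold_distress(temp_c, wind_speed),
                   1.0 - health / 100)
    if distress < 0.25:
        return "fine"
    return "struggling" if distress < 0.75 else "critical"
```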

Until recently, research on videogames was mainly focused on making games more realistic by improving graphics and sound. However, in recent years, hardware components have experienced exponential growth and players demand higher-quality opponents controlled by better artificial intelligence (AI). In this context AI plays an important role in the success or failure of a game, and some major AI techniques have already been used in existing videogames (e.g., evolutionary computation and neural networks are beginning to be considered, with moderate success).

In First Person Shooter (FPS) games, requiring higher-quality opponents means obtaining enemies that exhibit intelligent behavior; however, it is not easy to evaluate what a ‘human-like intelligence’ means for a bot in these games. Generally speaking, it is well known that the Turing Test is a procedure proposed by Alan Turing to corroborate the existence of intelligence in a machine (more information in [1]). The basic premise is that a machine that behaves intelligently might be considered intelligent in the human sense (this sounds like Terminator :-)

2k games

In this context, the “2K BotPrize” is a competition that proposes an interesting adaptation of the Turing Test in the context of the well-known FPS game Unreal Tournament 2004 (UT2004), a multi-player online FPS game in which enemy bots are controlled by some kind of game AI.

UT2004

The 2K BotPrize has been sponsored by 2K Australia since 2008, and the goal is to create a computer game bot (a.k.a. non-player character or NPC) that is indistinguishable from a human player. In other words, based on the reasoning (and fact) that computers are fast and accurate at playing games, the organisers ask: can a bot (i.e., a non-human player) play like a human player? As it is written on the web page of the BotPrize 2014:

People like to play against opponents who are like themselves – opponents with personality, who can surprise, who sometimes make mistakes, yet don’t blindly make the same mistakes over and over. The BotPrize competition challenges programmers/researchers/hobbyists to create a bot for UT2004 (a first-person shooter) that can fool opponents into thinking it is another human player.

This competition was created and is usually organised by Associate Professor Philip Hingston, of Edith Cowan University, in Perth, Western Australia. The competition has been sponsored by 2K games since 2008, with up to $7000 prize money.

Do you accept the challenge? Do you dare to try implementing a human-like bot for an FPS game? Try it… Let’s try…

(Note: In the past, I worked on this issue and you can find a paper of mine in [2]. I will insist more on this issue in further posts)

The videogame industry has taken the lead role in the entertainment business, with total consumer spending of $24.75 billion in 2011 [1] and estimated game revenues of $70.4 billion worldwide in 2013 (a 6% year-on-year increase), according to Newzoo’s 2013 Global Games Market Report [2]. Moreover, the number of gamers was expected to surpass 1.2 billion by the end of that year. This situation has motivated research applied to videogames, which has been acquiring notoriety during the last years, involving many areas such as psychology and player satisfaction, marketing and gamification, computational intelligence, computer graphics, and even education and health (serious games). The quality and appeal of videogames used to rely on their graphical quality until the last decade, but now their attractiveness rests on additional features such as the music, the player’s immersion in the game and interesting storylines. It is hard to evaluate how amusing a game is, because this evaluation depends on each player; nevertheless, there is a relationship between player satisfaction and fun [3]. Nowadays, interesting new challenges and goals are emerging within the area of videogames, especially in the field of artificial and computational intelligence in games [4].

As I have already mentioned in a previous post, Procedural Content Generation (PCG) [5, 6] refers to the algorithmic creation of game content, with or without human intervention, such as maps, levels, textures, characters, rules and quests, but excluding the behavior of non-playable characters (considered in the scientific community as generation of game AI rather than procedurally generated content… a difference that might be the subject of a future post ;-) and the game engine itself. The use of PCG has several advantages, including saving memory and disk space, improving human creativity and providing adaptivity to games. These benefits are well known by the industry, as demonstrated by the use of PCG techniques during the development of commercial games such as the Borderlands saga, with procedurally generated weapons and items, Skyrim (terrains and forests), and Minecraft or Terraria, with procedurally generated worlds. We are thus in front of an exciting field that can contribute significantly to changing the way videogames are produced (implemented).

At the moment, there are three main goals [5] of PCG research that are not currently attainable and would require significant further research effort:

Multi-level multi-content PCG (i.e. systems that are able to generate multiple types of quality content at multiple levels of granularity in a coherent fashion while taking game design constraints into consideration),

PCG-based game design (i.e. creating games where a PCG algorithm is an essential part of the game instead of being a design tool) and

PCG systems that could create complete games, including the rules and game engine.

Are you ready to take up the challenges?

SERIGAMES Spain

Three exciting goals, no doubt! But also three really-very-difficult-to-handle challenges! Precisely to deal with PCG and its main goals, the SME SERIGAMES Spain S.L. has recently been created (well, to be honest, we are taking decisive steps in its creation)… I hope I can tell you more about this company in the near future…

Note: Part of this post has been taken (and adjusted) from a paper of mine co-authored with Carlos Cotta and Raúl Lara-Cabrera that has been submitted for publication in a research journal.

https://doctorcep.wordpress.com/2014/04/28/ai-for-games-01/
Mon, 28 Apr 2014 11:05:02 +0000, by Doctor_CEP
Here is some intro to game AI that you may find useful. It has 4 parts.

Designing Artificial Intelligence for Games (Part 1)

https://ironypolicy.wordpress.com/2014/02/20/possession-2-miscellaneous-devlog-1/
Thu, 20 Feb 2014 18:11:50 +0000, by Taylor
I’m working two jobs right now, and the past few weeks have been pretty busy for me. I have managed to continue working on Possession 2, just not as much as I’d like. Today’s post isn’t really focusing on any one aspect in particular, just highlighting some of the things I’ve been working on.

I’ve made some more AI improvements. Smarter creatures now try to avoid dangerous terrain, such as lava and fires, while dumber creatures will just walk straight through.

Dumb skeleton walks straight through the fire. (Animated.)

Smarter caretaker paths around the fire. (Animated.)

I’ve also made spellcasting AI improvements, so that creatures running away can use defensive spells like teleportation, and creatures can also use positive spells (like healing) on their allies.

Content-wise, I’ve also continued work. Here are shots from a few new special levels. First of all, the ruins of an underground city full of Lovecraftian monsters. It’s mostly finished, just needs a little more work on some of the creature powers.


There’s also a few areas that are still very much in progress, a nature preserve (full of flammable grass and trees!) and the ruins of a demon city.

Affective Computing (AC), in a very simplified view, consists of applying the principles of Computer Science (CS) to the computation of feelings, emotions and other affective phenomena. From one perspective, AC combines fundamentals of Computer Science, Psychology and Cognitive Science and, among other issues, analyzes ways to provide emotions to machines and to develop machines that can express feelings; yes, you are reading that right! AC can be considered a fairly recent branch of CS: in the past, CS mainly focused on providing intelligence to machines or, in other words, CS (in the form of Artificial Intelligence) promoted the generation of computing methods that can “imitate human behavior”, but the fact is that, until recently, emotions were largely ignored by the Computer Science community.

Artificial Intelligence centered (and still focuses today) on developing techniques that help create entities (e.g., robots, or simulation software such as an artificial interlocutor) controlled by a decision-making mechanism that might be considered human if it imitates or replicates human behavior (understood as a decision-making procedure). However, what about feelings and emotions? Affective Computing might be considered the extension of Artificial Intelligence to the emotional universe (if one considers emotions instead of intelligence). Of course, there is much to say here! Can you imagine, in the future, a machine expressing stress or fear? The question is: is a machine able to express feelings or, more profoundly, simply to feel like a human? Hmmm, why not? I really think so. The first complicated task in attaining this goal (i.e., computers that feel) is to provide a precise concept of emotions/feelings, and to take into account that not every person feels the same way, and that the intensity of expressed emotions differs from one individual to another; moreover, a truly complex task is finding a way to measure emotions, and deciding how that task can be handled.

As already mentioned I prefer to think that this is possible, but I do not want to create a debate here as there are a number of ethical issues that I do not want to discuss here (this might be the issue of a future post ;-); I just will say that there are many researchers that are investigating this line of work. For instance, in MIT Media Lab, you can have a look at the Affective Computing Group:

Emotion is fundamental to human experience, influencing cognition, perception, and everyday tasks such as learning, communication, and even rational decision-making. However, technologists have largely ignored emotion and created an often frustrating experience for people, in part because affect has been misunderstood and hard to measure. Our research develops new technologies and theories that advance basic understanding of affect and its role in human experience. We aim to restore a proper balance between emotion and cognition in the design of technologies for addressing human needs. (Affective Computing Group, accessed 10th November, 2013)

One of the main issues that Affective Computing deals with is the capacity of a machine to provide empathy, in the sense that the machine might perceive human emotions and act according to them. As you can intuit, my dear reader, this is not an easy task, as it is not easy to measure emotions even in human “creatures”. This is an obstacle and, at the same time, a motivation, as recognizing emotions of any kind is hard.

What I have mentioned above can be extrapolated to the videogame area, as a bot is an extension of reality (basically a simulated machine). Think, for instance, of an opponent (or an NPC teammate) of yours (e.g., in the context of an FPS game) showing different mood states according to the changes that the story or the scenario is undergoing; I mean non-preprogrammed changes of mood, non-scripted emotional jumps. So a bot might get angry when you shoot him or one of his mates, or might feel fear if he perceives a certain degree of loneliness or senses that he (the bot) will be relentlessly annihilated by you in a few moments of the game. Bots that emotionally react to perceptions and are sensitive to the environment? Yes, they would definitely be welcome!! From my point of view, this would introduce a new perspective on videogames, and would drastically increase the sensation of reality, with a (supposed and assumed) increase in player satisfaction… but the truth is that one never knows! In any case, this is an issue that is attracting the interest of many researchers in the area of computational intelligence applied to videogames (I promise to say more and give details in the future).

In any case, what I have mentioned here is just a partial view of Affective Computing, and only the tip of the iceberg: I did not go into the origins of Affective Computing, nor into its multiple possibilities or its social applications; perhaps (surely) I will consider these issues in the future, but not today. In any case, I (and surely you) can think of a number of serious applications of Affective Computing in the videogame field, I mean useful applications in society (health, administration, social services, etc.).

https://antoniojfernandezleiva.wordpress.com/2013/10/16/game-ai-some-classical-examples/
Wed, 16 Oct 2013 22:50:11 +0000, by afdez2013
The application of Artificial Intelligence (AI) techniques to games is not new, and one can find many examples of it. I will not speak (yet) about the employment of advanced AI methods (e.g., bio-inspired optimization algorithms, neural networks, swarm intelligence, etc.) in the development of modern games (surely I will in future posts), as first I want to introduce this issue (i.e., game AI) to the interested reader; so here you can find an interesting post about the top ten influential applications of AI in games before 2008 (Click here).

Added multi-movement-type functionality and portal connectivity. The multi-movement types are fairly self-explanatory. For portals, run A* as usual, but besides checking all geometrically connected polygons, also check the ones connected via a portal; if two polygons are connected via a portal, the estimate of their mutual distance is 0. A* then returns a list of polygons, and if portals were used in the path, the list is not continuous, say, 1, 2, 3, 7, 8, 9, 12, 13, 14. Meaning polygon 3 has a portal and we arrive at 7 directly, then we take a portal on 9 and reach 12 directly. So I run the funnel algorithm three times, once over each continuous segment (1~3, then 7~9, then 12~14), and join their paths together.
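
The segment split can be sketched like this (assuming, as in the example above, that geometrically adjacent steps happen to have consecutive ids; a real implementation would mark portal edges explicitly):

```python
def split_at_portals(polys):
    """Break a polygon path into consecutive runs; the funnel algorithm
    then runs once per run, and the sub-paths are concatenated."""
    runs, current = [], [polys[0]]
    for p in polys[1:]:
        if p == current[-1] + 1:
            current.append(p)
        else:                      # non-consecutive id => portal jump
            runs.append(current)
            current = [p]
    runs.append(current)
    return runs

print(split_at_portals([1, 2, 3, 7, 8, 9, 12, 13, 14]))
# [[1, 2, 3], [7, 8, 9], [12, 13, 14]]
```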

So by now I think I have a pretty good understanding of A* and navmeshes and how they work together, and could even do some beyond-basic tricks with them. The project still has lots of room to optimize. During the past three weeks, or maybe four, I lost count, I’ve been working on it from time to time on weekends and some workday nights. So probably about 30 hours have been spent, again I lost count. I think now is a good time to refresh my mind a little bit and unplug from it for a while. As for the potential improvements, I might switch back to this project in the future and work on them.

I haven’t determined what the next thing I want to work on is; I will use this week to come up with something interesting. Maybe it’s time to learn some WebGL, or maybe dig deeper into procedural city generation (I’ve done some work related to it before, as a student in a class project, but since time was limited I didn’t implement it very well), or maybe try some other game technology. There is so much I need to learn, ah…

Now it’s Monday morning; let’s wish this upcoming week is a good one! :)

This is the last task of the course. Whether it is useful or not, whether it ends well or not, I’ve tried my best, with all my shortcomings. I only try to learn: what do I live for, why do I live, why am I happy to learn all this, what is the actual purpose of my life? Can anyone explain it?

– The HPA* algorithm is shown to perform faster than the usual A* algorithm in many cases in the FindYourLetters board game.

– As the level in the game FindYourLetters rises, resource usage (CPU and memory) also grows.

– Analysis of the simulation results shows that HPA* is 2 to 9 times faster than A* in many cases in the FindYourLetters board game. Over 30 test runs of the implementation, HPA* was slightly more efficient in resource (CPU and memory) usage, by about 1–3%, than A*. A larger map and denser obstacles would be required to increase the accuracy of the HPA* efficiency analysis.

The AI has quite a simple general structure. Apart from special cases like restarts, goalkeeper AI, etc., the main gameplay AI consists of three states: defensive AI, offensive AI, and the AI for when the agent is about to kick the ball. Whether a player is in the defensive or offensive state depends on the match situation (where the ball is, who has the ball) and the team tactics (the agent’s position).

The defensive state is basically the decision “where should I stand to make the opponent’s offense difficult”. There are a few options: try to take the ball from the opponent, try to block an opponent’s pass or shot, or guard some area or opponent. The AI simply assigns scores to each action and picks the action with the highest score. (This, I suppose, is a pretty standard AI technique; it seems to be inspired by utility functions and is used throughout the Freekick 3 AI.)
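
The score-and-pick pattern can be sketched as follows (the actions and scoring functions here are stand-ins, not the real Freekick 3 code):

```python
def choose_action(actions, world):
    """Score every candidate action and return the highest-scoring one."""
    return max(actions, key=lambda act: act.score(world))

class BlockPass:
    def score(self, world):
        return 0.6 if world["opponent_has_ball"] else 0.0

class FetchBall:
    def score(self, world):
        return 0.9 if not world["opponent_has_ball"] else 0.2

world = {"opponent_has_ball": True}
best = choose_action([BlockPass(), FetchBall()], world)
print(type(best).__name__)  # BlockPass
```

The nice property of this pattern is that new actions (or tactics coefficients multiplied into a score) slot in without touching the selection logic.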

The offensive state is even simpler than the defensive state: the player either tries to fetch the ball or tries to place himself in the best possible supporting position, which should be somewhere he can be passed to, far away from opposing players, and in a good shooting position. (The AI builds a kind of influence map that is also affected by some soccer-specific things such as the offside rule.)
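An influence-map-style evaluation of candidate supporting positions might look like the following minimal sketch (the scoring terms and constants are assumptions, not Freekick 3’s actual formula, and it ignores passability and the offside rule for brevity):

```python
import math

def support_score(pos, opponents, goal=(100.0, 50.0)):
    # Reward positions near the goal (good shot position)...
    score = 1.0 / (1.0 + math.dist(pos, goal))
    # ...and subtract an "influence" penalty for each nearby opponent.
    for opp in opponents:
        score -= 0.5 * max(0.0, 1.0 - math.dist(pos, opp) / 15.0)
    return score

def best_support_position(candidates, opponents):
    # Evaluate a set of candidate spots and pick the highest-scoring one.
    return max(candidates, key=lambda p: support_score(p, opponents))
```

A real influence map would typically be a grid updated each frame, with each team “painting” influence around its players; the per-candidate evaluation above is the same idea applied pointwise.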

Probably the most important decision the AI has to make is what to do when the agent has the ball. Again, like with the defensive state, the AI has a few different possible actions, and it assigns scores to all of them and simply picks the action with the highest score.

The possible actions are Pass, Shoot, Long Pass, Clear, Tackle and Dribble. I’ll start with the Pass action.

When deciding the pass target, the AI loops through all of the friendly players and keeps track of the best option. For each player that’s not too far or too close, the AI considers either passing directly to the player or trying a through-pass. The base score for the pass is highest for players nearer the enemy goal, and then decreased for each opponent player that is seen as a possible interceptor.
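The pass-selection loop described above might be sketched as follows. The distance thresholds, goal position, interception radius, and penalty factor are all invented for illustration:

```python
import math

def point_near_segment(p, a, b, radius):
    # Distance from point p to segment a-b, compared against radius.
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy) <= radius

def pass_score(passer, mate, opponents, goal=(100.0, 50.0)):
    dist = math.dist(passer, mate)
    if dist < 5.0 or dist > 40.0:          # skip teammates too close or too far
        return 0.0
    # Base score is highest for teammates near the enemy goal...
    score = 1.0 / (1.0 + math.dist(mate, goal))
    # ...and is decreased for each opponent seen as a possible interceptor.
    for opp in opponents:
        if point_near_segment(opp, passer, mate, radius=2.0):
            score *= 0.5
    return score

def best_pass_target(passer, teammates, opponents):
    # Loop through all friendly players and keep track of the best option.
    return max(teammates, key=lambda m: pass_score(passer, m, opponents), default=None)
```

A through-pass variant would score a point ahead of the teammate’s current position instead of the position itself; it is omitted here to keep the sketch short.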

The Shoot action score is basically a function of the distance to the opponent’s goal and the distance from the opponent’s players (especially the goalkeeper) to the possible shot trajectory.

The Long Pass action is actually a composite of Shoot and Pass – it checks for the Shoot and Pass action scores of the friendly players and chooses to make a long pass (or cross) to the player with the highest score. As with other actions, the score is multiplied by a team tactics coefficient, allowing the coach to influence the team’s playing style.

Clear and Tackle are basically emergency brakes that the AI can pull in a situation where the ball needs to be kicked away from the own goal or the opposing players.

With Dribble, the AI creates a few possible rays at regular angle intervals around the agent as possible dribble vectors and assigns scores to each of them. Similar to shooting, the score is higher near the opponent goal, but decreased by opponent presence.
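The dribble-vector idea, i.e. casting rays at regular angle intervals and scoring each one, can be sketched like this (ray count, ray length, and the scoring terms are assumptions for illustration):

```python
import math

def dribble_direction(agent, opponents, goal=(100.0, 50.0), n_rays=8, length=10.0):
    """Cast n_rays rays at regular angle intervals around the agent and
    return the unit direction of the highest-scoring ray."""
    best, best_score = None, -math.inf
    for i in range(n_rays):
        angle = 2.0 * math.pi * i / n_rays
        end = (agent[0] + length * math.cos(angle),
               agent[1] + length * math.sin(angle))
        # Similar to shooting: higher score near the opponent goal...
        score = -math.dist(end, goal)
        # ...decreased by opponent presence near the ray's endpoint.
        for opp in opponents:
            score -= max(0.0, 20.0 - math.dist(end, opp))
        if score > best_score:
            best, best_score = (math.cos(angle), math.sin(angle)), score
    return best
```

With no opponents around, the agent simply dribbles toward the goal; an opponent directly ahead pushes the best ray toward a diagonal around him.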

So, in the end, the AI is composed of several rather simple techniques. It’s all heuristics, without any algorithms providing optimal solutions (if any can even be used in soccer AI). It uses some simple steering behaviors (arrive at a position, chase the ball). There’s one simple state machine, with state transitions decided mostly by the ball position. The top-level AI is built around simple if-then-else statements (I suppose you could call them decision trees). Lots of the decision making uses some sort of fuzzy logic, even though it’s not really structured like that (instead the code itself is fuzzy). Still, the AI manages to seem smart in most cases; it plays rather well and presents a challenge for the human player (for me, at least).

There are still quite a few standard AI techniques that I haven’t implemented which might make sense for Freekick 3. For example it might be interesting to experiment with adding some kind of learning ability to the AI, which should be possible using reinforcement learning, or adding a more complicated planning process with the use of a decision network. A useful first step would be to extract all the used generic concepts like decision trees and fuzzy logic to their own code pieces, which would enable experimenting with things like learning a decision tree.

My conclusion is that there are lots of different game AI techniques, many of which are quite simple, and the key to creating a fun AI is to find out which techniques to use for which problem and how to combine the techniques. The techniques are often intuitive but can be also described mathematically, so that when reading up on game AI, you may, depending on the material, get an impression that game AI is either very non-scientific or very mathematical (and therefore difficult), while I think it can be either, depending on how you look at it.

For the interested, the ~500 lines of Freekick 3 AI that make up the core can be found at GitHub.

Abstract: GPUs (Graphics Processing Units) have evolved into extremely powerful and flexible processors, allowing their use for processing all kinds of data. This advantage can be exploited in game development to optimize the game loop. Most GPGPU work deals only with some steps of the game loop, leaving the CPU to process most of the game logic. This work differs from the traditional approach by presenting and implementing practically the entire game loop inside the GPU. This is a big breakthrough in game development, since CPUs are evolving toward multi-core, and future games will need parallelism similar to that of GPU programs.

Abstract: Learn how to develop faster and better games with GPGPU through the use of GPU tricks. Normally, games process most of their tasks on the CPU, using the GPU only for graphics processing. This session shows some techniques for better using GPGPU power to process all of the game logic, achieving speedups compared to the CPU and traditional GPU models. It also shows some examples of this technique in practice.

]]>https://datko.net/2012/05/06/ai-sportswriters-brewing-coffee-1890s-style/
Sun, 06 May 2012 21:23:16 +0000Joshhttps://datko.net/2012/05/06/ai-sportswriters-brewing-coffee-1890s-style/This month’s WIRED magazine, which I insist on receiving by mail, had some great articles. I also read books made of paper, so if you are Generation Y or later you may just want to go to the WIRED website and read these articles for free.

Anyway, onto my WIRED roundup with: Fewer Voters, Better Elections by Joshua Darvis. Scrap the one-vote-per-person system and run it like clinical trials, where 100,000 people are randomly selected to vote. This is certainly one way to implement voting reform… Personally, I think it would be interesting to have a different representative system. Currently, congressional representatives in the U.S. are elected based on a geographical area, with the idea being that a particular elected official accurately represents his or her

I don’t consider myself a hoarder, but I do keep my WIREDs. This is not my collection, but maybe one day…(Photo credit from flickr: outtacontext)

constituents based on location. But what if we had representatives based on profession? I feel that I agree with more software engineers than I do my neighbors. Passing thought experiments for sure, as I doubt any reform is forthcoming in the voting arena.

In a short product review, apparently the Bodum Bistro 11001 Coffeemaker is the thing to get these days. Me, I’ve switched to a French press, mainly out of necessity, since in my current living arrangement I do not have a counter. Essentially, coffee makers are expensive heating elements. They look nice, but basically they drip water and then keep it hot. So $250 seems a bit steep to me when there are cheaper ways to heat water. I also use a burr grinder and keep my coffee in a mason jar. I’m suddenly realizing I’m living in the 1890s, or in Portland.

Lastly, Steven Levy, of Hackers fame, writes of the rise of AI in sports reporting in The Rise of the Robot Reporter. As I learned from my Game AI class last quarter, there is a lot of active research in AI-generated narrative (stories). In the game world, this allows games like Skyrim to have unlimited quests and to be never-ending (story! sorry… couldn’t resist. Where are the actors in that movie now?!). The idea with the robo-reporter is that for sports stories, which are very data-centric, the AI would generate the post-game article. Once the AI is aware of the rules of the game, it would know which plays were pivotal and be able to detect the turning point of the game. The story would then be written before the teams even shake hands.

Narrative generation is not yet human-quality, so there is no near-term fear that robots will take over sports journalists’ jobs. However, it provides a great starting point for writing the article. What I find more interesting is its applicability to video games. Imagine an online game, I’m thinking a MMORPG type, where battles won and lost are documented by in-game newspapers, written by AIs. Did you just make the leaderboard? You can read a detailed article about it in the Daily Paper. This could even be provided as paid downloadable content. Everybody has a newspaper from the day they were born, but how about a copy of the paper from Skyrim on that day?

Now, if I could only find a way to get my hands on the new German WIRED. Maybe when I go to Germany in June I’ll have to hunt down a copy…

]]>https://datko.net/2012/03/08/now-ais-have-all-the-fun-they-play-and-create-the-game/
Fri, 09 Mar 2012 00:06:59 +0000Joshhttps://datko.net/2012/03/08/now-ais-have-all-the-fun-they-play-and-create-the-game/A new AI system called Angelina is extending procedural content generation to create an entire video game. As part of his PhD at Imperial College London, Michael Cook developed Angelina, which randomly creates the level design, the enemies, the enemy movements and combat tactics, and the power-ups.

Ok, not everything is generated right now. The music and graphics are human-made, but procedural generation techniques for music and graphics do exist. As the New Scientist article hints, what’s to stop an artist from using Angelina to push out a new game every 12 hours and post it to the App Store… A game generated from

Video Games beget Video Games via Wikipedia

Angelina is available online to play. It’s pretty impressive. It’s no Half-Life, but remember this was automatically generated! Now, if there was a video game that created video games, we’d have a practical example of a self-reproducing machine besides Conway’s Game of Life.

And then there is this video, by Quantic Dream, that primarily shows the improvements in near-human CG animation. It’s stunning visually, but it’s also a gripping vignette, showing the singularity moment when AIs become self-aware. When that happens, I think they will make more than scrolling 8-bit games!

Lastly, I found an interesting paper on Automatic Quest Generation. In this paper, Jonathon Doran and Ian Parberry survey 3000 quests from various online games like World of Warcraft and categorize the types of quests. They then go on to create a set of rules (a grammar, for the CS types reading) to produce quests procedurally. Those quests can get boring fast, and I’m not surprised to find that most have the form:

At some point while playing WoW (a few years ago…), I stopped reading the actual quest descriptions (i.e. the story) and just looked at the lists of tasks I had to accomplish. It was at that point that I also stopped finding the game fun and stopped playing. So if designers focus on a good main story, they can offload small side quests to the AI. After reading this paper and watching the associated video, I think I’m going to incorporate a subset of their grammar into my game project and combine it with some player modeling. I can’t give away too much to my potential test subjects; after all, there will be cake.
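The grammar idea can be sketched with a toy example. The rules and task strings below are invented for illustration and are not taken from Doran and Parberry’s actual grammar:

```python
import random

# Each non-terminal expands into one randomly chosen production; anything
# not in the grammar is a terminal, i.e. a concrete task for the player.
GRAMMAR = {
    "<quest>":  [["<goto>", "<kill>", "<report>"],
                 ["<goto>", "<fetch>", "<report>"]],
    "<goto>":   [["travel to the dark forest"], ["travel to the old mine"]],
    "<kill>":   [["slay 5 wolves"], ["defeat the bandit leader"]],
    "<fetch>":  [["collect 3 herbs"], ["recover the lost amulet"]],
    "<report>": [["return to the quest giver"]],
}

def expand(symbol, rng):
    if symbol not in GRAMMAR:
        return [symbol]                       # terminal: emit the task itself
    steps = []
    for part in rng.choice(GRAMMAR[symbol]):  # pick one production at random
        steps.extend(expand(part, rng))
    return steps

quest = expand("<quest>", random.Random(0))
print(quest)
```

Every quest produced this way is a goto/objective/report triple, which is exactly why grammar-generated quests can feel repetitive unless the grammar is deep enough, or is combined with player modeling as suggested above.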