board concept for indirect communication between
their modules, where each module publishes its current beliefs for the others to read. Next, they allow
for goals and plans generated by their planning and
reasoning modules to be inserted into their central
reactive planning system, to be pursued in parallel
with current goals and plans. Finally, they suggest a
method for altered behavior activation, so that modules can modify the preconditions for defined behaviors, allowing them to activate and deactivate behaviors based on the situation.
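The blackboard mechanism described above can be sketched minimally as follows. This is an illustrative assumption of how such a store might look, not the actual implementation of the system discussed; the class, module names, and belief keys are all hypothetical.

```python
# Minimal blackboard sketch for indirect inter-module communication.
# All names here are illustrative, not taken from the system above.

class Blackboard:
    """Shared store where each module publishes beliefs for others to read."""

    def __init__(self):
        self._beliefs = {}  # module name -> that module's published beliefs

    def publish(self, module, beliefs):
        # Each module owns its own entry, so writers never conflict.
        self._beliefs[module] = dict(beliefs)

    def read(self, module):
        # Readers get a copy, so they cannot mutate another module's beliefs.
        return dict(self._beliefs.get(module, {}))


board = Blackboard()
board.publish("scouting", {"enemy_base_found": True, "enemy_army_size": 12})
# A strategy module can now read the scout's beliefs without any direct
# coupling between the two modules:
scout_view = board.read("scouting")
```

The key design property is decoupling: modules never call each other, they only read and write the shared store, which is what allows planning and reactive modules to run in parallel.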

A simpler approach may be effective for at least
some parts of an RTS bot. Synnaeve and Bessière
(2011b) use a higher-level tactical command, such as
scout, hold position, flock, or fight, as one of the
inputs to their micromanagement controller. Similarly, Churchill and Buro (2012) use a hierarchical
structure for unit control, with an overall game commander — the module that knows about the high-level game state and makes strategic decisions — giving commands to a macro commander and a combat
commander, each of which gives commands to its
subcommanders. Commanders further down the
hierarchy are increasingly focused on a particular
task but have less information about the overall
game state, and must therefore rely on their parents
to make them act appropriately in the bigger picture.
This is relatively effective because the control of units
is more hierarchically arranged than other aspects of
an RTS. Such a system allows the low-level controllers to incorporate information from their parent
in the hierarchy, but they are unable to react and
coordinate with other low-level controllers directly
in order to perform cooperative actions (Synnaeve
and Bessière 2011b). Most papers on StarCraft AI skirt
this issue by focusing on one aspect of the AI only, as
can be seen in how this review paper is divided into
tactical and strategic decision making sections.
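The commander hierarchy described above can be sketched in a few lines. The class and method names below are assumptions for illustration, not UAlbertaBot's actual API; a real bot would refine each command into narrower orders at every level.

```python
# Hedged sketch of a hierarchical commander structure: commands flow
# strictly downwards, and each level handles an increasingly narrow task.
# Names are illustrative, not taken from any actual bot.

class Commander:
    def __init__(self, name, subordinates=()):
        self.name = name
        self.subordinates = list(subordinates)

    def refine(self, command):
        # Placeholder: pass the command through unchanged. A real commander
        # would translate a strategic goal into more specific orders here.
        return command

    def issue(self, command, log):
        # Record this commander's handling, then delegate downwards.
        log.append((self.name, command))
        for sub in self.subordinates:
            sub.issue(self.refine(command), log)


squad = Commander("squad")
combat = Commander("combat", [squad])
macro = Commander("macro")
game = Commander("game", [macro, combat])

log = []
game.issue("attack enemy base", log)
```

Note that the squad commander only ever hears from its parent; it has no channel to the macro branch, which is exactly the coordination limitation noted above.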

Cooperation
Cooperation is an essential ability in many situations, but RTS games present a particularly complex environment in which the rules and overall goal are fixed, and there is a limited ability to communicate with your cooperative partner(s). It would also be very helpful in commercial games, as good cooperative players could be used for coaching or team games. In team games humans often team up to help each other with coordinated actions throughout the game, like attacking and defending, even without actively communicating. Conversely, AI players in most RTS games (including StarCraft) act seemingly independently of their teammates.

Despite collaboration being highlighted as a challenging AI research problem in Buro (2003), to the authors' knowledge just one research publication focusing on collaborative behavior exists in the domain of StarCraft (and RTS games in general). Magnusson and Balsasubramaniyan (2012) modified an existing StarCraft bot to allow both communication of the bot's intentions and in-game human control of the bot's behavior. It was tested in a small experiment in which a player is allied with the bot, with or without the communication and control elements, against two other bots. The players rated the communicating bots as more fun to play with than the noncommunicating bots, and more experienced players preferred to be able to control the bot, while novice players preferred a noncontrollable bot. Much more research is required to investigate collaboration between humans and bots, as well as collaboration between bots only.

A possible starting direction for this research could be to examine techniques developed for opponent modeling and reuse them for modeling an ally, thus giving insight into how the player should act in order to coordinate with that ally. Alternatively, approaches to teamwork and coordination used in other domains could be adapted to RTS games.

Standardized Evaluation
Although games are inherently suited to evaluating player effectiveness and measuring performance, it is difficult to make fair
comparisons between the results reported in most of
the StarCraft AI literature.

Almost every paper has a different method for evaluating its results, and many of these experiments are
of poor quality. Evaluation is further complicated by
the diversity of applications, as many of the systems
developed are not suited to playing entire games of
StarCraft but instead address a specific subproblem.
Such a research community, made up of isolated
studies that are not mutually comparable, was recognized as problematic by Aha and Molineaux (2004).
Their Testbed for Integrating and Evaluating Learning Techniques (TIELT), which aimed to standardize
the learning environment for evaluation, attempted
to address the problem but unfortunately never
became very widely used.

Partial systems — those that are unable to play a
full game of StarCraft — are often evaluated using a
custom metric, which makes comparison between
such systems nearly impossible. A potential solution
for this would be to select a common set of parts that
could plug in to partial systems and allow them to
function as a complete system for testing. This may
be possible by compartmentalizing parts of an open-source AI used in a StarCraft AI competition, such as
UAlbertaBot (Churchill and Buro 2012), which is
designed to be modular, or using an add-on library
such as the BWAPI Standard Add-on Library
(BWSAL). Alternatively, a set of common tests could
be made for partial systems to be run against. Such
tests could examine common subproblems of an AI
system, such as tactical decision making, planning,