Standard Track
Eight bots were used for the standard track: the five
preexisting bots (RandomBiased, POWorkerRush,
POLightRush, NaïveMCTS, PuppetSearch) and all
three competition entries (StrategyTactics, SCV,
BS3NaïveMCTS).

Each round-robin tournament consisted of 8 × 7 = 56 games (since we discarded self-play matches), and we performed five full round-robin tournaments on each of the eight maps, for a total of 5 × 8 × 56 = 2,240 games.
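The game count above can be sketched in a few lines; this is just an illustration of the arithmetic, with the bot, map, and repetition counts taken from the text.

```python
# Tournament game count for the standard track (numbers from the text).
n_bots = 8      # bots entered in the standard track
n_maps = 8      # four open maps plus four hidden maps
n_repeats = 5   # full round-robin tournaments per map

# Each ordered pair of distinct bots plays once per round robin;
# self-play matches are discarded, leaving n * (n - 1) games.
games_per_round_robin = n_bots * (n_bots - 1)   # 8 * 7 = 56

total_games = n_repeats * n_maps * games_per_round_robin

print(games_per_round_robin)  # 56
print(total_games)            # 2240
```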

Figure 4 shows the win ratios achieved by each of
the bots in this track, organized by type of map. The
bot that achieved the highest win ratio over all maps
was StrategyTactics, with a win ratio of 0.682. The
second-best bot in this scenario was POLightRush,
one of the preexisting bots, with a win ratio of 0.672.
Figure 4 also shows the win ratio averaged over the
open and the hidden maps, and, as can be seen, the
win ratios of the different bots vary widely between
these two sets of maps.

We can draw a few interesting observations from
these results. First, hard-coded bots perform very well
in maps that capture standard situations (for example, where the game starts with a base and some
workers, and the strategy hard-coded into the bots
applies). This can be seen by the great performance of
POWorkerRush and POLightRush in the open maps.
These hard-coded bots perform poorly, however, in
situations where their scripts do not apply, as can be
seen by their lower performance on the hidden maps. This was particularly the case on map7.

Conversely, game tree search excels precisely in nonstandard situations, where the lack of appropriateness of the hard-coded bots can be exploited. This can be seen by the performance of NaïveMCTS, BS3NaïveMCTS, and StrategyTactics on the hidden maps. However, some game tree search bots (NaïveMCTS and BS3NaïveMCTS) struggle in larger maps like map3 and map4, likely because they are unable to search deep enough for the size of the map.

StrategyTactics achieved the highest overall win ratio, since it was robust enough to perform well on most maps (except map7), whereas other bots performed less consistently. StrategyTactics's high win ratio was due to its mix of high-level and low-level search.

SCV performed extremely well on standard maps, especially map3 and map4, likely because it was trained on these map types. However, it struggled on nonstandard maps, which hurt its overall win ratio.

Interestingly, although StrategyTactics is an integration of PuppetSearch and NaïveMCTS, SCV performed better than PuppetSearch on the open maps,
where high-level search is more important. This
result indicates that a StrategyTactics-like bot that
integrates SCV with NaïveMCTS could potentially
outperform this year’s competition bots. Additionally, SCV used very little computation time and would thus lend itself well to such integration.

Another factor that plays an important role in the
performance of the bots is the size of the map. Figure
5 shows the win ratio of the different bots in this
track plotted as a function of map size. We can see
clearly that the performance of the game tree search bots (NaïveMCTS and BS3NaïveMCTS) decreases as the map size grows.

Figure 4. Win Ratios of the Bots in the Standard Track, by Map Type. [Bar chart: win ratio (0 to 1) for each bot (RandomBiased, POWorkerRush, POLightRush, NaiveMCTS, PuppetSearch, StrategyTactics, SCV, BS3NaiveMCTS) over all maps, hidden maps, and open maps.]