Daily Archives: 30/07/2018

A Game Lab session at the recent Connections US wargaming conference examined the different methods of adjudicating the outcomes of arguments put forward in matrix games, with an eye to which methods might be preferred in different circumstances.

The current guidance for assessing arguments in Matrix Games, contained in the MaGCK User Guide[1] is as follows:

Consensus. Some players prefer to reach agreement on the most likely outcome of the declared ACTION. This can work well in highly cooperative games but can be more difficult to implement in cases where actors have conflicting or opposing goals.

Umpired. Once PROs and CONs have been identified it might be left up to an umpire (or White Cell or Control group) to determine what happens. This has the advantage that the game outcomes can be aligned with research or doctrine, or nudged along a path that maximizes their educational value. It can also be useful when the players themselves have only limited knowledge of the game subject matter. However, having a third party determine success and failure can make the game seem rather scripted. If players attribute the outcome of the game to obtuse or heavy-handed umpiring rather than to their own decisions and interactions with their fellow participants, much of its value may be lost.

Weighted Probabilities. This system of adjudication places a great deal of emphasis on the arguments put forward by the players, while introducing an element of chance. It is slightly more complicated than the previous systems. There is also a risk that some professional audiences may recoil at the sight of dice—associating these more with children’s games than serious conflict simulation and gaming[2]. In this system two six-sided dice are rolled, with each strong and credible PRO argument counting as a +1 modifier and each strong and credible CON as a -1; a total of 7 or more is required to succeed, with especially high or low results representing more extreme outcomes. This also provides a “narrative bias” to the game, as a target of 7 on 2d6 is actually a 58.3% chance of success, which helps the evolving story move forward.
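To illustrate the arithmetic, here is a short Python sketch of the 2d6 mechanism described above, including a check of the 58.3% figure for an unmodified argument (the function and variable names are my own, not part of MaGCK):

```python
import random

def adjudicate(pros, cons, rng=random):
    """Weighted Probabilities adjudication: roll 2d6, add +1 per strong
    PRO argument and -1 per strong CON; a total of 7 or more succeeds."""
    roll = rng.randint(1, 6) + rng.randint(1, 6)
    return roll + pros - cons >= 7

# Check the 58.3% claim: of the 36 equally likely 2d6 results,
# 21 total 7 or more, so an unmodified argument succeeds 21/36 of the time.
outcomes = [(a + b) >= 7 for a in range(1, 7) for b in range(1, 7)]
print(sum(outcomes) / len(outcomes))  # 0.5833...
```

Each net PRO shifts the effective target from 7 to 6 (a 72.2% chance), and each net CON shifts it to 8 (41.7%), so a single strong argument moves the odds by roughly 14 percentage points.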

Voting. The success and outcome of actions can be determined by a vote among participants. This can either be a straight majority vote, or the odds of an ACTION can be assessed by the distribution of votes. In the latter case, if 75% of participants think an action might succeed, then it has a 75% chance of success, and percentile dice or some other form of random number generation is used to determine this. Voting systems do risk players metagaming, however: that is, voting not based on their honest assessment of the ACTION and its chances of success, but rather to affect the probability assigned to it to advantage themselves within the game.
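The distribution-of-votes variant can be sketched in a few lines of Python (the function names are illustrative, not from the MaGCK rules):

```python
import random

def vote_probability(votes):
    """Distribution-of-votes method: the share of 'yes' votes
    becomes the ACTION's chance of success."""
    return sum(votes) / len(votes)

def resolve(p, rng=random):
    """Percentile-dice resolution: succeed if a 1-100 roll is <= p * 100."""
    return rng.randint(1, 100) <= round(p * 100)

votes = [True, True, True, False]  # 3 of 4 participants expect success
p = vote_probability(votes)
print(p)  # 0.75 -> succeeds on a percentile roll of 75 or less
```

The same `resolve` step works for any of the probability-based methods below; only the way the probability is derived changes.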

Mean/Median Probability. Alternatively, players or teams can each be asked to assess the chances of success, and these can be averaged. In analytical games, this provides potentially valuable insight into how participants rate the chances of a particular course of action. Although not included in MaGCK, there is an add-on set of estimative probability cards which can be used for this purpose. Following discussion, players or teams simply select the card from their hand that, in their view, best represents the probability of an ACTION’s success. These are then aggregated (whether by calculating the mathematical mean of all cards played, or by using the value of the median card), and percentile dice are used to determine success or failure.

Discussion

The most popular assessment method is Weighted Probabilities, reflecting the early widespread use of matrix games in the hobby community. As a method, it is inherently understood by anyone with any familiarity with games and is relatively easy to explain to those without. It is fast, and gives the adjudicator more licence in influencing the pace of the game, ensuring it doesn’t get bogged down in excessive debate.

The main concern from those present at the Game Lab session at Connections US 2018 was that this method, and all the alternatives above, failed to specifically address one of the academic underpinnings of matrix games: crowdsourcing[3] the results.

Based on Surowiecki’s popular book, there are a number of criteria required to form a “wise crowd”:

Diversity of opinion. Each person should have private information, even if it’s just an eccentric interpretation of the known facts.

Independence. People’s opinions aren’t determined by the opinions of those around them.

Decentralization. People are able to specialize and draw on local knowledge.

Aggregation. Some mechanism exists for turning private judgments into a collective decision.

As the MaGCK User Guide already covers crowdsourcing ideas from diverse participants[4], it was felt that the element of aggregation would be best served by the use of Estimative Probability cards[5]. These are available from the Game Crafter, but a set of print-and-play cards can be found here that have the same utility. It was generally felt that this was a more accurate way to leverage the work on crowdsourcing, as well as making the resulting probability more accessible and acceptable to the participants. The terms on the cards also reflect those commonly used in the intelligence community[6]. It also follows that the participants in the Estimative Probability method should include all those present, not just those playing specific roles in the matrix game.

Oinas-Kukkonen has made a number of conjectures based on Surowiecki’s work[7], asserting that “too much communication can make the group as a whole less intelligent.” This can be addressed by encouraging relatively quick moves and avoiding extended debate following a player’s argument. It also means the game can fit in a reasonable number of moves, requiring participants to live with the consequences of actions they made earlier in the game. I would suggest at least six moves, to allow for two cycles of Action-Reaction-Counter Action by the players. I would therefore recommend, at least for high-level policy and analytical games, that the Estimative Probability method be used in future.

The procedure should be, following the arguments, for all participants (each with their own deck of cards) to assess the probability of success independently and without discussion. They then reveal their cards simultaneously to the facilitator for adjudication. My preference would be to use the MODE or the MEDIAN of the results rather than the MEAN, as this is quicker and avoids lengthy arithmetic. Significant outliers can then be discussed quickly.
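A minimal sketch of this aggregation step, using Python's standard library. The card values here are illustrative probabilities chosen for the example; the actual MaGCK estimative probability terms and values may differ:

```python
import statistics

# Hypothetical cards revealed simultaneously by seven participants,
# expressed as probabilities of success (illustrative values only).
revealed = [0.30, 0.50, 0.50, 0.60, 0.60, 0.60, 0.90]

median_p = statistics.median(revealed)  # middle card of the sorted reveal
mode_p = statistics.mode(revealed)      # most frequently played card
print(median_p, mode_p)  # 0.6 0.6
```

Both statistics are robust to the 0.30 and 0.90 outliers, which can then be discussed briefly before a percentile roll against the chosen value resolves the ACTION; a mean would instead be pulled toward the extremes.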

It should be noted that, when using percentile dice to determine the final result with participants who are not gamers, it is usually best to be consistent in expressing exactly what the dice roll is for (the success of the argument) and what score is needed (e.g. “a 70% chance of success, which is a score on the dice between 1 and 70”). There is evidence that participants perceive “a 70% chance of success” differently to “a 30% chance of failure” despite their mathematical equivalence[8], so consistency in expression is advised.

[2] Edwards, Nicholas. 2014. What Considerations Exist in the Design of the Elements of Chance and Uncertainty in Wargames Utilised for Educational and Training Purposes? MA thesis, Department of War Studies. King’s College London.

[3] Surowiecki, James. 2004. The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations. Doubleday.