Security Games: Using AI to
Address Real-World Problems
In recent years, the field of security games, a subfield
of AI, has drawn increasing attention from outside
the AI community (Tambe 2011). In particular, game
theory–based decision aids have been successfully
deployed to protect critical infrastructure such as airports and ports (for example, Pita et al. [2008]), making real-world impacts and resulting in fundamental
changes to security operations for various organizations. Security problems continue to evolve worldwide, creating new research challenges and practical
applications for security games. In wildlife
protection specifically, poaching represents the second-largest threat to biodiversity after habitat
destruction. This threat led to the development of green
security games, a subfield of security games focused
on protecting forests (Johnson, Fang, and Tambe
2012), fisheries (Haskell et al. 2014) and wildlife
(Fang, Stone, and Tambe 2015; Kar et al. 2016).
Although park rangers conduct patrols to combat
poaching, security resources are often limited in vast
conservation areas. Manually generating patrol
schedules can require considerable effort from
wildlife security staff, and such manual plans can be
predictable, allowing poachers to exploit patrol
schedules. Our security game–based solutions combine different AI subfields — including game theory,
optimization, and machine learning — to help
rangers automatically generate randomized patrol
strategies that account for models of poachers’
behaviors.

As a subfield of computational game theory, security games (Tambe 2011) model the strategic interaction between two players: a defender and an adversary. Security games take into account: (1) differences
in the importance of targets; (2) the attacker's
(for example, poacher's) behavioral responses to the security posture; and (3) potential uncertainty over the
types, capabilities, knowledge, and priorities of
attackers. This problem can be cast as a game. As a
brief example, a security game in the wildlife domain
involves the following: the ranger allocates security
resources (that is, ranger patrol teams) to protect a set
of critical targets of varying importance (figure 1).

Higher-value targets may be portions of a protected area with higher biodiversity, larger numbers of animals, and/or protected species. The ranger deploys a mixed strategy, which optimizes over all possible configurations of allocating patrols across these targets, and is represented as a vector of probabilities of covering any given target. The poacher conducts surveillance on the ranger's strategy before selecting a target to attack, with the goal of maximizing payoff for any given defender strategy. The players' actions lead to different payoff values, and the defender's performance is evaluated by her or his expected utility. The defender's goal is to find the optimal strategy so as to maximize expected utility, knowing she or he faces an adaptive adversary who will respond to any deployed strategy.
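The interaction described above can be sketched in code. The following is a minimal toy example, not the deployed systems' formulation: the target names, payoff numbers, and the brute-force grid search are all illustrative assumptions standing in for the linear- and integer-programming solvers used in practice. The attacker observes the defender's coverage probabilities and best-responds; the defender searches for the coverage vector that maximizes her or his expected utility against that best response.

```python
# Toy Stackelberg security game: one ranger patrol, three targets.
# Target names and payoff values are illustrative, not from real deployments.
# Convention: "cov"/"unc" give each player's utility when the attacked
# target is covered / uncovered by a patrol.
targets = {
    "dense_forest": {"def_cov": 2, "def_unc": -5, "att_cov": -3, "att_unc": 5},
    "water_hole":   {"def_cov": 1, "def_unc": -3, "att_cov": -2, "att_unc": 3},
    "boundary":     {"def_cov": 1, "def_unc": -1, "att_cov": -1, "att_unc": 1},
}

def attacker_best_response(coverage):
    """Attacker observes the coverage probabilities and attacks the target
    maximizing attacker expected utility (ties broken in the defender's
    favor, per the standard strong Stackelberg equilibrium assumption).
    Returns the attacked target and the defender's expected utility."""
    def att_eu(t):
        c = coverage[t]
        return c * targets[t]["att_cov"] + (1 - c) * targets[t]["att_unc"]
    def def_eu(t):
        c = coverage[t]
        return c * targets[t]["def_cov"] + (1 - c) * targets[t]["def_unc"]
    best = max(targets, key=lambda t: (att_eu(t), def_eu(t)))
    return best, def_eu(best)

def best_coverage(step=0.01, resources=1.0):
    """Grid search over coverage vectors summing to `resources` -- a
    brute-force stand-in for the LP/MILP formulations used in practice."""
    best_c, best_u = None, float("-inf")
    n = round(resources / step)
    for i in range(n + 1):
        for j in range(n + 1 - i):
            c = {"dense_forest": i * step,
                 "water_hole": j * step,
                 "boundary": (n - i - j) * step}
            _, u = attacker_best_response(c)
            if u > best_u:
                best_c, best_u = c, u
    return best_c, best_u
```

Against these toy payoffs, the optimized coverage concentrates patrols on the high-value targets and yields a strictly better defender expected utility than spreading patrols uniformly, which is the core argument for game-theoretic randomization over ad hoc schedules.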

Prior Work: Teaching with
Projects, Problems, and Scaffolds
The potential applications of this work to various
contexts have created the need to introduce the AI
concepts underlying security games to individuals
with limited AI backgrounds. These include not only
students, but also audiences outside of traditional
classroom settings. Given recent advances in green
security games in particular, helping decision makers
and those who may consider using AI-based decision
aids in the field to understand the underlying theoretical framework can aid in fostering adoption of
these emerging technologies.

Teaching security games and related concepts such
as probability, optimization, and agent-based modeling to those with limited AI backgrounds can be challenging. In traditional classroom settings, AI concepts have been made accessible to undergraduate
students who enter with limited AI backgrounds
(Stern and Sterling 1996; Parsons and Sklar 2004;
Wollowski 2014). One method that has been effective in teaching AI in classrooms is the use of games.
For instance, games have been used to teach robotics
(Wong, Zink, and Koenig 2010), Pac-Man has been
used as a tool to teach various AI concepts (DeNero
and Klein 2010), and in a game called CyberCIEGE,
players build a virtual world while learning about AI
issues involved in cyber security (Cone et al. 2007).
However, no prior work describes effective methods
for teaching AI to audiences beyond the classroom.
Similarly, little evidence speaks to approaches for
framing such games to teach and foster interest in AI.

We explored the possibility of using real-world
problems to frame AI instruction. This approach is
similar to project-based learning, an educational
framework that aims to increase motivation for learning by engaging students in investigation (Blumenfeld et al. 1991). Specifically, project-based learning
involves presenting a problem that guides activities,
and such activities culminate in a final product to
answer the initial question. A meta-analysis of project-based learning studies conducted in real-world
classrooms found that such an approach has a
positive effect on the application of general science
knowledge; although no immediate main effect
on declarative knowledge (of underlying concepts
and facts) was found, that knowledge increased over time (Dochy et
al. 2003). Similar approaches have been applied to
engineering curricula at the college level at several
higher education institutions, and although no systematic evaluation results could be identified, qualitative feedback indicated that students
viewed the approach positively (Mills and Treagust
2003). Particularly relevant to AI, Gini et al. (1997)
used a variety of robotics projects to teach robotics
and other AI concepts at the college level; however,