complex. In fact, the rate at which complexity is growing in these simulations often outpaces gains in computing power. While this level of complexity is necessary for many applications, we contend that a lower resolution approach to entity-level simulation is also necessary, providing a more robust modeling, simulation, and analysis toolkit. Our rationale is that analysis for concept exploration and studies often involves examining a very large parameter space. Time constraints frequently limit the number of high-resolution simulation runs that can be completed and, in turn, the number of parameters and parameter settings that can be investigated. Low-resolution screening tools can help identify parameters and parameter settings of interest. Dynamic Allocation of Fires and Sensors (DAFS), a low-resolution, constructive entity-level simulation framework designed for combat analysis at the brigade level and below, is one such tool. Because of its low-resolution approach, DAFS runs fast and is relatively easy to set up. In addition, DAFS is designed to use selected results of high-resolution models as input, enabling the analyst to trace DAFS inputs back to accepted data and models.

I. INTRODUCTION

Today’s entity-level force-on-force combat simulations are quite complex. Requirements for analyses of the complex components of the Army’s Future Force are driving these simulations to ever-increasing levels of complexity. For example, CASTFOREM (Combined Arms and Support Task Force Evaluation Model) has been the Army’s standard constructive (non-human-in-the-loop) simulation for brigade and below analyses since the mid-1980s. As originally designed, CASTFOREM was intended to model no more than 60 minutes of combat [Mackey 2001], but by 2000, modeled combat time had grown to 33 hours, and has since increased even further to 44 hours. This growth has been accompanied by a three-fold increase in the number of entities represented (which results in an eleven-fold increase in the number of possible shooter-target pairs to be adjudicated); a six-fold increase in the number of decision tables, which control the actions of battlefield entities; a 52-fold increase in the amount of output data; and a 24-fold increase in the amount of virtual memory required. All of these factors, coupled with algorithmic additions to CASTFOREM (e.g., fusion algorithms, urban target acquisition algorithms), contribute to the overall complexity of the model. Using computer run time as a reasonable surrogate for overall model complexity, we see that the time to complete one replication increased from 4.5 hours.

This is not intended to be a criticism of CASTFOREM or other high-resolution models; indeed, this dramatic increase in model and scenario complexity resulted because there was (and continues to be) demand for greater and greater fidelity and detail in our analytical simulations. Modeling many hours of joint, network-enabled operations in a system-of-systems framework is orders of magnitude more complex than modeling an hour of combined arms operations. In response to the question of how the analyst ought to use the exponential gains in computing power predicted by Moore’s Law, Lucas [2003] suggests two alternatives: examine more cases with the same model, or examine a similar number of cases with increased model resolution. Figure 1 suggests the existence of a third alternative: examine fewer cases than before with greatly increased model resolution. The fact that the 30-fold increase in model complexity far outpaced the three-fold increase in computing power (as measured by processor speed) makes it evident that this was the alternative chosen.

TRADOC Analysis Center (TRAC) leaders did not choose this alternative blindly. The fact that such a choice was made highlights one of many tradeoffs faced by analysts and the decision-makers they support: the tradeoff between simulation complexity and the number of cases (and replications for each case) that can be examined. When time is constrained, one is gained at the expense of the other.

The Precision Munitions Mix Analysis (PMMA), a current TRADOC study, is a good example of the effects of this tradeoff. Restricted to scenarios of sufficient complexity to effectively portray the depth and breadth of factors necessary for comparing the effects of various mixes of Army and Joint precision munitions, the study leaders were limited to fewer high-resolution simulation runs (in CASTFOREM) than desired. To their credit, though, the study leaders included a screening analysis using a linear programming approach to narrow down the set of factors to be examined in CASTFOREM [TRAC 2004].

III. SOME STATISTICAL REMEDIES


Advanced statistical methodologies have been applied to ameliorate the tradeoff between simulation complexity and the number of times a simulation can be executed in a given time period. The two quantities that determine the number of simulation executions are the number of cases to be examined and the number of replications to be executed for each case. An approach that reduces the number of cases that need to be examined involves more efficient experimental designs. The most robust experimental design would be a full factorial design, which considers every possible combination of levels and factors. A full factorial design for an experiment with four factors, each having two levels, would require 2^4 = 16 simulation runs. In the case of the PMMA discussed in the previous paragraph, there are 30 munitions being considered, each having two “levels” (in the mix or not in the mix). A full factorial experiment in this case would require 2^30 (more than one billion) runs per warfighting scenario used in the study, which is quite unrealistic for a simulation that takes 60 hours per replication! The problem is compounded further in the case where multiple levels (representing quantities of munitions in the mix) are desired in the experiment. Recent research into nearly orthogonal Latin hypercube designs [Cioppa 2003] has demonstrated that much more efficient designs can achieve nearly the same granularity as a full factorial design with far fewer runs (in some cases, as few as the number of factors plus one).
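The contrast in run counts, and the stratification idea behind a Latin hypercube, can be sketched in a few lines of Python. This is only an illustration: the design generated here is an ordinary random Latin hypercube, not the nearly orthogonal construction of Cioppa [2003], and the run counts are those quoted in the text.

```python
import random

def full_factorial_size(n_factors, n_levels=2):
    """Number of runs a full factorial design requires."""
    return n_levels ** n_factors

def latin_hypercube(n_factors, n_runs, rng=None):
    """A basic random Latin hypercube design: each factor's range [0, 1)
    is divided into n_runs strata, and each stratum is sampled exactly
    once per factor."""
    rng = rng or random.Random(0)
    columns = []
    for _ in range(n_factors):
        column = [(i + rng.random()) / n_runs for i in range(n_runs)]
        rng.shuffle(column)
        columns.append(column)
    return list(zip(*columns))  # one row (design point) per run

print(full_factorial_size(4))    # 16: four two-level factors
print(full_factorial_size(30))   # 1073741824: the 30-munition PMMA case
design = latin_hypercube(n_factors=30, n_runs=31)  # "factors plus one" runs
print(len(design))               # 31
```

The point of the sketch is the scaling: a full factorial over 30 two-level factors needs over a billion runs, while a Latin hypercube covering every stratum of every factor needs only as many runs as it has strata.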

Replications of a particular case are all run with the same levels for each factor and are performed in sufficient quantity to ensure statistically meaningful results. Of course, multiple replications per case are only necessary for stochastic simulations. TRAC normally attempts to achieve 21 replications per CASTFOREM run, but recent research has demonstrated the utility of as few as five replications when a technique known as bootstrapping is applied to the results of each replication [TRAC-Monterey 2004]. Bootstrapping calls for the resampling, with replacement, of elements from a small set of replication results in order to estimate the distribution of the statistic of interest. Even with these savings, this yields a requirement of 175 total replications for the study. If we are using CASTFOREM at 60 hours per replication, this equates to 1.2 years of simulation time, along with the associated effort required to analyze each of the 175 replications.
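The resampling logic, and the time arithmetic above, can be sketched as follows. The five measure-of-effectiveness values are hypothetical placeholders; only the technique is the point.

```python
import random
import statistics

def bootstrap_means(observations, n_resamples=1000, rng=None):
    """Resample, with replacement, from a small set of replication
    results to estimate the sampling distribution of the mean."""
    rng = rng or random.Random(42)
    means = []
    for _ in range(n_resamples):
        resample = [rng.choice(observations) for _ in observations]
        means.append(statistics.mean(resample))
    return means

# Hypothetical measure-of-effectiveness values from five replications
results = [0.62, 0.58, 0.71, 0.65, 0.60]
means = sorted(bootstrap_means(results))
low, high = means[25], means[-26]  # approximate 95% percentile interval

# The time cost quoted in the text: 175 replications at 60 hours each
years = 175 * 60 / (24 * 365)
print(round(years, 1))  # 1.2 years of simulation time
```

The 1,000 resampled means stand in for the distribution one would otherwise need many more replications to observe directly.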

IV. A LOW RESOLUTION ENTITY LEVEL MODELING APPROACH

We contend that a low-resolution approach to entity-level simulation can complement, but not replace, existing high-resolution simulations. Low-resolution modeling approaches are part of the rich tradition of combat modeling and have been featured in aggregate and entity-level models for decades. One ubiquitous example in entity-level simulation is the use of a probability of hit based on weapon type, range, and amount of target presented, coupled with a probability of kill based on munitions type, target, aspect angle, etc. This approach is widely accepted because it is based on authoritative data derived from field testing and engineering-level models. Furthermore, this approach is still necessary to obtain reasonable run times in high-resolution entity-level combat simulations.
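The two-stage hit-then-kill adjudication just described can be sketched as follows. The weapon, munition, and target names and all probability values are hypothetical placeholders; real tables are derived from field testing and engineering-level models and are indexed by more factors (aspect angle, exposure, etc.).

```python
import random

# Hypothetical lookup tables standing in for authoritative Ph/Pk data
P_HIT = {("tank_gun", "short"): 0.90, ("tank_gun", "long"): 0.55}
P_KILL = {("sabot", "tank"): 0.70, ("sabot", "ifv"): 0.85}

def adjudicate_shot(weapon, range_band, munition, target, rng):
    """Two-stage probabilistic adjudication: first the hit,
    then, given a hit, the kill."""
    if rng.random() >= P_HIT[(weapon, range_band)]:
        return "miss"
    if rng.random() >= P_KILL[(munition, target)]:
        return "hit, no kill"
    return "kill"

rng = random.Random(1)
outcomes = [adjudicate_shot("tank_gun", "long", "sabot", "tank", rng)
            for _ in range(10_000)]
# Over many shots the kill fraction approaches Ph * Pk = 0.55 * 0.70 = 0.385
print(outcomes.count("kill") / len(outcomes))
```

A single table lookup and two random draws replace an engineering-level trajectory and vulnerability calculation, which is what keeps this approach fast.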

Similar probabilistic approaches have been used for other computationally intensive aspects of military operations that are explicitly represented in high-resolution combat models like CASTFOREM today. One example is line of sight. Explicit representation of high-resolution terrain details necessitates computationally expensive line of sight calculations. In a low-resolution modeling approach we may substitute a probability of line of sight for these calculations. Data to support this approach are derived from experimentation using high-resolution models. If explicit representation of line of sight is necessary even in the low-resolution simulation to address the issues at hand, then a lower resolution terrain representation would still allow for a less computationally expensive line of sight calculation. This highlights a critical advantage of a low-resolution simulation approach: the ability to select the appropriate level of resolution necessary for the issues under analysis.


Low-resolution entity-level models can be implemented rapidly and constructed to focus directly on the analysis questions of interest. Extensive experience over the past six years at the Naval Postgraduate School has demonstrated that graduate students can quickly produce useful low-resolution entity-level combat simulations to investigate a wide variety of issues of interest to combat analysts. These models attempt to represent the salient features of combat necessary to study the phenomenon of interest. Large, high-resolution combat models are, by their very nature, general purpose tools. Recognizing that the validity of student models suffered from a lack of authoritative data, TRAC has labored recently to ensure that students conducting sponsored thesis research for the Army use algorithms and data taken directly from current combat models or derived directly from those models.

V. AGENT-BASED MODELS: ONE LOW RESOLUTION APPROACH

Agent-based models, a class of low-resolution applications, are simulations wherein each entity, or “agent,” behaves autonomously, making decisions based on information gathered by organic sensors or received via communications links. Originally intended to model complex adaptive systems, agent-based models (Brown, et al.) tend to represent scenarios with far less complexity than a typical high-resolution simulation and lend themselves to rapid construction of new scenarios, making it possible to run many more replications than would be possible using a high-resolution simulation. This enables the analyst to adequately explore the entire factor space, making it possible to then select interesting sub-spaces for deeper exploration with the limited high-resolution simulation runs available. In this way, we see the low-resolution simulation complementing the high-resolution simulation, together forming a more robust analysis approach.
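The agent behavior described above can be sketched minimally: an agent fuses contacts seen by its own sensor with reports received over a communications link before deciding how to act. The class, names, and ranges here are illustrative only and are not drawn from any particular agent-based model.

```python
class Agent:
    """A minimal autonomous agent: each step it fuses organic-sensor
    contacts with reports received over a comms link, then decides."""

    def __init__(self, name, sensor_range):
        self.name = name
        self.sensor_range = sensor_range
        self.known_contacts = set()

    def sense(self, contacts):
        # Organic sensor: only contacts within sensor range are seen.
        # contacts maps contact name -> range to that contact.
        return {c for c, r in contacts.items() if r <= self.sensor_range}

    def receive(self, reported):
        # Communications link: merge contact reports from other agents.
        self.known_contacts |= reported

    def decide(self, contacts):
        self.known_contacts |= self.sense(contacts)
        return "engage" if self.known_contacts else "search"

scout = Agent("scout", sensor_range=2.0)
contacts = {"enemy_1": 5.0}        # beyond the organic sensor's range
print(scout.decide(contacts))      # search
scout.receive({"enemy_1"})         # reported over the network
print(scout.decide(contacts))      # engage
```

Even this toy shows why such models run fast and set up quickly: all behavior lives in a handful of simple, local rules per agent.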

Agent-based models are not without their critics. While any new technology will attract doubters, serious criticism seems to center on two factors: 1) the “black box-like” penalty functions that determine the outcomes of decisions, and 2) the inability of the analyst to trace results back to certified system performance data. Both of these speak to the model’s validity. Furthermore, most agent-based models are oriented towards exploring the effects of human factors, like leadership and morale, on combat outcomes. This limits their utility for exploring many issues of interest in a typical study.

VI.DYNAMIC ALLOCATION OF FIRES AND SENSORS (DAFS)

DAFS is a low-resolution, constructive entity-level simulation framework beingdeveloped by a partnership between TRAC-Monterey and the MOVES Institute of theNaval Postgraduate School. Likeagent-based models, DAFS is designed to enable rapidscenario construction and be fast running, and is likewise suited for use as a screeningtool in support of high-resolution simulations. This capability was demonstrated recentlywhen a prototype version of DAFS was used to support a non-line-of-sight (NLOS)weapons mix study conducted by the Depth and Simultaneous Attack

Battle Lab at FortSill. The results produced by DAFS proved beneficial to informing decisions onappropriate ratios and quantities of mortars, NLOS cannons, and NLOS Launch Systemsfor the Future Force Unit of Action (UA).

DAFS’ major component algorithms are all relatively inexpensive computationally. For example, terrain is not represented explicitly, but its effects, in terms of line of sight, are represented probabilistically. Instead of the traditional, computationally expensive line of sight (LOS) algorithms found in most high-resolution simulations, DAFS adjudicates LOS between entities using an experimentally derived probability that LOS exists, or PLOS. The actual LOS adjudication is a random draw based on this probability, saving much computational time.

Target acquisition is also modeled probabilistically in DAFS, currently with two possible levels of resolution. At the lowest level of resolution, sensors are modeled as “cookie cutters” with probability of detection equal to one within a given radius. A somewhat higher resolution algorithm models detections as occurring according to a gamma distribution with experimentally derived parameters. In both cases, the data required by the algorithm, be it a detection radius or a detection rate, are obtained through direct manipulation of certified data (as in the first case) or through experimentation in high-resolution simulation (as in the second case).
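The two resolution levels for target acquisition can be sketched as follows. The detection radius and the gamma shape and scale values are hypothetical placeholders, since the real parameters are taken from certified data or derived experimentally.

```python
import random

def cookie_cutter_detect(range_to_target, detection_radius):
    """Lowest-resolution sensor: detection is certain inside the
    radius and impossible outside it."""
    return range_to_target <= detection_radius

def time_to_detect(rng, shape, scale):
    """Higher-resolution option: draw a time until detection from a
    gamma distribution with experimentally derived parameters."""
    return rng.gammavariate(shape, scale)

print(cookie_cutter_detect(3.0, 5.0))   # True: inside the radius
print(cookie_cutter_detect(8.0, 5.0))   # False: outside the radius

rng = random.Random(3)
# Hypothetical parameters: mean time to detect = shape * scale = 30 s
times = [time_to_detect(rng, shape=2.0, scale=15.0) for _ in range(50_000)]
print(sum(times) / len(times))          # close to 30 seconds
```

Both options reduce target acquisition to a cheap draw or comparison, consistent with DAFS’ goal of selecting only as much resolution as the analysis requires.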

Finally, munitions effects are modeled with a commonly accepted probability ofkill (Pk) approach, though at a lower level of resolution than one would find in, say,CASTFOREM. For example, factors like weather, obscurants, aspect angle, and damagelevels (mobility kill, firepower kill, etc.) are not modeled currently in DAFS.

Like agent-based models, DAFS is well suited for conducting screening analysis in support of high-resolution simulations. The PMMA study, for example, is an ideal application for a screening tool like DAFS. Because it enables quick setup and running of scenarios, DAFS could have been used in Phase One of the study (instead of the linear programming approach chosen) to narrow down the field of candidate munitions to a small enough set to enable further, more detailed analysis using a high-resolution simulation. Unlike agent-based models, though, all data to support the algorithms in DAFS are either provided directly from certified sources or derived experimentally using high-resolution simulations, which themselves use data from certified sources. This provides the ability to trace results directly back to inputs, something not possible with agent-based models.

VII. SUMMARY

In this paper we have identified the analytical challenges associated with analyzing a large parameter space using high-resolution entity-level combat simulations. We reviewed statistical techniques to improve experimental designs and reduce the number of simulation runs required to explore the space, and concluded that these techniques are necessary but not sufficient to address the problem. We recounted how lower resolution models, including agent-based models, have been used to explore a large parameter space and thereby focus the high-resolution model on parameters and settings of particular interest. We also noted that many agent-based models have limited validity and focus on human factors in combat, limiting their utility as a screening tool for many analytical questions. We proposed and outlined the use of low-resolution entity-level combat models as companions to high-resolution entity-level simulations. Central to this approach is modeling in the low-resolution simulation the salient aspects of combat required to answer specific analytical questions of interest, and deriving scenarios, algorithms, and data from the high-resolution model and other authoritative sources. Finally, we provided an example of the successful application of this approach with DAFS.