A primer on using Monte Carlo simulations to evaluate marksmanship

Adam Biggs (US Navy, La Mesa, California, USA)
Joseph Hamilton (Hamilton Strategic Solutions, Midlothian, VA, USA)

Journal of Defense Analytics and Logistics

ISSN: 2399-6439

Article publication date: 23 October 2023

Issue publication date: 16 November 2023


Abstract

Purpose

Evaluating warfighter lethality is a critical aspect of military performance. Raw metrics such as marksmanship speed and accuracy can provide some insight, yet interpreting subtle differences can be challenging. For example, is a speed difference of 300 milliseconds more important than a 10% accuracy difference on the same drill? Marksmanship evaluations must have objective methods to differentiate between critical factors while maintaining a holistic view of human performance.

Design/methodology/approach

Monte Carlo simulations are one method to circumvent speed/accuracy trade-offs within marksmanship evaluations. They can accommodate both speed and accuracy implications simultaneously without needing to hold one constant for the sake of the other. Moreover, Monte Carlo simulations can incorporate variability as a key element of performance. This approach thus allows analysts to determine consistency of performance expectations when projecting future outcomes.

Findings

The review divides outcomes into both theoretical overview and practical implication sections. Each aspect of the Monte Carlo simulation can be addressed separately, reviewed and then incorporated as a potential component of small arms combat modeling. This approach allows new human performance practitioners to adopt the method more quickly for different applications.

Originality/value

Performance implications are often presented as inferential statistics. By using Monte Carlo simulations, practitioners can present outcomes in terms of lethality. This method should convey the impact of any marksmanship evaluation to senior leadership better than current inferential statistics, such as effect size measures.

Citation

Biggs, A. and Hamilton, J. (2023), "A primer on using Monte Carlo simulations to evaluate marksmanship", Journal of Defense Analytics and Logistics, Vol. 7 No. 2, pp. 138-155. https://doi.org/10.1108/JDAL-10-2022-0008

Publisher

Emerald Publishing Limited

Copyright © 2022. In accordance with section 105 of the US Copyright Act, this work has been produced by a US government employee and shall be considered a public domain work, as copyright protection is not available.


1. Introduction

Data is often a focal point in military discussions given the robust infrastructure and enormous yearly fiscal investments. Nowhere is this debate more important—or more ambiguous—than when evaluating warfighter performance. For example, are warfighter performance and military performance distinct concepts, or parts of the same construct? Also, what data should be used to evaluate performance? These questions ostensibly require utilizing data to inform decision-making, which led to foundational studies in the field of operations research (Morse and Kimball, 1951). At its core, operations research and related fields apply mathematical models to extract more actionable interpretations from prior observations (Strickland, 2011). For example, vulnerability and lethality analyses are critical issues for a combat force that could be influenced by everything from human performance to systems engineering (Kincheloe et al., 2009). There are so many variables involved in warfighter performance and so much available data that this particular operations research application has received enormous attention.

Combat modeling is often used to quantify the impact of different variables on warfighter performance. The perennial challenge in combat modeling, however, remains identifying the essential variables that would allow models of armed conflict to be manageable, meaningful and useful (Kress, 2012). As one example, a small unit combat model may require both maneuver warfare elements (Nohel et al., 2022; Ormrod and Turnbull, 2017) and physical fitness of the personnel (Blount et al., 2013). Different personnel would likely have different movements and different physical limitations while encountering different levels of fatigue. These inclusions illustrate how small arms combat modeling could easily encompass dozens of different, independent actors in a squad-on-squad simulation that must account for individual decisions while moving across varied terrain. Even this simple variant of combat modeling thus imposes numerous challenges on any modeling and simulation effort.

Within combat modeling, there is a critical aspect of performance supported by some of the most well-documented measurements throughout military organizations—marksmanship. Many infantry and special operations discussions involve marksmanship performance as a variable with clear military relevance. Moreover, marksmanship seems to have enormous simulation value as this human performance skill can be readily quantified with enormous precision in both speed and accuracy. These factors should make the variable highly appealing to modeling efforts. Despite these advantages, incorporating marksmanship creates more challenges than simply uploading tables of human performance observations. Notably, how do you evaluate performance when someone is a half-second faster on one drill and another person has ten percent higher accuracy on the same drill? Speed/accuracy trade-offs are inherently difficult, with arbitrary weighting systems regularly used to assign relative value to speed and accuracy or controlled drills holding one factor constant to measure another. Marksmanship tables then become used as a surrogate standard for performance. Such point-based outcomes belie the complexities of a combat engagement—a two-point improvement from a marksmanship table does not adequately capture the combat advantage nor easily convey this information to military decision-makers. Thus, despite the importance of data in evaluating militarily relevant outcomes, there is a clear need for advanced modeling techniques in marksmanship that can deliver more concise and compelling interpretations for military decision-making.

Monte Carlo simulations are one alternative to bridge the gap between human performance observations and actionable applications to marksmanship training doctrine. Monte Carlo simulations have been suggested to explore the effectiveness of weapon strikes (Chusilp et al., 2014; Hu and Wang, 2013), including weapons applications for platforms as well as small arms combat (Mihaylov, 2017). In this sense, the technique utilizes known distribution patterns in accuracy to evaluate the operational effectiveness of different weapons systems. Variance itself then becomes a factor represented in simulations. Nor is the application novel to small arms combat, as Monte Carlo simulations were first applied to Vietnam War-era modeling to describe small unit activities (Adams et al., 1961; Bonder, 2002; De Laquil, 1980; Monahan and DuBois, 1979) [1]. However, these early small arms applications typically involved special operations forces actions such as theft of nuclear material. During the Vietnam War era, Monte Carlo simulations also required comparatively substantial computing power given the available hardware. Modern computing no longer has such limitations, as the majority of computers can run complex Monte Carlo simulations relatively quickly. In turn, the application of Monte Carlo simulations to small arms combat has received renewed interest (Biggs et al., 2023), and there is a clear opportunity for this computational technique to advance practical marksmanship applications.

The goal of this paper is to explore how Monte Carlo simulations can be applied to evaluate marksmanship. Each section explores a particular aspect of this larger topic with a focus on practical applications. The first topic is an examination of the assumptions inherent to an evaluation of warfighter performance to provide sufficient context for the discussion. Any such assumptions become critical both to the Monte Carlo simulations and how the outcome should be interpreted in terms of marksmanship. Next, an overview of the Monte Carlo technique provides definitions and context for the basic method. Subsequent sections address additional variants of the Monte Carlo technique, such as the introduction of Markov Chains or a multilevel Monte Carlo. To ensure military relevance of the discussion, each section is divided into a basic overview of the mathematical modeling, the militarily relevant applications of the specific technique and the implications for marksmanship evaluations. This discussion thus serves as a primer for anyone interested in enhancing marksmanship data collection and presentation.

2. Assumptions inherent to an evaluation of warfighter performance

Any Monte Carlo simulation is defined first and foremost by variables and associated data supporting the model and its simulation. Economic simulations use financial variables (Arnold and Yildiz, 2015; Mun, 2006), whereas biological simulations may use energy transfer or molecule movement (Berney and Danuser, 2003; Leblanc et al., 2003). The first consideration is therefore the data that supports model variables—and the assumptions that go with the data. Military applications are no different. In this case, the assumptions largely revolve around the type of engagement that can be simulated. Despite the myriad of tactical aspects in a military engagement, here we will describe three broad categories: (1) determining an outcome, (2) the specific scenario and (3) data granularity.

The first factor relates directly to the question asked in the introduction. Namely, is there a difference between warfighter lethality and military performance? In simple terms, lethality describes a subset of military performance specific to use of force against an adversary. All measures of warfighter lethality are measures of military performance, but not all measures of military performance directly address lethality. For example, there are robust evaluations of military physical fitness (Cuddy et al., 2011; Roy et al., 2010; Taylor et al., 2008), and while physical fitness will likely impact performance in combat, faster run times do not immediately translate into increased lethality. Physical fitness requires additional integration to fully impact a simulation of tactical performance among warfighters (Blount et al., 2013).

If warfighter lethality is the measure of performance, assumptions made during data collection should be known and well-documented. The first and most important assumption is how accuracy becomes translated into lethality. Lethality can be measured following combat as number killed-in-action, but training and evaluation applications will likely extract lethality metrics from shot placement on a target. Measuring lethality thus becomes a critical assumption underlying any data collection, which will likely occur on a military range. While measuring lethality, target type is a critical factor. Photorealistic targets may have scoring zones that differentiate lethal hits, non-lethal hits and misses. These scoring zones may approximate body sections similar to the Abbreviated Injury Scale (Civil and Schwab, 1988; MacKenzie et al., 1985; Palmer et al., 2016), although the scoring zones may also be approximations drawn onto the target by instructors rather than carefully constructed, medically-inspired applications (Biggs et al., 2021). The targets might also have bullseyes or be made from steel. Lethality must be extrapolated from bullseye targets based upon a ring line designated to be a lethal scoring zone. By comparison, steel targets likely only denote a hit or miss, in part because these targets are more often used at greater distances. In all cases, lethality is extrapolated from marksmanship performance through shot placement on these targets. The conditions of a lethal and non-lethal outcome mark the first assumption when evaluating military performance using a Monte Carlo simulation.

The next assumption involves the specific scenario. When limited to a “combat engagement,” there are many factors to consider, including: number of units involved, number of personnel in each unit, training quality of each unit, distance from target, weapons involved, physical capability, air support, mechanized vehicles, weather and terrain. A combat engagement will be unique by virtue of the many factors specific to the engagement. Further limiting the scenario to marksmanship, there are many different factors, including: starting position of the shooter, posture adopted to take the shot, weapon optics, wind conditions, ammunition, whether the shot is the first in a string, number of shots to be taken and criteria for a successful hit. Furthermore, modeling such scenarios is limited by the data on which it is based. It would be inappropriate to model an engagement at five hundred meters based on speed and accuracy data collected from a drill where participants drew and fired one round at a target from seven meters. Scenario factors during data collection inherently limit any model based on collected data.

The third category of assumptions inherent to warfighter lethality, data granularity (along with data fidelity), is the one that most directly impacts a Monte Carlo simulation. Lethality can be interpreted based on a given marksmanship exercise, but the fidelity and granularity of data impact the fundamental character of the modeling. For example, an exercise with only one successful hit prohibits modeling variance at an individual level. Without an ability to account for individual variance, group dynamics become the most granular level of analysis. Data fidelity also applies to the data collection from particular drills. For example, if a shooter is using a rifle at a great distance, it is possible that the simulation will end with the shooter running out of ammunition before hitting the target. Reloading becomes a factor that would add time to performance during the simulation, but unless a reloading drill was conducted as part of the exercise, reloading cannot be incorporated accurately because the marksmanship evaluation never included a reloading measurement. This broad category of assumption thus requires an operations research analysis prior to any mathematical modeling. The goal should be to identify all relevant factors in the process that influence speed, accuracy and their variances.

Each of these three categories—how the outcome is determined, the specifics of the combat engagement to be simulated, and fidelity/granularity of collected data—represents a critical aspect to any subsequent modeling and simulation effort. These assumptions must be well-documented when conducting any simulation even if the precise conditions are not included when presenting an outcome. This latter possibility can occur when briefing senior military leaders if time does not permit a full disclosure of all details underlying an analysis. Even so, it is important to ensure all assumptions are clearly documented for transparency. The next section describes how best to utilize the Monte Carlo technique as a means to convert raw performance metrics into measures of warfighter lethality.

3. Monte Carlo simulations

3.1 Basics of the technique

Monte Carlo simulations were developed as part of research into nuclear weapons at the Los Alamos National Laboratory (Metropolis and Ulam, 1949; Rubinstein and Kroese, 2016). The history, and the namesake relation to a casino in Monaco, dates back to a hypothetical question concerning the game of solitaire (Gass and Assad, 2005). The basic premise is that a complex problem with many different factors can be solved in approximation by simulating many possible events and observing the different outcomes. The original intended usage involved estimating neutron diffusion as part of nuclear weapons research, and given the classified nature of that work, the technique required a code name—Monte Carlo, after the casino in Monaco.

Monte Carlo simulation has been defined in various ways, leading to some inconsistencies. One possible definition provides clarity by delineating between a simulation, the Monte Carlo method and a Monte Carlo simulation (Sawilowsky, 2003; Sawilowsky and Fahoome, 2003). Simulations mimic elements of a particular situation as a means to evaluate how different factors will impact the outcome. Marksmanship simulations may involve the likelihood of a round striking the target given the accuracy of the shooter, size of the target and distance from the target. In contrast, the Monte Carlo method is a technique to estimate the solution to a problem through repetition. Stochastic techniques use known or estimated variance of different parameters to determine an outcome through a sample. Where a simulation might evaluate a single shot, the Monte Carlo method estimates accuracy by having a shooter fire ten shots and identifying how many hit the target. A Monte Carlo simulation effectively blends the two, using a large number of simulations through the Monte Carlo method to evaluate the likelihood of potential outcomes. Determining the likelihood of one shooter hitting a target more often than another shooter based on some set of factors would be a Monte Carlo simulation. In short, a simulation is the basic method of sampling each individual event, the Monte Carlo method is the act of estimating an outcome through repeated sampling of a known or proposed distribution, and a Monte Carlo simulation determines risk or probable outcomes by sampling a large number of pseudo-random variables with known or assumed distributions.
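To make these distinctions concrete, here is a minimal Python sketch under assumed, illustrative numbers (the hit probabilities and trial counts are invented for the example, not drawn from any study): a single-event simulation, the Monte Carlo method of repeated sampling and a Monte Carlo simulation that estimates the likelihood of an outcome.

```python
import random

def simulate_one_shot(p_hit=0.8):
    """A *simulation*: mimic a single event (one shot hits or misses)."""
    return random.random() < p_hit

def monte_carlo_accuracy(n_shots=10, p_hit=0.8):
    """The *Monte Carlo method*: estimate accuracy by repeated sampling."""
    return sum(simulate_one_shot(p_hit) for _ in range(n_shots)) / n_shots

def monte_carlo_simulation(n_trials=100_000, p_a=0.8, p_b=0.7):
    """A *Monte Carlo simulation*: run many simulated events to estimate the
    likelihood of an outcome (here, Shooter A hits while Shooter B misses)."""
    wins = sum(simulate_one_shot(p_a) and not simulate_one_shot(p_b)
               for _ in range(n_trials))
    return wins / n_trials

print(monte_carlo_simulation())  # approaches 0.8 * (1 - 0.7) = 0.24
```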

General applications of the technique have been used for many different purposes. Sports performance has used Monte Carlo simulations to estimate basketball shooting (Min, 2016) or to determine a baseball batting order (Freeze, 1974). An exceptionally common usage is in risk assessment, on topics ranging from ecological exposure of chemicals in the environment (Burmaster and Anderson, 1994) to construction outcomes (Sadeghi et al., 2010). These diverse applications should be noted in light of the corresponding limitations. The data-intensive nature of Monte Carlo simulations requires existing evidence to conduct the simulations (Ferson, 1996). A major advantage of Monte Carlo simulation is the ability to assign probabilities to different outcomes when uncertainty is a central feature of the process being modeled. At relatively low cost, it is possible to demonstrate how changes in the assumptions or distributions of various parameters change the distribution of the outcome variable. This approach thus provides a tangible method to appreciate how a given change might directly or indirectly influence the probability of a subsequent event.

3.2 Application to warfighter performance

Monte Carlo simulation, when used to estimate performance, presumes that we can transform basic military performance metrics into a simulation of warfighter performance (see Table 1). Typical military uses examine whether weapon strikes effectively damage or disable a target (Chusilp et al., 2014; Hu and Wang, 2013). Here the intent is to use performance metrics to simulate a combat engagement (cf. Biggs and Hirsch, 2022).

Scenario specifics are the boundaries for the simulation. For example, consider the basic marksmanship metrics of speed, accuracy and variance as collected from two different shooters when simulating a head-to-head gunfight. Monte Carlo simulation uses observed metrics to form performance distributions that serve as parameters to the simulation. Each individual simulation depends on the speed and accuracy of a shot as sampled from the model distributions.

Sampling from these distributions yields four possible outcomes between Shooter A and Shooter B: (1) Shooter A wins outright, (2) Shooter B wins outright, (3) a lethal draw where both shooters fire a lethal round and (4) a non-lethal draw where neither shooter fires a lethal round. It is possible to further segment the outcomes, but these four are an adequate representation of the possibilities. Winning occurs when one shooter fires a lethal round faster than the other shooter, or one shooter fires a lethal round when the other shooter missed. Non-lethal draws occur when both shooters miss, and lethal draws occur when both shooters fire a lethal round in such close proximity that the rounds would pass in the air. The latter outcome depends upon a latency parameter, or the duration the bullet would need to travel over the given distance between the two shooters. Thousands of samples convert raw marksmanship metrics into these four outcomes with a percentage representation of each. In short, an individual shooter's chance of winning a gunfight is quantified based upon the relative differences in speed and accuracy.
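The sketch below illustrates this head-to-head simulation. Every parameter (shot-time means and standard deviations, hit probabilities and the bullet-latency value) is an illustrative assumption rather than measured performance data, and a normal distribution is assumed for shot times.

```python
import random

def gunfight(n_trials=100_000):
    """Tally the four possible outcomes of a simulated head-to-head duel."""
    a_mu, a_sd, a_acc = 1.50, 0.20, 0.90  # Shooter A: time mean/sd (s), accuracy
    b_mu, b_sd, b_acc = 1.80, 0.25, 0.95  # Shooter B: time mean/sd (s), accuracy
    latency = 0.05  # assumed bullet travel time over the engagement distance

    tallies = {"A wins": 0, "B wins": 0, "lethal draw": 0, "non-lethal draw": 0}
    for _ in range(n_trials):
        # Sample each shooter's shot time and whether the shot is lethal.
        t_a, t_b = random.gauss(a_mu, a_sd), random.gauss(b_mu, b_sd)
        hit_a, hit_b = random.random() < a_acc, random.random() < b_acc

        if not hit_a and not hit_b:
            tallies["non-lethal draw"] += 1
        elif hit_a and hit_b and abs(t_a - t_b) < latency:
            tallies["lethal draw"] += 1  # both rounds pass in the air
        elif hit_a and (not hit_b or t_a < t_b):
            tallies["A wins"] += 1
        else:
            tallies["B wins"] += 1
    return {k: v / n_trials for k, v in tallies.items()}

print(gunfight())  # percentage representation of each outcome
```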

There are several major advantages of presenting performance data as a percentage chance of winning a gunfight rather than a raw marksmanship metric of speed on a given drill. First, relative performance differences are immediately evident when the data is presented as a 72% chance of winning the fight rather than a speed difference of 300 milliseconds in drawing a weapon. Second, the technique avoids assigning arbitrary points or a weighting system when evaluating performance on a given drill. Instructors often debate the merit of one drill over another, which is how marksmanship tables are designed with points arbitrarily weighted to aspects of the associated exercises. The Monte Carlo simulation effectively determines the relative influences of speed and accuracy in the given situation without resorting to arbitrary weighting differences (assuming the simulation has been well constructed and accurately reflects a combat situation). Third, this approach embraces variance as a key part of performance. Even if an individual wins a shooting competition today, there is no guarantee that the individual will win a competition with the same opponents tomorrow. Incorporating variance represents performance as a continuum of possibilities, which better addresses the changing day-to-day realities of human performance. Fourth, there is the value of simplicity in presenting the data. Endless tables are necessary when relating the value of different training regimens or equipment given the myriad of ways data can be collected. A Monte Carlo simulation allows for a simple head-to-head comparison that supports military decision-making by producing a quantifiable assessment of different programs, whether they involve different equipment, different units or different training prior to the evaluation. Any scenario involving a comparison of data could potentially use the Monte Carlo technique.

Of course, the comparative nature of this head-to-head warfighting simulation is also the primary weakness. These performance outcomes cannot be simulated without a comparison group, and while a given standard of expected peer or near-peer performance can be established, the outcomes can be unduly biased by the selected standard. One unit may seem to excel based on the simulated opponent rather than their inherent skill set. This limitation is the primary concern in applying a Monte Carlo simulation to evaluate warfighter performance. Another concern is the basic nature of the Monte Carlo technique when used without additional considerations. Warfighter performance can be simulated as a single shot from a head-to-head outcome—one shooter fires upon another shooter and the engagement ends. At close proximity, where accuracy is high, speed is likely to be the determining factor and many engagements will have a victor in only a few shots (Biggs and Hirsch, 2022). At greater distances, where accuracy is lower, a single shot simulation will produce a high number of non-lethal draws, or indeterminate outcomes. Combat engagements, however, will not end after a single unsuccessful shot. This shortcoming is not a failure of the Monte Carlo method in general, but rather a demonstration that more depth and context are needed to model performance than a single-shot outcome. Engagements continue well after the first shot, and so while a Monte Carlo simulation can evaluate warfighter performance, additional information is required to fully simulate a combat engagement. These considerations can draw upon the extensions to the Monte Carlo technique that have been devised over years of research and exploration.

3.3 Implications for marksmanship evaluations

Monte Carlo applications, even the simplest form of the technique, provide a key advantage to marksmanship evaluations. Specifically, Monte Carlo simulations provide a method to integrate speed and accuracy in a meaningful way without compromising the relative contribution of either. Both speed and accuracy can contribute to a marksmanship evaluation by sampling the speed of shots and shot accuracy in a Monte Carlo simulation. Nevertheless, there are other methods that similarly utilize speed and accuracy in marksmanship evaluations. Competition shooting attempts to achieve this integration by using hit factor, which divides points (or accuracy) by time. Monte Carlo simulations provide an advantage over hit factor by allowing variance to be represented in the simulation, whereas hit factor only accounts for performance at a particular point in time on a particular course of fire. This measurement makes hit factor scoring immensely practical for competitions that need to rank order performance on a particular day. Combat modeling projections, on the other hand, should incorporate variance as an appreciable component of human performance measurement. Monte Carlo simulations can incorporate variance based on how speed and accuracy are sampled through both means and standard deviations derived from observable human performance measurements.
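A small hypothetical example clarifies the contrast: the run scores below are invented so that both shooters share the same average hit factor, yet only a distribution-based method such as a Monte Carlo simulation can capture the difference in their consistency.

```python
# Hypothetical runs of (points, time in seconds) on the same course of fire.
shooter_a_runs = [(45, 6.0), (45, 6.0), (45, 6.0)]   # highly consistent
shooter_b_runs = [(50, 5.0), (45, 6.0), (38, 7.6)]   # same average, wide spread

def hit_factors(runs):
    """Hit factor divides points (accuracy) by time for each run."""
    return [points / time for points, time in runs]

print(hit_factors(shooter_a_runs))  # [7.5, 7.5, 7.5]
print(hit_factors(shooter_b_runs))  # [10.0, 7.5, 5.0]
# Both shooters average 7.5, but only a method that samples from their
# distributions (mean and spread) can represent the difference in variance.
```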

Despite the clear advantage of Monte Carlo simulation over hit factor scoring, there remain limitations to the technique. Most notably, Monte Carlo simulation falls into a class of numerical analysis simulations and therefore represents a fundamentally distinct category compared to other simulations that measure behavior over time (Birta and Arbez, 2013). This characteristic would make simple Monte Carlo simulations a valuable technique for interpreting the relative performance implications of marksmanship tables. After all, performance on a marksmanship drill provides input on speed, accuracy, or both for a number of marksmanship behaviors. The challenge for combat modeling, and indeed for discrete-event simulation (Günal and Pidd, 2010; Misra, 1986), involves incorporating how behavior might change over time. Marksmanship performance can be influenced by any number of cognitive and physiological variables that might influence the outcome (Rao et al., 2020). A simple application of the Monte Carlo technique presumes unchanged capabilities in the system, and when the “system” is the human weapons system—conceived as a combination of human, weapon and ammunition—performance should be expected to change. Nonetheless, a simple Monte Carlo technique represents a substantial contribution to marksmanship evaluations over points-based applications or hit factor scoring because it provides a method to easily integrate speed, accuracy and variability into the evaluation.

4. Markov Chains and Monte Carlo simulations

4.1 Basics of the technique

Monte Carlo simulations determine probable outcomes by sampling distributions from multiple variables. Each added sample adds some depth and context to the potential outcomes. Although this approach has value in communicating the relative differences of marksmanship performance beyond raw metrics of speed or accuracy, a single shot outcome does not truly simulate a combat engagement. This basic approach can be layered by sampling multiple performance aspects from a sequence of events to give a more holistic view of performance. The simulation should therefore involve multiple shots, and ideally, multiple personnel on both sides. Moving beyond a single shot requires simulating a sequence of events rather than a single outcome. Thus, a better simulation of warfighter performance invokes a Markov Chain to enable multiple events in the sequence.

A Markov Chain describes a sequence of possible events where each individual event depends upon the probabilities of different outcomes and the current state of affairs (Gagniuc, 2017; Roberts, 1996). There are multiple types of Markov Chains, although they can be divided along lines such as discrete-time chains or continuous-time chains (Coolen-Schrijner and Van Doorn, 2002; Craig and Sendi, 2002; Spedicato, 2017; Suchard et al., 2001). The difference is whether the sequential events occur as step-by-step outcomes or in a continuous time space. As such, Markov chains critically depend upon how transitions between states are defined. One classic example is the drunkard's walk (Diaconis, 1996). From any position on the number line (e.g. 7), the position might go up one (8) or down one to the next integer (6), which can continue indefinitely. The next outcome in the chain depends upon the probability of the number increasing vs decreasing and the current number in the sequence. A more familiar example involves board games that use dice. The next state of the board depends upon a dice roll, which is memoryless—the probabilities of one dice roll are the same as the next. These possibilities are also determined by the current state of the board, which is the product of previous dice rolls in the sequence. Any potential configuration is the outcome of a Markov Chain that led to the current state on the board.
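A minimal sketch of the drunkard's walk, assuming a 50/50 chance of moving up or down, makes the structure explicit: each step depends only on the current position and a fixed transition probability.

```python
import random

def drunkards_walk(start=7, steps=20, p_up=0.5):
    """Each step depends only on the current position and fixed probabilities."""
    position, path = start, [start]
    for _ in range(steps):
        position += 1 if random.random() < p_up else -1
        path.append(position)
    return path

print(drunkards_walk())  # e.g. [7, 8, 7, 6, ...]
```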

The memoryless nature of probabilistic determination is a defining component of Markov Chains. As such, a true Markov Chain does not alter transition probability based on previous events. Board games with dice are a good example because the dice rolls are independent. By comparison, a card game such as blackjack would not constitute a Markov Chain. The next potential card draw is not independent of the previous card draw as the probabilities have changed based on the previous events. So, there is a subtle theoretical point about whether a modeled sequence of events is a true Markov Chain: the transition probabilities in the current state must remain unaltered by outcomes from previous states.

When combined with the Monte Carlo simulation technique, the result is a Markov Chain Monte Carlo (Brooks et al., 2011; Geyer, 1992; Gilks et al., 1995). A Monte Carlo simulation determines outcomes by sampling randomly from the distributions of different variables, which are combined to determine an outcome. So, each outcome depends upon both the Monte Carlo process and the variable distributions. The introduction of a Markov Chain links multiple events to produce an outcome that depends upon multiple stages rather than a single event. Several algorithms define nuanced variations of this method, such as the Gibbs Sampling algorithm (Gelfand, 2000; Geman and Geman, 1984) and the Metropolis–Hastings algorithm (Chib and Greenberg, 1995; Hastings, 1970). Each algorithm defines one method of constructing the Markov Chain, and the appropriate choice depends on the specific scenario and the data being sampled.
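As a hedged illustration of the method, the sketch below implements the Metropolis–Hastings algorithm for a deliberately simple target distribution (an unnormalized standard normal, assumed purely for illustration). Each accepted or rejected proposal forms the next link in the chain.

```python
import math
import random

def target(x):
    """Target density known only up to a constant (unnormalized normal)."""
    return math.exp(-0.5 * x * x)

def metropolis_hastings(n_samples=10_000, step=1.0):
    """Build a Markov Chain whose stationary distribution is the target."""
    x, chain = 0.0, []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)  # symmetric local proposal
        # Accept with probability min(1, target(proposal) / target(x)).
        if random.random() < target(proposal) / target(x):
            x = proposal
        chain.append(x)
    return chain

samples = metropolis_hastings()
print(sum(samples) / len(samples))  # sample mean near 0
```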

4.2 Application to warfighter performance

Monte Carlo simulations allow for probabilistic sampling of individual shots and actions that can accumulate throughout the engagement. The introduction of Markov Chains enables moving beyond simple head-to-head engagements involving only a single shot and into complex scenarios with prolonged engagements. However, Markov Chain Monte Carlo methods typically indicate a class of algorithms used to sample from some probability distribution. When applied to warfighter performance, the intent is instead to use a Markov Chain to simulate a series of behaviors rather than a single behavior. For example, a first shot time includes multiple behaviors such as locating the target, manipulating the weapon into firing position, aiming and trigger squeeze. All these individual behaviors become encapsulated in the first shot time. The intent of a Markov Chain is to break performance into a series of behaviors, including factors such as reloading and acquiring a new target. This introduction allows a complex engagement with multiple shots and multiple personnel to be simulated in small arms combat modeling. In this application, the goal is to sample from distributions built on observations of actual human performance. Markov Chains create the sequence of behaviors and Monte Carlo sampling determines each transition, but the process is not technically a Markov Chain Monte Carlo method.

Additionally, multiple shooters make the scenario more militarily relevant as it allows for modeling of squad-level engagements, which are a far more practical application than a fabricated dueling scenario. Perhaps the most important addition involves risk exposure. Squad-level engagement models use the estimated casualties suffered by both sides as a means of interpreting operational risk based on different skill levels. The percentage chance of winning the engagement is accompanied by the level of risk assumed to achieve this victory.

Furthermore, squad-level engagements—as with any Monte Carlo simulation of warfighter performance—depend upon the data from which the simulation can be modeled. Each step contributes to the next possible outcome, such as how different shot characteristics can be modeled for the first shot vs an inter-shot interval. A first shot incorporates procedures such as visual search for the target or distance calculation in initial aiming behaviors, whereas inter-shot interval may be the time between subsequent shots that incorporates different aiming behaviors or omits the visual search step. Multiple shots incorporate a wider range of marksmanship behaviors and military performance that makes the entire simulation more holistic and realistic. However, modeling this distinction requires data that differentiate the first shot from subsequent shots.

Multiple shooters complicate this scenario because shot characteristics also involve assumptions about the nature of target selection and how the engagement terminates. The simulation could proceed as a series of head-to-head outcomes, or target identification and selection could proceed from a tactical instruction. In either case, the Markov Chain nature allows for shooters to continue in the engagement after eliminating a member of the opposing force. The final outcome depends upon a termination rule. Simulations could continue until the entire opposing force is eliminated, or the model could enact a retreat rule where the losing force will withdraw following some level of casualties suffered. The casualty estimates support the risk estimation component of the simulation.
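One way such a squad-level engagement might be sketched is shown below. The squad sizes, shot intervals, hit probability and the 50% casualty retreat rule are all illustrative assumptions; target selection is assumed to be a random living opponent. Shooters fire in continuous time, each transition depends only on the current state of the fight and a Monte Carlo layer repeats the engagement to estimate win odds and risk.

```python
import heapq
import random

def squad_fight(n_per_side=4, mu=2.0, sd=0.4, p_hit=0.3, retreat_frac=0.5):
    """Simulate one engagement; returns the winning side and casualty counts."""
    alive = {"A": set(range(n_per_side)), "B": set(range(n_per_side))}
    casualties = {"A": 0, "B": 0}
    # Event queue of (next shot time, side, shooter id), one entry per shooter.
    queue = [(abs(random.gauss(mu, sd)), side, i)
             for side in "AB" for i in range(n_per_side)]
    heapq.heapify(queue)
    while True:
        t, side, shooter = heapq.heappop(queue)
        if shooter not in alive[side]:
            continue  # this shooter became a casualty before firing
        enemy = "B" if side == "A" else "A"
        if random.random() < p_hit:
            # Target selection assumption: fire at a random living opponent.
            alive[enemy].discard(random.choice(sorted(alive[enemy])))
            casualties[enemy] += 1
            # Termination rule: the enemy withdraws after enough casualties.
            if casualties[enemy] >= n_per_side * retreat_frac:
                return side, casualties
        # Schedule this shooter's next shot; the transition depends only on
        # the current state of the fight, as in a Markov Chain.
        heapq.heappush(queue, (t + abs(random.gauss(mu, sd)), side, shooter))

# Monte Carlo layer: repeat the engagement to estimate win odds and risk.
outcomes = [squad_fight() for _ in range(10_000)]
print(sum(1 for winner, _ in outcomes if winner == "A") / len(outcomes))  # ~0.5
```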

This proposed squad-level engagement could be modeled by either a continuous or discrete Markov Chain, although a continuous process is preferred. A discrete approach requires a combat engagement to be represented as arbitrary stages. One head-to-head outcome might need to wait for resolution before the victorious shooter could continue in the engagement. In this case, a tournament-style approach might be necessary in the model rather than the ongoing, unstructured nature of actual combat. A continuous approach allows a victorious shooter to immediately engage a new enemy while the other shooters continue their head-to-head fights. Discretization undercuts this fluidity, whereas continuous processes more closely mimic the fluid nature of combat. A discrete process could be necessary if speed information about individual shot performance does not enable the continuous approach. In these cases, accuracy-focused outcomes allow a squad-level engagement to proceed as discrete stages based upon the accuracy of each individual shot.

4.3 Implications for marksmanship evaluations

For marksmanship, the Markov Chain component introduces several potential advantages. Multiple behaviors can be evaluated in sequence. Marksmanship tables are notorious for integrating multiple behaviors into a single outcome, where performance on a drill is summarized as time or accuracy under a given set of conditions. These restrictions are often imposed to ensure safe and reliable evaluations on a live fire range. These limitations may also inhibit modeling efforts. For example, reload speed at a known distance in a known drill against a known target inherently creates predictability that no longer resembles a combat exercise. This aspect applies to virtually all live fire data collection, often driven by safety issues or related procedural complications. The sequential nature of Markov Chains supports more effective modeling using more granular information about behaviors from marksmanship drills. If using a pistol, an appropriately sequenced Markov Chain could simulate the draw from holster, first shot aiming behaviors, recoil control and even reload speed. Additionally, sequences between individuals in a squad can be modeled accurately. Squad-level behaviors in a Markov Chain could include shifting between targets or caring for wounded allies.

These ideas demonstrate how a Markov Chain may support more effective modeling, but this value can only be unlocked through work that extends beyond the modeling effort itself. Specifically, this advantage requires appropriately segmenting the behaviors during human performance measurement. This aspect highlights a key bridge between test design and the modeling effort. Integrating personnel with modeling expertise into the test design stage helps ensure that subsequent simulations can precisely emulate the intended behavior from the available data. Thus, the Markov Chain can support marksmanship by allowing practitioners, both trainers and analysts, to reach common ground prior to data collection that will best support eventual decisions.

5. Multilevel Monte Carlo simulations

5.1 Basics of the technique

Fidelity of data is a critical concern for any Monte Carlo simulation. A successful Monte Carlo technique requires sampled distributions to accurately represent actual military performance. Of all the assumptions in the Markov Chain Monte Carlo technique, the assumption of valid data is perhaps the most insidious. Every sampled data distribution presents an opportunity for error. Moreover, different assumptions can vastly impact the requisite computing time for a single simulation. Given that thousands or millions of simulations need to be conducted in Monte Carlo techniques, the complexity of each individual simulation represents an important concern for the process.

The multilevel Monte Carlo technique helps address this concern by addressing the quality of the data collected (Giles, 2008, 2015). Repeated sampling remains the basis of the technique, via Monte Carlo simulation, yet the technique embraces a volume/fidelity trade-off when accounting for cost. Samples with low cost are taken at high volume, whereas samples with high cost are taken at low volume. There is a particular advantage in reducing computational time for uncertainty quantification, which tries to determine outcomes when there are significant unknowns in the simulation (Cliffe et al., 2011; Heinrich, 2001; Kebaier, 2005). The reduction of required computational effort is a key component of the multilevel method with valuable applications in areas such as evaluating option pricing using stochastic differential equations (Evans, 2012; Gobet et al., 2005).
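A two-level sketch conveys the core idea under assumed stand-in models: combine many cheap samples of a low-fidelity model with a few coupled samples that correct the estimate toward the high-fidelity model.

```python
import math
import random

def coarse(u):
    """Cheap, low-fidelity model of the quantity of interest (assumed)."""
    return u * u

def fine(u):
    """Expensive, high-fidelity model of the same quantity (assumed)."""
    return u * u + 0.1 * math.sin(10.0 * u)

def mlmc_estimate(n_coarse=100_000, n_fine=1_000):
    """Estimate E[fine(U)], U ~ Uniform(0, 1), at reduced total cost."""
    # Level 0: many cheap samples of the coarse model.
    level0 = sum(coarse(random.random()) for _ in range(n_coarse)) / n_coarse
    # Level 1: few *coupled* samples of the fine-minus-coarse correction;
    # the same input drives both models, so the difference has low variance.
    correction = sum(fine(u) - coarse(u)
                     for u in (random.random() for _ in range(n_fine))) / n_fine
    return level0 + correction

print(mlmc_estimate())  # near the true value of about 0.352
```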

5.2 Application to warfighter performance

Although the multilevel method is primarily a mathematical solution aimed at reducing computational effort, the volume/fidelity cost trade-off is fundamental to warfighter simulations. Complex simulations require measurement of many different variables that can be difficult to collect at volume. Shot accuracy can be collected easily at high volume, but shot speed can be difficult to collect from many different shooters using currently available acoustic timers. Alternatively, the sample may estimate speed by taking the time a shooter needed to complete ten shots and dividing to get an average. This estimate is flawed because the behaviors involved during an initial shot vs an inter-shot interval have fundamentally different aiming calculations, but to overcome logistical challenges, this lower fidelity solution could be used.

5.3 Implications for marksmanship evaluations

For marksmanship, it is important to identify where data can be collected at acceptable volume, fidelity and cost. Accuracy data, for example, represents low-investment information that can be collected in high volume. Every shooter from an entire military company could be sampled for accuracy. Meanwhile, speed metrics will be more difficult to collect because only one shooter at a time can fire to avoid overlap in the acoustic shot timing. Thus, the sample of shooters is likely to be much smaller for speed, potentially leading to a lower fidelity representation than for accuracy. This difference in fidelity must be managed carefully in the model to avoid biasing the possible outcomes. For example, if a subset of shooters is going to represent the entire group during sampling, the best shooters might be selected to provide the data. The result would be a biased sample that skews the modeled capabilities of the unit. This bias must be avoided, if possible, to ensure the simulation accurately depicts warfighting capability.

Another aspect involves the complexity of the simulation. Speed and accuracy can be collected from particular distances, but there are other factors that could be represented during simulation that are more difficult to collect. For example, moving between multiple firing positions requires forward motion on a live fire range, which can be logistically difficult to collect on some ranges or especially with larger forces. Thus, factors supporting simulated movement during the engagement might need to be sampled with high fidelity among a few personnel rather than sampled across all participants. More generally, for a complex Monte Carlo simulation of warfighter performance, factors related to transition states might need to be measured in smaller samples with high fidelity rather than in large samples across the entire force. These transition states may include moving between firing positions, reloading, and clearing weapon malfunctions. Their inclusion allows for a more complex simulation that fully embraces the wider range of behaviors within a combat engagement.

6. Kinetic Monte Carlo simulations

6.1 Basics of the technique

Monte Carlo simulations are particularly useful for risk analyses, but their applications also include explorations of how certain processes evolve over time. One particular variant, the kinetic Monte Carlo simulation (Battaile, 2008; Voter, 2007; Young and Elcock, 1966), allows for simulations of growth and change as a system adapts to or impacts a given environment. This process has a wide range of applications in chemistry and molecular physics, with uses such as modeling crystal growth (Gilmer and Bennema, 1972; Kotrla, 1996), radiation damage (Domain et al., 2004), surface growth (Lou and Christofides, 2003; Whitesides and Frenklach, 2010) or evaporation (Gruber et al., 2011). The approach differs from other Monte Carlo techniques due to the timeline and scale. Modeling atomic growth or radiation damage requires a large-scale evaluation given the sheer number of atoms or molecules involved and the time scale of the associated growth. Kinetic Monte Carlo simulations add this scale and scope to the process in a way above and beyond the alternative methods. As with the other methods, there are multiple algorithms described in the literature (Gillespie, 1976; Meng et al., 2010; Sanz and Marenduzzo, 2010).

A critical element of kinetic Monte Carlo simulations is that they can be broadly divided into two categories, rejection-free or rejection-based. Rejection-free simulations can be more time-consuming as they have to calculate all possible transition states in the simulated system (Bortz et al., 1975; Schulze, 2008). Conversely, rejection-based algorithms sample from the given distributions while rejecting some subset of possible events based on assigned criteria (Schulze, 2008). Each approach has advantages and disadvantages for the calculations and data involved. Given the large number of calculations and the scope involved in a kinetic Monte Carlo simulation, considering whether some events should be rejected is a matter of both data fidelity and computing power.
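The sketch below shows the mechanics of a rejection-free selection scheme in the style of Gillespie's direct method (Gillespie, 1976); the three-event system and its rates are assumed purely for illustration.

```python
import math
import random

def kinetic_mc(t_end=10.0):
    """Simulate a toy system until t_end; returns the (time, event) history."""
    rates = {"grow": 5.0, "shrink": 1.0, "detach": 0.2}  # assumed event rates
    events, weights = list(rates), list(rates.values())
    total_rate = sum(weights)
    t, history = 0.0, []
    while True:
        # Advance the clock by an exponential waiting time in the total rate.
        t += -math.log(1.0 - random.random()) / total_rate
        if t >= t_end:
            return history
        # Rejection-free selection: pick the next event with probability
        # proportional to its rate, so every step produces an event.
        history.append((t, random.choices(events, weights=weights)[0]))

print(len(kinetic_mc()))  # about total_rate * t_end = 62 events
```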

6.2 Application to warfighter performance

Kinetic Monte Carlo simulations address a potential aspect of warfighter performance that the previous methods did not—namely, the large-scale nature of the simulation. The previous examples provide avenues for converting raw marksmanship data into combat simulations. Although these techniques are suitable for head-to-head or squad-level simulations, a battle can involve significantly larger numbers of troops. Actions of individual squads generate larger consequences in battlefield dynamics that cannot be readily ignored. Large-scale simulations must incorporate these additional considerations and the myriad of other factors that could be involved, ranging from artillery and air support to modeling the potential for panic and retreat.

6.3 Implications for marksmanship evaluations

Combat simulations hinge on actions of individual personnel with implications that resonate throughout the larger force. Simulating a full-scale battle requires many different simulations of these more local interactions. A kinetic Monte Carlo approach can incorporate the local outcomes of individual events as indicated by Markov Chain and Monte Carlo simulations into a more global model that simulates transitions between various states of the battlefield. Thus, Monte Carlo simulation applies to modeling a combat engagement with different variants of the techniques applied at different levels of combat. Marksmanship is a suitable variable to represent the performance of the individual warfighter, but additional details describing the transitions between states become an important consideration when modeling the evolving nature of a combat engagement.

7. Conclusion

When presenting information to military decision-makers, there is a critical need for the communication to be as clear and concise as possible (for a thorough discussion, see NATO, 2002; Tolk, 2019). Monte Carlo simulations are one possible way to convert raw performance metrics into appreciable differences in warfighter performance. Marksmanship metrics of speed and accuracy are translated into a percentage chance of winning a combat engagement and the number of casualties suffered to earn the victory. This approach circumvents many of the debates about arbitrary points or weighting of different drills in favor of simulating the intended end state as the drills would apply to combat marksmanship. Decision-makers receive a quantifiable evaluation that can be used to compare warfighter performance based on different equipment, training regimens or other model inputs.

The purpose of this review was to provide insight into different Monte Carlo techniques and how they might be applied to a warfighter simulation (see Table 2). In general, Monte Carlo simulations are a broad constellation of mathematical modeling techniques and algorithms rather than a well-defined set of procedures. These simulations can be summarized as a method to utilize randomness or pseudo-random variables to estimate the likelihood of different outcomes. Although there are no hard and fast criteria for what defines a Monte Carlo simulation, there are some characteristics of a high-quality Monte Carlo simulation (cf. Sawilowsky, 2003):

  1. The random number generator used for sampling is truly random (or as close as pseudo-random generation allows).

  2. The number of simulations is sufficiently large to ensure the results adequately capture the different possibilities.

  3. An appropriate technique is selected from the myriad of options based on what the process or algorithm attempts to model.

  4. The data available can adequately represent the scenario being simulated.

  5. Simulations will benefit warfighter performance evaluations most when the available data includes both speed and accuracy metrics with means, variance or a knowledge of the underlying distribution to adequately represent human performance.

Despite the numerous advantages provided to marksmanship evaluations by variations of the Monte Carlo method, there are several limitations not discussed here that should be mentioned as they pertain to future work. Much of the Monte Carlo advantage hinges upon integrating variance into the marksmanship evaluation. Although this technique advances the marksmanship evaluation, quantifying uncertainty is itself a process with many complications (Smith, 2013; Xie et al., 2014). Combat can be understood as a complex system-of-systems process (cf. Shi and Zhang, 2020). Variance within this complex system becomes a complication that must itself be estimated. Accurate modeling also requires an accurate estimate of the shape of any underlying distribution, as the distribution shape is another complexity that affects outcomes. Thus, while there are advantages to the Monte Carlo method, for both marksmanship and warfighter performance, the applications require additional effort. Future work will have to explore many of the nuanced issues involved when simulating human performance in combat models.

Ultimately, any assessment of warfighter performance will only be as strong as the data available, the design of the simulation and the assumptions used during the modeling process. Observation of human performance is a critical component because the conditions under which data is collected affect how that data can be used. If shooters are told that a miss can still earn points vs that a miss will cost points, their behavior will shift with the scoring implications. Therefore, no matter how good the modeling effort may be, it cannot take full advantage of poorly collected data. Additionally, the assumptions and the process should always be clearly documented. If properly conducted and documented, however, Monte Carlo simulations provide an excellent option for interpreting and presenting military data. Future military uses of these techniques should take advantage of the potential inherent to this modeling method as a way to clearly communicate the possibilities of a combat engagement.

Table 1. Examples of the primary assumptions made during any data collection involving warfighter performance

Assumption | Description | Examples
1. Determining Lethal Outcomes | Lethality is interpreted from performance during training or assessments | Evaluating whether a shot is lethal from a photorealistic or bullseye target requires an assumption during the scoring process
2. Specific Scenario | Any data is limited by the scenario in which data collection occurred | Speed, accuracy and variance data collected from 7-m drills cannot be used to simulate performance in a 300-m engagement
3. Data Granularity | Data collection impacts simulation possibilities; individual behaviors and transition states cannot be modeled without first collecting that information | Shot times can incorporate many different behaviors depending on whether the outcome is based on the first shot or subsequent shots. Also, reloading behaviors cannot be incorporated into a simulation without a measure of reloading speed

Source(s): Table created by Adam Biggs and Joseph Hamilton

Table 2. A summary overview of the basics for each Monte Carlo technique and its applications to warfighter lethality

Type | Basics of the Technique | Application to Warfighter Lethality
Monte Carlo simulation | Using a large number of simulations to determine the likelihood of potential outcomes | Using speed and accuracy sampling to determine whether a warfighter defeats an enemy during simulation
Markov Chain and Monte Carlo | Sampling from a probability distribution using a series of transition states to model complex outcomes from a series of events | Simulating combat from human performance by using multiple shots, weapon manipulations and multiple shooters through multiple transition states between behaviors
Multilevel Monte Carlo | A Monte Carlo simulation that permits a trade-off in data fidelity. Samples with low cost and low accuracy are taken at high volume, whereas samples with high cost and high accuracy are taken at low volume | Samples can be incorporated based on difficulty of collection. Simple weapon proficiency can be collected at high volume from range-based performance, whereas room clearing during close quarters combat can be sampled at higher cost and with fewer samples
Kinetic Monte Carlo | Allows for growth and change as a system adapts to a certain environment or impacts the scenario. This technique can add scale and scope to the simulation that other techniques cannot | This technique can simulate combat at a more complex scale than squad-level engagements. Battlefield dynamics can be represented alongside warfighter lethality during simulation

Source(s): Table created by Adam Biggs and Joseph Hamilton

Note

1. For a more complete discussion about the history of military simulation, see Hill, R. R., & Miller, J. O. (2017, December). A history of United States military simulation. In 2017 Winter Simulation Conference (WSC) (pp. 346-364). IEEE.

Disclaimer: The authors are military service members or employees of the US Government. This work was prepared as part of our official duties. Title 17, U.S.C. §105 provides that copyright protection under this title is not available for any work of the US Government. Title 17, U.S.C. §101 defines a US Government work as work prepared by a military service member or employee of the US Government as part of that person's official duties. The views expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the Department of the Navy, Department of Defense, nor the US Government. The authors declare no financial or non-financial conflicts of interest.

References

Adams, H.E., Forrester, R.E., Kraft, J.F. and Oosterhout, B.B. (1961), CARMONETTE: A Computer-Played Combat Simulation, Technical Memorandum ORO-T-389, Operations Research Office, Johns Hopkins University, Baltimore, MD.

Arnold, U. and Yildiz, Ö. (2015), “Economic risk analysis of decentralized renewable energy infrastructures–A Monte Carlo Simulation approach”, Renewable Energy, Vol. 77, pp. 227-239.

Battaile, C.C. (2008), “The kinetic Monte Carlo method: foundation, implementation, and application”, Computer Methods in Applied Mechanics and Engineering, Vol. 197 Nos 41-42, pp. 3386-3398.

Berney, C. and Danuser, G. (2003), “FRET or no FRET: a quantitative comparison”, Biophysical Journal, Vol. 84 No. 6, pp. 3992-4010.

Biggs, A.T. and Hirsch, D.A. (2022), “Using Monte Carlo simulations to translate military and law enforcement training results to operational metrics”, The Journal of Defense Modeling and Simulation, Vol. 19 No. 3, pp. 403-415.

Biggs, A.T., Pistone, D., Riggenbach, M., Hamilton, J.A. and Blacker, K.J. (2021), “How unintentional cues can bias threat assessments during shoot/don't-shoot simulations”, Applied Ergonomics, Vol. 95, 103451.

Biggs, A.T., Huffman, G., Hamilton, J.A., Javes, K., Brookfield, J.S., Viggiani, A., Costa, J. and Markwald, R.R. (2023), “Small arms combat modeling: a superior way to evaluate marksmanship data”, Journal of Defense Analytics and Logistics, Vol. 7 No. 1, pp. 69-87.

Birta, L.G. and Arbez, G. (2013), Modelling and Simulation, Springer, London.

Blount, E.M., Ringleb, S.I., Tolk, A., Bailey, M. and Onate, J.A. (2013), “Incorporation of physical fitness in a tactical infantry simulation”, The Journal of Defense Modeling and Simulation, Vol. 10 No. 3, pp. 235-246.

Bonder, S. (2002), “Army operations research—historical perspectives and lessons learned”, Operations Research, Vol. 50 No. 1, pp. 25-34.

Bortz, A.B., Kalos, M.H. and Lebowitz, J.L. (1975), “A new algorithm for Monte Carlo simulation of Ising spin systems”, Journal of Computational Physics, Vol. 17 No. 1, pp. 10-18.

Brooks, S., Gelman, A., Jones, G. and Meng, X.L. (Eds) (2011), Handbook of Markov Chain Monte Carlo, CRC Press, Boca Raton, FL.

Burmaster, D.E. and Anderson, P.D. (1994), “Principles of good practice for the use of Monte Carlo techniques in human health and ecological risk assessments”, Risk Analysis, Vol. 14 No. 4, pp. 477-481.

Chib, S. and Greenberg, E. (1995), “Understanding the Metropolis-Hastings algorithm”, The American Statistician, Vol. 49 No. 4, pp. 327-335.

Chusilp, P., Charubhun, W. and Koanantachai, P. (2014), “Monte Carlo simulations of weapon effectiveness using Pk matrix and Carleton damage function”, International Journal of Applied Physics and Mathematics, Vol. 4 No. 4, p. 280.

Civil, I.D. and Schwab, C.W. (1988), “The Abbreviated Injury Scale, 1985 revision: a condensed chart for clinical use”, The Journal of Trauma, Vol. 28 No. 1, pp. 87-90.

Cliffe, K.A., Giles, M.B., Scheichl, R. and Teckentrup, A.L. (2011), “Multilevel Monte Carlo methods and applications to elliptic PDEs with random coefficients”, Computing and Visualization in Science, Vol. 14 No. 1, pp. 3-15.

Coolen-Schrijner, P. and Van Doorn, E.A. (2002), “The deviation matrix of a continuous-time Markov chain”, Probability in the Engineering and Informational Sciences, Vol. 16 No. 3, pp. 351-366.

Craig, B.A. and Sendi, P.P. (2002), “Estimation of the transition matrix of a discrete‐time Markov chain”, Health Economics, Vol. 11 No. 1, pp. 33-42.

Cuddy, J.S., Slivka, D.R., Hailes, W.S. and Ruby, B.C. (2011), “Factors of trainability and predictability associated with military physical fitness test success”, The Journal of Strength and Conditioning Research, Vol. 25 No. 12, pp. 3486-3494.

De Laquil, P. III. (1980), SABRES I: An Individual Resolution Small Arms Combat Simulation Model, NUREG/CR-0929, SAND79-8249, Sandia National Laboratories, Livermore, CA.

Diaconis, P. (1996), “The cutoff phenomenon in finite Markov chains”, Proceedings of the National Academy of Sciences, Vol. 93 No. 4, pp. 1659-1664.

Domain, C., Becquart, C.S. and Malerba, L. (2004), “Simulation of radiation damage in Fe alloys: an object kinetic Monte Carlo approach”, Journal of Nuclear Materials, Vol. 335 No. 1, pp. 121-145.

Evans, L.C. (2012), An Introduction to Stochastic Differential Equations, American Mathematical Society, Providence, RI, Vol. 82.

Ferson, S. (1996), “What Monte Carlo methods cannot do”, Human and Ecological Risk Assessment: An International Journal, Vol. 2 No. 4, pp. 990-1007.

Freeze, R.A. (1974), “An analysis of baseball batting order by Monte Carlo simulation”, Operations Research, Vol. 22 No. 4, pp. 728-735.

Gagniuc, P.A. (2017), Markov Chains: From Theory to Implementation and Experimentation, John Wiley & Sons, Hoboken, NJ.

Gass, S.I. and Assad, A.A. (2005), “Model world: tales from the time line—the definition of OR and the origins of Monte Carlo simulation”, Interfaces, Vol. 35 No. 5, pp. 429-435.

Gelfand, A.E. (2000), “Gibbs sampling”, Journal of the American Statistical Association, Vol. 95 No. 452, pp. 1300-1304.

Geman, S. and Geman, D. (1984), “Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-6 No. 6, pp. 721-741.

Geyer, C.J. (1992), “Practical Markov chain Monte Carlo”, Statistical Science, Vol. 7 No. 4, pp. 473-483.

Giles, M.B. (2008), “Multilevel Monte Carlo path simulation”, Operations Research, Vol. 56, pp. 607-617.

Giles, M.B. (2015), “Multilevel Monte Carlo methods”, Acta Numerica, Vol. 24, pp. 259-328.

Gilks, W.R., Richardson, S. and Spiegelhalter, D. (Eds) (1995), Markov Chain Monte Carlo in Practice, CRC Press, Boca Raton, FL.

Gillespie, D.T. (1976), “A general method for numerically simulating the stochastic time evolution of coupled chemical reactions”, Journal of Computational Physics, Vol. 22 No. 4, pp. 403-434.

Gilmer, G.H. and Bennema, P. (1972), “Simulation of crystal growth with surface diffusion”, Journal of Applied Physics, Vol. 43 No. 4, pp. 1347-1360.

Gobet, E., Lemor, J.P. and Warin, X. (2005), “A regression-based Monte Carlo method to solve backward stochastic differential equations”, The Annals of Applied Probability, Vol. 15 No. 3, pp. 2172-2202.

Gruber, M., Vurpillot, F., Bostel, A. and Deconihout, B. (2011), “Field evaporation: a kinetic Monte Carlo approach on the influence of temperature”, Surface Science, Vol. 605 Nos 23-24, pp. 2025-2031.

Günal, M.M. and Pidd, M. (2010), “Discrete event simulation for performance modelling in health care: a review of the literature”, Journal of Simulation, Vol. 4, pp. 42-51.

Hastings, W.K. (1970), “Monte Carlo sampling methods using Markov chains and their applications”, Biometrika, Vol. 57 No. 1, pp. 97-109.

Heinrich, S. (2001), “Multilevel Monte Carlo methods”, International Conference on Large-Scale Scientific Computing, Springer, Berlin, Heidelberg, pp. 58-67.

Hu, X.J. and Wang, H.Y. (2013), “Effectiveness calculation of multiple rounds simultaneous impact shooting method based on Monte Carlo method”, Applied Mechanics and Materials, Vol. 397, pp. 2459-2463.

Kebaier, A. (2005), “Statistical Romberg extrapolation: a new variance reduction method and applications to option pricing”, The Annals of Applied Probability, Vol. 15 No. 4, pp. 2681-2705.

Kincheloe, W., Edwards, E., Klopcic, J.T., Walbert, J., Deitz, P., Reed, H., Jr., Hacker, W. and Bely, D. (2009), Fundamentals of Ground Combat System Ballistic Vulnerability/Lethality, American Institute of Aeronautics and Astronautics, Reston, VA.

Kotrla, M. (1996), “Numerical simulations in the theory of crystal growth”, Computer Physics Communications, Vol. 97 Nos 1-2, pp. 82-100.

Kress, M. (2012), “Modeling armed conflicts”, Science, Vol. 336 No. 6083, pp. 865-869.

Leblanc, B., Braunschweig, B., Toulhoat, H. and Lutton, E. (2003), “Improving the sampling efficiency of Monte Carlo molecular simulations: an evolutionary approach”, Molecular Physics, Vol. 101 No. 22, pp. 3293-3308.

Lou, Y. and Christofides, P.D. (2003), “Estimation and control of surface roughness in thin film growth using kinetic Monte-Carlo models”, Chemical Engineering Science, Vol. 58 No. 14, pp. 3115-3129.

MacKenzie, E.J., Shapiro, S. and Eastham, J.N. (1985), “The Abbreviated Injury Scale and Injury Severity Score: levels of inter- and intrarater reliability”, Medical Care, Vol. 23 No. 6, pp. 823-835.

Meng, L., Shang, Y., Li, Q., Li, Y., Zhan, X., Shuai, Z., Kimber, R.G. and Walker, A.B. (2010), “Dynamic Monte Carlo simulation for highly efficient polymer blend photovoltaics”, The Journal of Physical Chemistry B, Vol. 114 No. 1, pp. 36-41.

Metropolis, N. and Ulam, S. (1949), “The Monte Carlo method”, Journal of the American Statistical Association, Vol. 44 No. 247, pp. 335-341.

Mihaylov, D.G. (2017), “One simple model of small arms fire using the Monte Carlo method”, The Journal of Defense Modeling and Simulation, Vol. 14 No. 4, pp. 465-470.

Min, B.J. (2016), “Application of Monte Carlo simulations to improve basketball shooting strategy”, Journal of the Korean Physical Society, Vol. 69 No. 7, pp. 1139-1143.

Misra, J. (1986), “Distributed discrete-event simulation”, ACM Computing Surveys (CSUR), Vol. 18 No. 1, pp. 39-65.

Monahan, R.H. and DuBois, E.L. (1979), An Assessment of Available Security System Simulations to Support the TNFS2 Program, SRI International, Menlo Park, CA.

Morse, P. and Kimball, G. (1951), Methods of Operations Research, MIT Technology Press/Wiley, New York.

Mun, J. (2006), Modeling Risk: Applying Monte Carlo Simulation, Real Options Analysis, Forecasting, and Optimization Techniques, John Wiley & Sons, Hoboken, NJ, Vol. 347.

NATO (2002), NATO Code of Best Practice for Command and Control Assessment, Command and Control Research Program, Washington, DC.

Nohel, J., Stodola, P., Flasar, Z. and Rybanský, M. (2022), “Multiple maneuver model of cooperating ground combat troops”, The Journal of Defense Modeling and Simulation, Vol. 20 No. 4, pp. 481-493.

Ormrod, D. and Turnbull, B. (2017), “Attrition rates and maneuver in agent-based simulation models”, The Journal of Defense Modeling and Simulation, Vol. 14 No. 3, pp. 257-272.

Palmer, C.S., Gabbe, B.J. and Cameron, P.A. (2016), “Defining major trauma using the 2008 Abbreviated Injury Scale”, Injury, Vol. 47 No. 1, pp. 109-115.

Rao, H.M., Smalt, C.J., Rodriguez, A., Wright, H.M., Mehta, D.D., Brattain, L.J., Edwards, H.M., Lammert, A., Heaton, K.J. and Quatieri, T.F. (2020), “Predicting cognitive load and operational performance in a simulated marksmanship task”, Frontiers in Human Neuroscience, Vol. 14, p. 222.

Roberts, G.O. (1996), “Markov chain concepts related to sampling algorithms”, Markov Chain Monte Carlo in Practice, Vol. 57, pp. 45-58.

Roy, T.C., Springer, B.A., McNulty, V. and Butler, N.L. (2010), “Physical fitness”, Military Medicine, Vol. 175 suppl_8, pp. 14-20.

Rubinstein, R.Y. and Kroese, D.P. (2016), Simulation and the Monte Carlo Method, John Wiley & Sons, Hoboken, NJ, Vol. 10.

Sadeghi, N., Fayek, A.R. and Pedrycz, W. (2010), “Fuzzy Monte Carlo simulation and risk assessment in construction”, Computer‐Aided Civil and Infrastructure Engineering, Vol. 25 No. 4, pp. 238-252.

Sanz, E. and Marenduzzo, D. (2010), “Dynamic Monte Carlo versus Brownian dynamics: a comparison for self-diffusion and crystallization in colloidal fluids”, The Journal of Chemical Physics, Vol. 132 No. 19, 194102.

Sawilowsky, S.S. (2003), “You think you’ve got trivials?”, Journal of Modern Applied Statistical Methods, Vol. 2 No. 1, p. 21.

Sawilowsky, S.S. and Fahoome, G.C. (2003), Statistics via Monte Carlo Simulation with Fortran, JMASM, Rochester Hills, MI.

Schulze, T.P. (2008), “Efficient kinetic Monte Carlo simulation”, Journal of Computational Physics, Vol. 227 No. 4, pp. 2455-2462.

Shi, X. and Zhang, S. (2020), “Research on complex system-of-systems combat experiment”, Journal of Physics: Conference Series, Vol. 1624 No. 2, 022073, IOP Publishing.

Smith, R.C. (2013), Uncertainty Quantification: Theory, Implementation, and Applications, SIAM, Philadelphia, PA, Vol. 12.

Spedicato, G.A. (2017), “Discrete time Markov chains with R”, The R Journal, Vol. 9 No. 2, pp. 84-104.

Strickland, J. (2011), Mathematical Modeling of Warfare and Combat Phenomenon, Simulation Educators, Colorado Springs, CO.

Suchard, M.A., Weiss, R.E. and Sinsheimer, J.S. (2001), “Bayesian selection of continuous-time Markov chain evolutionary models”, Molecular Biology and Evolution, Vol. 18 No. 6, pp. 1001-1013.

Taylor, M.K., Markham, A.E., Reis, J.P., Padilla, G.A., Potterat, E.G., Drummond, S.P. and Mujica-Parodi, L.R. (2008), “Physical fitness influences stress reactions to extreme military training”, Military Medicine, Vol. 173 No. 8, pp. 738-742.

Tolk, A. (2019), “Tutorial on the engineering principles of combat modeling and distributed simulation”, 2019 Winter Simulation Conference (WSC), IEEE, pp. 18-32.

Voter, A.F. (2007), “Introduction to the kinetic Monte Carlo method”, in Radiation Effects in Solids, Springer, Dordrecht, pp. 1-23.

Whitesides, R. and Frenklach, M. (2010), “Detailed kinetic Monte Carlo simulations of graphene-edge growth”, The Journal of Physical Chemistry A, Vol. 114 No. 2, pp. 689-703.

Xie, W., Nelson, B.L. and Barton, R.R. (2014), “A Bayesian framework for quantifying uncertainty in stochastic simulation”, Operations Research, Vol. 62 No. 6, pp. 1439-1452.

Young, W.M. and Elcock, E.W. (1966), “Monte Carlo studies of vacancy migration in binary ordered alloys: I”, Proceedings of the Physical Society (1958-1967), Vol. 89 No. 3, p. 735.

Further reading

Hill, R.R. and Miller, J.O. (2017), “A history of United States military simulation”, 2017 Winter Simulation Conference (WSC), IEEE, pp. 346-364.

Corresponding author

Adam Biggs can be contacted at: adam.t.biggs@gmail.com
