Critical infrastructure protection under imperfect attacker perception

https://doi.org/10.1016/j.ijcip.2009.10.002

Abstract

This paper considers the problem of allocating finite resources among the elements of a critical infrastructure system in order to protect it from antagonistic attacks. Previous studies have assumed that the attacker has complete information about the utilities associated with attacks on each element. In reality, it is likely that the attacker’s perception of the system is not as precise as the defender’s, due to geographical separation from the system, secrecy, surveillance, complex system properties, etc. As a result, the attacker’s actions may not be those anticipated under the assumption of complete information. We present a modeling framework that incorporates imperfect attacker perception by introducing random observation errors in a previously studied baseline model. We analyze how the perceptive ability affects the attack probabilities and the defender’s disutility and optimal resource allocation. We show for example that the optimal resource allocation may differ significantly from the baseline model, that a less perceptive attacker may cause greater disutility for the defender, and that increasing the investment in an element can increase the expected disutility even in a zero-sum situation.

Introduction

Critical infrastructures are technical systems utilized to distribute energy, information, water, goods and people, and are of the utmost importance for the quality of everyday life. A major disturbance in the flow of services provided by the critical infrastructures can constitute a severe strain on business, government and society in general. This paper emphasizes the security aspects of the risk analysis of large technical systems, more specifically the threat from qualified antagonists. This is a broad category of threats that ranges from insiders and saboteurs to crime syndicates and transnational terrorist organizations. The purpose of an attack can be to cause severe damage to a technical system in an attempt to disable important functions of a society. However, the goal can also be to make a symbolic demonstration, or to cause a large enough consequence in order to achieve a psychological effect such as the spread of fear and anxiety [1]. Critical infrastructures have been targeted in several previous terrorist attacks and they continue to constitute some of the most likely targets for potential future attacks. Consequently, protecting these systems is a prioritized issue in many countries, not least in the United States [2], [3].

Intentional attacks are different from random failures in the sense that the antagonist intentionally chooses the time and place for the attack. Furthermore, the measures applied to protect a system will, most likely, affect the antagonist’s course of action. Changes in how the defender perceives that the opponent will act will, in turn, affect how the defense is allocated, which once more can affect the antagonist’s behavior, and so on. Hence, there is a strategic interaction between the attacker and the defender, and studies of antagonistic threats involve, as many authors have pointed out, a “game” situation rather than a static decision situation [4], [5], [6].

A number of papers have studied various theoretical aspects of protecting potential targets against antagonistic attacks. Underlying most if not all of the studies is the assumption of a rational and informed attacker [7]. That is, given a set of alternatives such as different potential targets, the attacker associates a certain utility with each alternative for any protective measures taken by the defender, and will choose an alternative that yields the highest utility.

One branch of research focuses on how a single defender should allocate protective measures among targets to minimize the losses due to subsequent attacks. Some papers analyze the problem where an attacker chooses a given number of facilities to disable in order to maximize some measure of damage, such as the transportation costs between the remaining facilities. Prior to the attack, a defender chooses a given number of facilities to fortify in order to minimize the same objective function [8], [9]. Other studies assume that an attacker chooses one target to attack, possibly randomizing, and the probability that the attack is successful is determined by the resources previously invested in the target by a defender [4], [5], [10], [11]. It has been shown under quite general assumptions that, given a limited resource budget, the optimal resource allocation for the defender is to minimize the attacker’s maximum expected utility of an attack [4], [10]. Other papers model the decision variables of both the defender and the attacker as continuous levels of effort for each target, which together determine the probability that the target is disabled [12], [13].
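The minimax result above can be illustrated with a short numerical sketch. The exponential decay form $v_i^s e^{-a_i c_i}$ for the attacker's expected utility, the coefficients and the budget below are all illustrative assumptions, not taken from the cited models; under these assumptions, minimizing the attacker's maximum expected utility reduces to a "water-filling" bisection on a common utility level.

```python
import math

def minimax_allocation(vs, a, budget, tol=1e-9):
    """Allocate a budget across targets to minimize the attacker's
    maximum expected utility, assuming (illustratively) that target i's
    expected attack utility is vs[i] * exp(-a[i] * c[i]) when c[i]
    resources are invested in it.  Bisection on the common utility
    level t: pushing every target down to at most level t costs
    sum_i max(0, ln(vs[i] / t) / a[i]).
    """
    def spend(level):
        return [max(0.0, math.log(v / level) / ai) for v, ai in zip(vs, a)]

    lo, hi = tol, max(vs)          # achievable utility level lies in (0, max vs]
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if sum(spend(mid)) > budget:
            lo = mid               # this level costs more than the budget
        else:
            hi = mid
    return spend(hi)

# Three hypothetical targets with unequal attacker utilities, unit budget
allocation = minimax_allocation([1.0, 0.45, 0.2], [1.0, 1.0, 1.0], budget=1.0)
```

With these numbers the entire budget goes to the two most attractive targets, equalizing their post-investment utilities, while the least attractive target is left unprotected, exactly the behavior that Section 2.2 of the paper argues can be risky when the attacker's perception is imperfect.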

A related line of research has considered situations where multiple agents protect private targets, and the externalities associated with such distributed decision-making [5], [6], [14], [15], [16], [17]. Situations that have been analyzed include the case where two countries can invest in protection against a terrorist that will choose to attack either country or not attack at all [6]. The authors show that the substitution effect, in which an investment by one country increases the probability that the other is attacked, leads to overinvestments if the countries obey their own self-interests.

A number of papers have considered that the defender may be uncertain about the preferences of the attacker. Some authors model the uncertainty as a probability distribution across possible attacker types [4], [5]; the problem is explicitly framed as a Bayesian game (see, e.g., [18]) in [5]. In this setting, the problem for the defender entails considering the expected disutility across the attacker types associated with each possible allocation. A multinomial logit model (see, e.g., [19]) has been used to represent the probabilities of a set of targets being attacked considering the defender’s uncertainty about the attacker’s preferences [15]. Another approach to handling uncertainty about the nature of the attack when evaluating different protection strategies has been to compare them under a set of plausible scenarios [11].

Less attention has been given to the decision process of the attacker. One paper considers a game where the attacker, prior to the defender’s resource allocation, is uncertain about the level of vulnerability of one of the targets [20]. After observing the defender’s investments, the attacker updates his beliefs about the vulnerability via Bayes’ rule, taking the rationality of the defender into account; the outcome is a perfect Bayesian equilibrium (see, e.g., [18]). This model incorporates attacker uncertainty prior to the defender’s actions, but still assumes that the attacker observes these actions perfectly. It also assumes that the attacker uses quite sophisticated reasoning (specifically, Bayes’ rule) to arrive at its response.

It is unlikely in reality that an antagonist could perceive the precise gains associated with attacking a target as accurately as the defender of the system, due to factors such as undisclosed information, surveillance, complex system structure and geographical separation between the antagonist and the target. Therefore, its subsequent actions may not necessarily be those that would yield the highest utility given all the information about the system that the defender possesses. This can be expected to have significant impacts on how the defender should allocate resources to protect the targets. For example, given a limited defense budget, assuming a perfectly perceptive attacker may mean that some elements should be left unprotected since they would be suboptimal alternatives for the attacker. However, if there is a chance that the attacker chooses a suboptimal alternative, this strategy could be very dangerous if attacks on some of these elements would cause high disutility for the defender (see Section 2.2).

In this paper we analyze the impacts if the attacker cannot perfectly observe the utilities associated with attacking elements of an infrastructure system. In particular, we examine the implications for the defender’s problem of allocating a limited resource budget in order to protect the system against attack. As a baseline we use the model of Powell [4], which assumes complete information and perfect perception on behalf of both actors. We also include a non-attack option for the antagonist. The observation errors are modeled as random variables whose outcomes are not observed by the defender. In effect, the actions chosen by the attacker become probabilistic from the defender’s point of view.

The model is formulated generically so that the results are applicable to most critical infrastructure systems exposed to physical attacks. We consider a system consisting of elements, or components, which are potential targets of attacks and possible to protect individually. How elements, protection resources and utilities/disutilities associated with attacks should be defined for real systems depends on the focus and scale of the analysis. When applied to, for example, a railway system, the elements may constitute stations, trains and tracks, while protection may involve camera surveillance and guards as well as the design of platforms and vehicles. The relevant impacts of the attack, in addition to potential casualties, induced fear, symbolic value, etc., may include disruptions of traffic and delays. When applied to electric power grids, the elements may be power plants, substations and lines, and protection may also involve physical barriers, devices for detecting hazardous materials, authorization checks, etc. [11]. Relevant impacts may include the number of people who suffer from blackouts or the total energy loss.

In the analysis we highlight a number of important implications of imperfect attacker perception that are not present in the baseline model. For example, an unwise defense investment can increase the expected disutility of the defender even in a zero-sum situation. Also, a less perceptive attacker can yield higher expected disutility for the defender even if resources are allocated optimally, which is in contrast to the notion that a perfectly informed attacker represents a worst-case scenario. In general, the optimal allocation of resources can vary significantly depending on the attacker’s perceptive ability. The results have important implications for critical infrastructure protection. If, for a given system, there are reasons to believe that an attacker would not have perfect perception, then we should adapt to that situation by extending the protection to those facilities that a less perceptive attacker is likely to target. Depending on how easily different targets are protected and the available resource budget, a significant redistribution of resources may be necessary.

The baseline model is described and analyzed in Section 2. In Section 3 we generalize the model by introducing imperfect perceptive ability of the attacker. The properties of this modeling framework are analyzed in Section 4 with a numerical example given in Section 5. In Section 6, we extend the model by combining the attacker’s imperfect observations with the defender’s uncertainty about attacker types. We conclude the paper in Section 7.

Section snippets

Model formulation

In this section we introduce the baseline model with perfect observations, which is then generalized in Section 3. This model is similar to the game studied in detail by Powell [4], [10], to which we refer the reader for a rigorous treatment. The only essential difference is the presence of the non-attack alternative, which does not change the analysis in any significant way.

We consider a system consisting of $n$ elements, indexed by $i = 1, \ldots, n$. Each element represents a component of an

Formulation of the attacker’s problem

We assume now that due to an imperfect ability to assess the defender’s protective measures and the outcome of a successful attack, the attacker’s observations of the elements are associated with errors. These errors can be associated with the success probabilities $p_i$ as well as the success utilities $v_i^s$, although we only consider their combined effect. Instead of observing the true utility $v_i = p_i v_i^s$ for each element, the antagonist’s observation or best guess based on its available information

Attack probabilities

We first examine the effects of the attacker’s level of perception $\lambda$ on attack probabilities. Given a small increase in the attacker’s perception $\lambda$ while keeping the resource allocation fixed, the change in the conditional attack probability $q_{i|A}$ of an arbitrary element $i$ is
$$\frac{\partial q_{i|A}}{\partial \lambda} = \ln\!\left(\frac{v_i}{\bar{v}_A}\right) q_{i|A}, \quad i = 1, \ldots, n,$$
where $\bar{v}_A$ is the geometric mean utility conditional on an attack, $\bar{v}_A = \prod_{j \ge 1} v_j^{q_{j|A}}$. Thus, the conditional attack probability will increase or decrease depending on whether the attack utility $v_i$
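The stated derivative can be verified numerically. The power-form logit $q_{i|A} \propto v_i^{\lambda}$ used below is an assumption made here because it reproduces exactly this derivative; the utility values are illustrative.

```python
import math

def attack_probs(v, lam):
    """Conditional attack probabilities q_{i|A} proportional to v_i**lam
    (a power-form logit; this functional form is assumed here because it
    yields the derivative d q_i / d lam = ln(v_i / v_bar) * q_i)."""
    w = [vi ** lam for vi in v]
    s = sum(w)
    return [wi / s for wi in w]

v = [1.0, 0.45, 0.2]           # illustrative attack utilities
lam = 2.0                      # perception parameter
q = attack_probs(v, lam)

# Geometric mean utility conditional on an attack: v_bar = prod_j v_j**q_j
v_bar = math.prod(vi ** qi for vi, qi in zip(v, q))

# Analytic derivative dq_i/dlam = ln(v_i / v_bar) * q_i ...
analytic = [math.log(vi / v_bar) * qi for vi, qi in zip(v, q)]

# ... agrees with a central finite difference in lam
h = 1e-6
q_plus, q_minus = attack_probs(v, lam + h), attack_probs(v, lam - h)
numeric = [(a - b) / (2 * h) for a, b in zip(q_plus, q_minus)]
assert all(abs(x - y) < 1e-6 for x, y in zip(analytic, numeric))
```

Note that the derivatives sum to zero across elements, as they must for probabilities conditional on an attack.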

Setting

In the following, some of the general results from Section 4 are illustrated with a small example system consisting of $n = 3$ elements. We study a case where the defender’s and the attacker’s valuations of the elements differ (i.e., not a zero-sum situation). More precisely, we assume that $d_1^s = 0.2$, $d_2^s = 0.45$ and $d_3^s = 1$, while $v_1^s = 1$, $v_2^s = 0.45$ and $v_3^s = 0.2$. Their valuations of the non-attack alternative are $d_0 = 0.3$ and $v_0 = 0.3$, respectively.
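For these valuations, the defender's expected disutility as a function of the attacker's perception can be sketched directly. The power-form logit over the four options and the unit success probabilities are simplifying assumptions made here for illustration, not the paper's full model with resource allocation.

```python
def expected_disutility(v, d, lam):
    """Expected disutility for the defender when the attacker chooses
    option i with probability proportional to v[i]**lam (power-form
    logit over the non-attack option and the three elements; unit
    success probabilities are assumed here for simplicity)."""
    w = [vi ** lam for vi in v]
    s = sum(w)
    return sum(wi / s * di for wi, di in zip(w, d))

# Attacker and defender valuations of {non-attack, element 1, 2, 3}
v = [0.3, 1.0, 0.45, 0.2]
d = [0.3, 0.2, 0.45, 1.0]

results = {lam: expected_disutility(v, d, lam) for lam in (0.0, 1.0, 5.0, 50.0)}
```

In this instance the expected disutility falls as $\lambda$ grows, from 0.4875 at $\lambda = 0$ (a uniformly random attacker) toward 0.2 for a near-perfect attacker, because the element most attractive to the attacker is the least damaging one from the defender's perspective. This echoes the paper's observation that a less perceptive attacker can be worse for the defender.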

Further, we assume that the elements are equally easy to

Representing defender uncertainty

In order not to obscure the main ideas of the paper, we have assumed that the defender has complete information about the attacker’s characteristics, except for the actual outcomes $u_i$ of the attacker’s observed utilities. It is straightforward to extend the analysis to situations where the defender is uncertain about the preferences and/or the perceptive ability of the attacker. Uncertainty about preferences can stem from limited knowledge about the attacker’s motives and available resources,

Conclusion

In the paper, we have considered the problem of allocating finite resources for protecting a critical infrastructure system against antagonistic attacks when the attacker’s observations are imperfect. The rationale of this analysis is that factors such as secrecy, opacity, surveillance and remoteness make it unlikely that the attacker is capable of predicting its true risks and benefits of attacking different elements of the system. Hence, a protection strategy that is based on the assumption

Acknowledgements

The paper has benefited greatly from discussions with Lars-Göran Mattsson and comments from three anonymous referees. The first author wishes to thank the Swedish Governmental Agency for Innovation Systems (Vinnova) for the financial support.

References (24)

  • G. Brown et al., Defending critical infrastructure, Interfaces (2006)
  • R.L. Church et al., Protecting critical assets: The r-interdiction median problem with fortification, Geographical Analysis (2007)