A game-theoretic approach for selecting optimal time-dependent thresholds for anomaly detection

Published in: Autonomous Agents and Multi-Agent Systems

Abstract

Adversaries may cause significant damage to smart infrastructure using malicious attacks. To detect and mitigate these attacks before they can cause physical damage, operators can deploy anomaly detection systems (ADS), which alert operators to suspicious activities. However, the detection thresholds of an ADS need to be configured properly: an oversensitive detector raises a prohibitively large number of false alarms, while an undersensitive detector may miss actual attacks. This is an especially challenging problem in dynamical environments, where the impact of attacks may vary significantly over time. Using a game-theoretic approach, we formulate the problem of computing optimal detection thresholds, which minimize both the number of false alarms and the probability of missing actual attacks, as a two-player Stackelberg security game. We provide an efficient dynamic-programming-based algorithm for solving the game, thereby finding optimal detection thresholds. We analyze the performance of the proposed algorithm and show that its running time scales polynomially with the length of the time horizon of interest. In addition, we study the problem of finding optimal thresholds in the presence of both random faults and attacks. Finally, we evaluate our results using a case study of contamination attacks in water networks, and show that our optimal thresholds significantly outperform fixed thresholds that do not account for the dynamical nature of the environment.


Notes

  1. This work is a significant extension of our conference paper [11], which appeared at the 7th Conference on Decision and Game Theory for Security (GameSec 2016). The novel contributions are the following: (1) extended model that considers multiple attack types, which can be used to represent, for example, multiple targets within a system that an adversary may attack or multiple choices for the magnitude of the attack; (2) novel polynomial-time algorithms and theoretical analysis for finding optimal detector configurations against multiple attack types; (3) extended model that considers both intentional attacks and random faults (e.g., reliability failures that occur at random) and novel algorithms for finding optimal detection thresholds in the presence of both attacks and random faults; (4) comprehensive numerical results based on real-world data and simulations, which study multiple attack types, random faults, sensitivity analysis, etc.

  2. We assume that \(\delta (\eta , \lambda )\) is left continuous to ensure that the optimal thresholds exist (see Definition 3). Without this assumption, the loss \(\mathcal {L}(\varvec{\eta }, k_a, \lambda )\) would have an infimum but not necessarily a minimum. Similarly, the maximum thresholds in Eq. (4) would not necessarily exist. Since these phenomena have virtually no practical relevance (in practice, values will typically be represented as floating-point numbers with limited precision), we introduce the mild assumption of left continuity for ease of presentation.

  3. Note that combined loss could be defined as a general linear combination of faults and attacks, i.e., \(\mathcal {P}_C = \alpha _F \cdot \mathcal {P}_F + \alpha \cdot \mathcal {P}\), where \(\alpha _F\) and \(\alpha \) are arbitrary constants. Our results can be extended trivially to cover this more general formulation by simply scaling the constants in our model up or down. For ease of presentation, we consider combined loss to be the average of faults and attacks.

  4. Studies on the response of water quality sensors to chemical and biological loads have shown that free chlorine, total organic carbon (TOC), electrical conductivity, and chloride are among the most reactive parameters to water contaminants [14].

  5. Note that in practice, \(\infty \) can be represented by a sufficiently high natural number.

  6. Note that in Algorithm 1, we store the minimizing values \(\eta ^*(n, \varvec{m})\) for every n and \(\varvec{m}\) when iterating backwards, thereby decreasing running time and simplifying the presentation of our algorithm.

References

  1. Alippi, C., & Roveri, M. (2006). An adaptive CUSUM-based test for signal change detection. In: Proceedings of the 2006 IEEE ISCAS, pp. 5752–5755

  2. Alpcan, T., & Basar, T. (2003). A game theoretic approach to decision and analysis in network intrusion detection. In: Proceedings of the 42nd IEEE Conference on Decision and Control (CDC), IEEE, vol 3, pp. 2595–2600

  3. Alpcan, T., & Başar, T. (2004). A game theoretic analysis of intrusion detection in access control systems. In: Proceedings of the 43rd IEEE Conference on Decision and Control (CDC), IEEE, vol 2, pp. 1568–1573

  4. Arad, J., et al. (2013). A dynamic thresholds scheme for contaminant event detection in water distribution systems. Water Research, 47, 1899–1908.

  5. Basseville, M., & Nikiforov, I. V. (1993). Detection of abrupt changes: Theory and application (Vol. 104). Englewood Cliffs: Prentice Hall.

  6. CANARY. (2010). Canary: A water quality event detection tool. http://waterdata.usgs.gov/nwis/, [Online; Accessed October 20, 2016]

  7. Chandola, V., Banerjee, A., & Kumar, V. (2009). Anomaly detection: A survey. ACM Computing Surveys (CSUR), 41(3), 15.

  8. Deng, Y., Jiang, W., & Sadiq, R. (2011). Modeling contaminant intrusion in water distribution networks: A new similarity-based DST method. Expert Systems with Applications, 38(1), 571–578.

  9. Di Nardo, A., et al. (2013). Water network protection from intentional contamination by sectorization. Water Resources Management, 27(6), 1837–1850.

  10. Estiri, M., & Khademzadeh, A. (2010). A theoretical signaling game model for intrusion detection in wireless sensor networks. In: Proceedings of the 14th NETWORKS, IEEE, pp. 1–6

  11. Ghafouri, A., Abbas, W., Laszka, A., Vorobeychik, Y., & Koutsoukos, X. (2016). Optimal thresholds for anomaly-based intrusion detection in dynamical environments. In: Proceedings of the 7th Conference on Decision and Game Theory for Security (GameSec), pp. 415–434

  12. Gibbons, R. D. (1999). Use of combined Shewhart-CUSUM control charts for ground water monitoring applications. Ground Water, 37(5), 682–691.

  13. Gleick, P. H. (2006). Water and terrorism. Water Policy, 8(6), 481–503.

  14. Hall, J., et al. (2007). On-line water quality parameters as indicators of distribution system contamination. Journal—American Water Works Association, 99(1), 66–77.

  15. Hart, D., et al. (2007). CANARY: A water quality event detection algorithm development tool. In: Proceedings of the World Environmental and Water Resources Congress

  16. Klise, K.A., & McKenna, S.A. (2006). Water quality change detection: Multivariate algorithms. In: Proceedings of the International Society for Optical Engineering, Defense and Security Symposium, International Society for Optics and Photonics

  17. Korzhyk, D., Yin, Z., Kiekintveld, C., Conitzer, V., & Tambe, M. (2011). Stackelberg vs. Nash in security games: An extended investigation of interchangeability, equivalence, and uniqueness. Journal of Artificial Intelligence Research, 41, 297–327.

  18. Laszka, A., Johnson, B., & Grossklags, J. (2013). Mitigating covert compromises: A game-theoretic model of targeted and non-targeted covert attacks. In: Proceedings of the 9th Conference on Web and Internet Economics (WINE), pp 319–332

  19. Luo, Y., Li, Z., & Wang, Z. (2009). Adaptive CUSUM control chart with variable sampling intervals. Computational Statistics & Data Analysis, 53, 2693–2701.

  20. Mac Nally, R., & Hart, B. (1997). Use of CUSUM methods for water-quality monitoring in storages. Environmental Science & Technology, 31(7), 2114–2119.

  21. Mayer, P.W., et al. (1999). Residential end uses of water

  22. McKenna, S. A., Wilson, M., & Klise, K. A. (2008). Detecting changes in water quality data. Journal—American Water Works Association, 100(1), 74.

  23. Page, E. (1954). Continuous inspection schemes. Biometrika, 41(1/2), 100–115.

  24. Paruchuri, P., Pearce, J.P., Marecki, J., Tambe, M., Ordonez, F., & Kraus, S. (2008). Playing games for security: An efficient exact algorithm for solving Bayesian Stackelberg games. In: International Conference on Autonomous Agents and Multiagent Systems, International Foundation for Autonomous Agents and Multiagent Systems, pp. 895–902

  25. Patcha, A., & Park, J.M. (2004). A game theoretic approach to modeling intrusion detection in mobile ad hoc networks. In: Proceedings of the 5th Annual IEEE SMC Information Assurance Workshop, IEEE, pp. 280–284

  26. Pawlick, J., Farhang, S., & Zhu, Q. (2015). Flip the cloud: Cyber-physical signaling games in the presence of advanced persistent threats. In: Proceedings of the 6th International Conference on Decision and Game Theory for Security (GameSec), Springer, pp. 289–308

  27. Pedregosa, F., et al. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12, 2825–2830.

  28. Perelman, L., et al. (2012). Event detection in water distribution systems from multivariate water quality time series. Environmental Science & Technology, 46, 8212–8219.

  29. Shen, S., Li, Y., Xu, H., & Cao, Q. (2011). Signaling game based strategy of intrusion detection in wireless sensor networks. Computers & Mathematics with Applications, 62(6), 2404–2416.

  30. Tambe, M. (Ed.). (2011). Security and game theory: Algorithms, deployed systems, lessons learned. Cambridge: Cambridge University Press.

  31. Urbina, D.I., et al. (2016). Limiting the impact of stealthy attacks on industrial control systems. In: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, ACM, pp. 1092–1105

  32. Van Dijk, M., et al. (2013). FlipIt: The game of “stealthy takeover”. Journal of Cryptology, 26(4), 655–713.

  33. Verdier, G., et al. (2008). Adaptive threshold computation for CUSUM-type procedures in change detection and isolation problems. Computational Statistics & Data Analysis, 52(9), 4161–4174.

Funding

Funding was provided by National Science Foundation (Grant No. CNS-1238959).


Corresponding author

Correspondence to Xenofon Koutsoukos.


Appendix

1.1 Proof of Theorem 1

Proof

Given an instance of the Stackelberg game, let \(\varvec{\eta }\) be optimal thresholds that do not necessarily satisfy the constraint of the theorem. Then, construct thresholds \(\varvec{\eta }^*\) that satisfy the constraint by replacing each \(\eta _k\) with \(\eta _k^* = \max _{\eta \,:\, \delta (\eta , \lambda ) \,\le \, \delta (\eta _k, \lambda )} \eta \). For any attack \((k_a, \lambda )\), the detection delay and hence the expected damage are the same for \(\varvec{\eta }\) and \(\varvec{\eta }^*\). Consequently, the damage caused by best-response attacks must also be the same for \(\varvec{\eta }\) and \(\varvec{\eta }^*\). Further, the defender’s costs for \(\varvec{\eta }\) are greater than or equal to those for \(\varvec{\eta }^*\) since (1) for every k, \(\eta _k \le \eta _k^*\) and \(F\!P\) is decreasing, and (2) the number of threshold changes in \(\varvec{\eta }\) is greater than or equal to that in \(\varvec{\eta }^*\). Therefore, \(\varvec{\eta }^*\) is optimal, which concludes our proof. \(\square \)
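The replacement step in the proof above can be sketched in code; the following is a minimal Python illustration in which the threshold set `E`, the delay function `delta`, and the attack type `lam` are hypothetical stand-ins for the model's ingredients, not the paper's data:

```python
# Sketch of the threshold-maximization step in the proof of Theorem 1:
# each eta_k is replaced by the largest threshold with the same detection
# delay, which preserves all delays (and hence all damages) while weakly
# decreasing the false-positive rate. E, delta, and lam are assumptions.

def maximize_thresholds(eta, delta, E, lam):
    """Replace each eta_k with max{e in E : delta(e, lam) <= delta(eta_k, lam)}."""
    return [max(e for e in E if delta(e, lam) <= delta(eta_k, lam))
            for eta_k in eta]

# Toy example: the delay is a step function of the threshold.
E = [0.1, 0.2, 0.3, 0.4, 0.5]
delta = lambda eta, lam: 1 if eta <= 0.3 else 2
print(maximize_thresholds([0.1, 0.4], delta, E, lam=0))  # -> [0.3, 0.5]
```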

1.2 Proof of Lemma 1

Proof

We assume that we are given a damage bound P, and we have to find thresholds that minimize the total cost of false positives and threshold changes, subject to the constraint that any attack against these thresholds will result in at most P damage. In order to solve this problem, we use a dynamic-programming algorithm. We will first discuss the algorithm without a cost for changing thresholds, and then show how to extend it to consider costly threshold changes.

We let \(\varDelta ^{|\varLambda |}\) denote the Cartesian power \(\underbrace{\varDelta \times \varDelta \times \cdots \times \varDelta }_{|\varLambda |}\) of the set \(\varDelta \). For any two variables \(n \in \{1, \ldots , T\} \) and \(\varvec{m}\in \varDelta ^{|\varLambda |}\) such that \(\forall \lambda \in \varLambda : \, 0 \le m_\lambda < n\), we define \( \textsc {Cost} (n, \varvec{m})\) to be the minimum cost of false positives from n to T subject to the damage bound P, given that attacks of type \(\lambda \) can start at \(k_a \in \{n-m_\lambda , \ldots ,T\}\) and they are not detected prior to n. Formally, we can define \( \textsc {Cost} (n, \varvec{m})\) as

$$\begin{aligned} \min _{(\eta _n, \ldots , \eta _T)} \sum _{k = n}^T C_f \cdot F\!P(\eta _k) \end{aligned}$$
(16)

subject to

$$\begin{aligned}&\forall \lambda \in \varLambda , k_a \in \{n-m_\lambda , \ldots ,T\}:\nonumber \\&\quad \sum _{k = k_a}^{\displaystyle \min _{i \,:\, i \ge n \, \wedge \, \delta (\eta _i, \lambda ) \le i - k_a} i } \mathcal {D}(k, \lambda ) \le P . \end{aligned}$$
(17)

If there are no thresholds that satisfy the damage bound P under these conditions, we let \( \textsc {Cost} (n, \varvec{m})\) be \(\infty \).Footnote 5

We can recursively compute \( \textsc {Cost} (n, \varvec{m})\) as follows. Firstly, for any n and \(\varvec{m}\), if there exists an attack type \(\lambda \) such that \(\sum _{k = n - m_\lambda }^n \mathcal {D}(k, \lambda ) > P\), then an attack of type \(\lambda \) starting at time \(n - m_\lambda \) will cause greater than P damage, regardless of the thresholds \(\eta _n, \ldots , \eta _T\). Consequently, in this case, we can immediately set \( \textsc {Cost} (n, \varvec{m})\) to \(\infty \).

Otherwise, we iterate over all possible threshold values \(\eta \in E\), and choose the one that minimizes the cost \( \textsc {Cost} (n, \varvec{m})\). For any threshold \(\eta \), we can compute the resulting cost as follows. If \(\delta (\eta ,\lambda ) > m_\lambda \), then no attack of type \(\lambda \) would be detected at time n, so we would have to increase \(m_\lambda \) for the next timestep \(n + 1\). On the other hand, if \(\delta (\eta ,\lambda ) \le m_\lambda \), then attacks starting at time \(n - \delta (\eta ,\lambda )\) or earlier would be detected at time n, so we would have to decrease \(m_\lambda \) to \(\delta (\eta ,\lambda )\) for the next timestep \(n + 1\). Hence, if we selected threshold \(\eta \) for timestep n, then we would have to update \(\varvec{m}\) to \(\langle \min \{\delta (\eta , \lambda ), m_\lambda + 1\}\rangle _{\lambda \in \varLambda }\) for the next timestep. Therefore, if we selected threshold \(\eta \) for timestep n, then the attained cost would be the sum of the cost \(C_f \cdot F\!P(\eta )\) for timestep n and the best possible cost \( \textsc {Cost} (n + 1, \langle \min \{\delta (\eta , \lambda ), m_\lambda + 1\}\rangle _{\lambda \in \varLambda })\) for the remaining timesteps. By combining this formula with the rule for assigning infinite cost, we can compute \( \textsc {Cost} (n, \varvec{m})\) as

$$\begin{aligned}&\textsc {Cost} (n, \varvec{m})= {\left\{ \begin{array}{ll} \infty &{}\quad \text {if}\quad \! \bigvee _{\lambda \in \varLambda }\! \sum _{k=n - m_\lambda }^n \mathcal {D}(k, \lambda ) > P, \\ \begin{matrix} \min _{\eta } \textsc {Cost} (n+1, \langle \min \{\delta (\eta , \lambda ), \\ m_\lambda + 1\} \rangle _{\lambda \in \varLambda }) + C_f \cdot F\!P(\eta ) \end{matrix}&\quad \text {otherwise.} \end{array}\right. } \end{aligned}$$
(18)

Note that in the equation above, \( \textsc {Cost} (n,\varvec{m})\) does not depend on \(\eta _1, \ldots , \eta _{n-1}\); it depends only on the feasible thresholds for the subsequent timesteps. Therefore, starting from the last timestep T and iterating backwards, we can compute \( \textsc {Cost} (n, \varvec{m})\) for all timesteps n and all values \(\varvec{m}\). Finally, for \(n = T\) and any \(\varvec{m}\), computing \( \textsc {Cost} (T, \varvec{m})\) is straightforward: if the damage from \(\varvec{m}\) does not exceed the bound P for any attack type \(\lambda \), then \( \textsc {Cost} (T,\varvec{m})= \min _{\eta \in E} C_f \cdot F\!P(\eta )\); otherwise, \( \textsc {Cost} (T, \varvec{m}) = \infty \).

Having found \( \textsc {Cost} (n, \varvec{m})\) for all n and \(\varvec{m}\), by definition, \( \textsc {Cost} (1, \langle 0, \ldots , 0\rangle )\) is the minimum cost of false positives subject to the damage bound P. The minimizing threshold values can be recovered by iterating forward from \(n = 1\) to T and again using Eq. (18). That is, for every n, we select the threshold value \(\eta ^*_n\) that attains the minimum cost \( \textsc {Cost} (n, \varvec{m})\), where \(\varvec{m}\) can easily be computed from the preceding threshold values \(\eta ^*_1, \ldots , \eta ^*_{n-1}\).Footnote 6
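The recursion in Eq. (18) and the backward iteration can be sketched as follows. This is a minimal, self-contained Python illustration: the horizon `T`, threshold set `E`, and the delay, damage, and false-positive functions are all assumed toy values, not the paper's model.

```python
from functools import lru_cache

T = 4                      # time horizon (assumed)
E = [0.2, 0.5]             # candidate thresholds (assumed)
LAMBDAS = [0]              # attack types (assumed)
C_f = 1.0                  # unit cost of false positives
P = 3.0                    # damage bound

FP = lambda eta: 1.0 - eta                        # false-positive rate (decreasing)
delta = lambda eta, lam: 0 if eta <= 0.2 else 2   # detection delay (increasing)
D = lambda k, lam: 1.0                            # per-timestep damage

INF = float('inf')

@lru_cache(maxsize=None)
def cost(n, m):
    # m[i]: attacks of type LAMBDAS[i] may have started up to m[i] steps
    # before n without being detected; if such an attack already exceeds
    # the bound P, no choice of thresholds can help (Eq. 18, first case).
    if any(sum(D(k, lam) for k in range(n - m[i], n + 1)) > P
           for i, lam in enumerate(LAMBDAS)):
        return INF
    if n == T:                                    # base case: last timestep
        return min(C_f * FP(eta) for eta in E)
    best = INF
    for eta in E:                                 # Eq. 18, second case
        nxt = tuple(min(delta(eta, lam), m[i] + 1)
                    for i, lam in enumerate(LAMBDAS))
        best = min(best, C_f * FP(eta) + cost(n + 1, nxt))
    return best

print(cost(1, (0,)))  # -> 2.0
```

Memoization via `lru_cache` plays the role of the backward table; the minimizing thresholds could be recovered by a forward pass exactly as described above.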

Costly threshold changes    Now, we show how to extend the computation of \( \textsc {Cost} \) to consider the cost \(C_d\) of changing the threshold. Let \( \textsc {Cost} (n, \varvec{m}, \eta _{\text {prev}})\) be the minimum cost for timesteps starting from n subject to the same constraints as before but also given that the threshold value in timestep \(n - 1\) (i.e., the previous timestep) is \(\eta _{\text {prev}}\). Then, \( \textsc {Cost} (n, \varvec{m}, \eta _{\text {prev}})\) can be computed similarly to \( \textsc {Cost} (n, \varvec{m})\): for any \(n < T\), iterate over all possible threshold values \(\eta \), and choose the one that results in the lowest cost \( \textsc {Cost} (n, \varvec{m}, \eta _{\text {prev}})\). If \(\eta _{\text {prev}} = \eta \) or if \(n = 1\), then the cost is computed the same way as in the previous case [i.e., similar to Eq. (18)]. Otherwise, the cost also has to include the cost \(C_d\) of changing the threshold. Consequently, we first define

$$\begin{aligned}&S( n, \varvec{m}, \eta _{\text {prev}}, \eta ) \nonumber \\&\quad ={\left\{ \begin{array}{ll} \textsc {Cost} ( n+1, \langle \min \{\delta (\eta , \lambda ) , m_\lambda + 1\}\rangle _{\lambda \in \varLambda }) + C_f \cdot F\!P(\eta ) &{}\quad \text {if}\quad \eta = \eta _{\text {prev}} \vee n = 1, \\ \textsc {Cost} (n+1, \langle \min \{\delta (\eta , \lambda ), m_\lambda + 1\}\rangle _{\lambda \in \varLambda }) + C_f \cdot F\!P(\eta ) + C_d &{}\quad \text {otherwise.} \end{array}\right. } \end{aligned}$$
(19)

Then, similar to Eq. (18), we can express the optimal cost as

$$\begin{aligned} \textsc {Cost} (n, \varvec{m}, \eta _{\text {prev}})= {\left\{ \begin{array}{ll} \infty &{}\quad \text {if} \bigvee _{\lambda \in \varLambda } \sum _{k=n - m_\lambda }^n \mathcal {D}(k, \lambda ) > P , \\ \min _{\eta } S(n, \varvec{m}, \eta _{\text {prev}}, \eta ) &{}\quad \text {otherwise.} \end{array}\right. } \end{aligned}$$
(20)

Note that for \(n = 1\), we do not add the cost \(C_d\) of changing the threshold. Similarly to the previous case, \( \textsc {Cost} (1, \langle 0, \ldots , 0\rangle , \text {arbitrary})\) is the minimum cost subject to the damage bound P, and the minimizing thresholds can be recovered by iterating forward. \(\square \)
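The extended recursion with costly threshold changes (Eqs. 19-20) can be sketched as follows; this self-contained Python illustration is restricted to a single attack type for brevity, and all parameter values are illustrative assumptions:

```python
from functools import lru_cache

T, E, C_f, C_d, P = 4, [0.2, 0.5], 1.0, 0.4, 3.0   # assumed toy parameters
FP = lambda eta: 1.0 - eta                  # false-positive rate
delta = lambda eta: 0 if eta <= 0.2 else 2  # detection delay (one attack type)
D = lambda k: 1.0                           # per-timestep damage
INF = float('inf')

@lru_cache(maxsize=None)
def cost(n, m, eta_prev):
    # Infeasible: an undetected attack that started m steps before n
    # already exceeds the damage bound P.
    if sum(D(k) for k in range(n - m, n + 1)) > P:
        return INF
    best = INF
    for eta in E:
        # Eq. 19: no change cost when the threshold is kept or at n = 1.
        change = 0.0 if (eta == eta_prev or n == 1) else C_d
        step = C_f * FP(eta) + change
        if n == T:
            best = min(best, step)
        else:
            best = min(best, step + cost(n + 1, min(delta(eta), m + 1), eta))
    return best

print(cost(1, 0, None))  # -> 2.0
```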

1.3 Proof of Theorem 2

Proof

For any damage bound P, using the algorithm \( \textsc {MinimumCostThresholds} \) (Algorithm 1), we can find thresholds that minimize the total cost of false positives and threshold changes, which we will denote by TC(P), subject to the constraint that an attack can cause at most P damage. Since the defender’s loss is the sum of its total cost and the damage resulting from a best-response attack, we can find optimal thresholds by solving

$$\begin{aligned} \min _P \; TC(P) + P \end{aligned}$$
(21)

and computing the optimal thresholds \(\varvec{\eta }^*\) for the minimizing \(P^*\) using our dynamic-programming algorithm.

To show that this formulation does indeed solve the problem of finding optimal thresholds, we use indirect proof. For the sake of contradiction, suppose that there exist thresholds \(\varvec{\eta }'\) for which the defender’s loss \(L'\) is lower than the loss \(L^*\) for the solution \(\varvec{\eta }^*\) of the above formulation. Let \(P'\) be the damage resulting from the attacker’s best-response against \(\varvec{\eta }'\), and let \(TC'\) be the defender’s total cost for \(\varvec{\eta }'\). Since the best-response attack against \(\varvec{\eta }'\) achieves at most \(P'\) damage, we have from the definition of TC(P) that \(TC' \ge TC(P')\). It also follows from the definition of TC(P) that \(L^* \le TC(P^*) + P^*\). Combining the above with our supposition \(L^* > L'\), we get

$$\begin{aligned} TC(P^*) + P^* \ge L^* > L' = TC' + P' \ge TC(P') + P' . \end{aligned}$$

However, this is a contradiction since \(P^*\) minimizes \(TC(P) + P\) by definition. Therefore, thresholds \(\varvec{\eta }^*\) must be optimal.

It remains to show that Algorithm 2 finds an optimal damage bound \(P^*\). To this end, we show that \(P^*\) can be found using an exhaustive search over a set whose cardinality is polynomial in the size of the problem instance. Consider the set of damage values resulting from all possible attack scenarios \(k_a \in \{1, \ldots , T\}\), \(\delta \in \varDelta \), \(\lambda \in \varLambda \), that is, the set

$$\begin{aligned} \left\{ \sum _{k = k_a}^{k_a + \delta } \mathcal {D}(k, \lambda ) \, \Big |\, k_a \in \{1, \ldots , T\},\, \delta \in \varDelta , \, \lambda \in \varLambda \right\} . \end{aligned}$$
(22)

Let the elements of this set be denoted by \(P_1, P_2,\ldots \) in increasing order. It is easy to see that for any i, the set of thresholds that satisfy the damage constraint is the same for every damage value \(P \in [P_i, P_{i+1})\). Hence, for any i, the cost TC(P) is the same for every \(P \in [P_i, P_{i+1})\). Therefore, the optimal \(P^*\) must be a damage value \(P_i\) from the above set, which we can find by simply iterating over the set. \(\square \)
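The search described in this proof can be sketched as follows. The per-timestep damage, the delay set, and the stand-in `TC` function below are hypothetical; in practice `TC` would be computed by the dynamic program of Lemma 1 (\(\textsc{MinimumCostThresholds}\)).

```python
T = 4
DELTAS = [0, 2]                # achievable detection delays (assumed)
LAMBDAS = [0]                  # attack types (assumed)
D = lambda k, lam: 1.0         # per-timestep damage (assumed)

def candidate_damages():
    """Damage values P_1 < P_2 < ... realizable by some attack (k_a, delta, lam)."""
    return sorted({sum(D(k, lam) for k in range(ka, ka + d + 1))
                   for ka in range(1, T + 1)
                   for d in DELTAS
                   for lam in LAMBDAS})

def TC(P):                     # hypothetical stand-in for the Lemma 1 DP
    return 6.0 / P             # any non-increasing cost curve illustrates the idea

# Theorem 2: it suffices to minimize TC(P) + P over the candidate set.
best_P = min(candidate_damages(), key=lambda P: TC(P) + P)
print(candidate_damages(), best_P)  # -> [1.0, 3.0] 3.0
```

Since TC(P) is constant on each interval \([P_i, P_{i+1})\), restricting the search to the candidate set loses nothing.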

1.4 Proof of Proposition 1

Proof

In the dynamic-programming algorithm (Algorithm 1), we first compute \( \textsc {Cost} (n, \varvec{m}, \eta _{\text {prev}})\) for every \(n\in \{1,\ldots ,T\}\), \(\varvec{m}\in \varDelta ^{|\varLambda |}\), and previous threshold \(\eta _{\text {prev}}\) (by Theorem 1, it suffices to consider one threshold per detection delay, so \(\eta _{\text {prev}}\) takes \(|\varDelta |\) values), and each computation takes \(\mathcal {O}(|E| \cdot |\varLambda |)\) time. Then, we recover the optimal thresholds for all timesteps \(\{1,\ldots ,T\}\), and the computation for each timestep takes constant time. Consequently, the running time of the dynamic-programming algorithm is \(\mathcal {O}(T \cdot |\varDelta |^{|\varLambda | + 1} \cdot |\varLambda | \cdot |E|)\).

In the exhaustive search, we first enumerate all possible damage values by iterating over all possible attacks \((k_a, \delta , \lambda )\), where \(k_a \in \{1, \ldots , T\}\), \(\delta \in \varDelta \), and \(\lambda \in \varLambda \). Then, for each possible damage value, we execute the dynamic-programming algorithm, which takes \(\mathcal {O}(T \cdot |\varDelta |^{|\varLambda | + 1} \cdot |\varLambda | \cdot |E|)\) time. Consequently, the running time of Algorithm 2 is \(\mathcal {O}(T^2 \cdot |\varDelta |^{|\varLambda | + 2} \cdot |\varLambda |^2 \cdot |E|)\). \(\square \)

1.5 Algorithm 3 and Proof of Proposition 2

(Algorithm 3 appears here as a figure in the published article.)

Proof

The obtained threshold is optimal since the algorithm evaluates all possible solutions through exhaustive search. Given a tuple \((\eta ,k_a,\lambda )\), when computing the attacker’s payoff \(\mathcal {P}(\eta ,k_a,\lambda )\), we use the payoff computed in the previous iteration, which takes constant time. We repeat these steps for each attack type \(\lambda \in \varLambda \). Therefore, the running time of the algorithm is \(\mathcal {O}(T\cdot |E|\cdot |\varLambda |)\). \(\square \)
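One plausible instantiation of the constant-time payoff update in this proof is a sliding-window sum: for a fixed threshold, the damage from an attack starting at \(k_a\) is the damage accrued over the detection-delay window, which can be updated in \(\mathcal{O}(1)\) as \(k_a\) advances. The model ingredients below are assumed for illustration.

```python
T = 6
E = [0.2, 0.5]                                     # candidate fixed thresholds
delta = lambda eta, lam: 1 if eta <= 0.2 else 3    # detection delay (assumed)
D = lambda k, lam: float(k)                        # damage grows over time (assumed)

def best_response_damage(eta, lam):
    """Max over k_a of damage accrued before detection, via a sliding window."""
    w = delta(eta, lam) + 1             # timesteps from attack start to detection
    window = sum(D(k, lam) for k in range(1, min(w, T) + 1))
    best = window
    for ka in range(2, T + 1):          # advance the window start in O(1) steps
        window -= D(ka - 1, lam)
        if ka + w - 1 <= T:
            window += D(ka + w - 1, lam)
        best = max(best, window)
    return best

for eta in E:
    print(eta, best_response_damage(eta, 0))  # -> 0.2 11.0 then 0.5 18.0
```

Running this for every \(\eta \in E\) and \(\lambda \in \varLambda \) gives the claimed \(\mathcal{O}(T\cdot |E|\cdot |\varLambda |)\) total time.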

About this article

Cite this article

Ghafouri, A., Laszka, A., Abbas, W. et al. A game-theoretic approach for selecting optimal time-dependent thresholds for anomaly detection. Auton Agent Multi-Agent Syst 33, 430–456 (2019). https://doi.org/10.1007/s10458-019-09412-2
