Abstract
We analyze the importance of the elements of a complex structure for the availability of the system. The basis for the element assessment is the importance measures for multi-state systems introduced by Birnbaum (in: Krishnaiah (ed) Multivariate analysis II, Academic Press, New York, 1969) and by Barlow and Proschan (Stoch Process Appl 3:153–173, 1975). Availability depends not only on reliability, but also on the difficulty of maintenance, the ability to diagnose the need for service and its efficient implementation. If we assume that the need for maintenance is the result of a disorder, then determining the key elements for detecting the moment of the system's disorder is the basis for assessing the importance of an element for the system maintenance process.
1 Introduction
The purpose of this work is to match mathematical tools and methods that allow one to analyze threats in complex systems which cause their dysfunction. In the existing literature on the mathematical modeling of systems and their analysis, the concept of availability appears. Functionality can also be quantified, so one can speak about a reduction of functionality. The very concept of reliability also admits a subtle treatment. Based on various observations in this area, a mathematical model was created to measure the level of safety associated with the system's operation. One can consider the internal security of the system itself, as well as examine the system's impact on the environment. When we talk about system availability, we mean its ability to meet the expectations of its designers. Keeping such a system ready requires launching an effective diagnostic process and a maintenance plan. The presented considerations aim at analyzing models of system reliability that allow for the rationalization of maintenance procedures (repair, maintenance, diagnostics). We do not model the demand side; we only take limited resources as a paradigm that allows us to decide locally which subsystem is responsible for the increased chance of losing the availability of the system's functionality.
Availability and its control depend on various factors: not only reliability, but also the difficulty of maintenance, the ability to diagnose components which need service and an efficient implementation of repair tasks. If we assume that the need for maintenance is the result of a disorder, then determining the key elements responsible for that event is crucial. Assessing the importance of an element for the system's disorders is very helpful in locating the disorder. The proposed procedure is based on the estimation of the disorder moment for the complex system (cf. Szajowski 2011, 2015). The location of the elements responsible for the disorder is determined on the basis of a responsibility measure (the Barlow and Proschan importance measure).
To implement this goal, we use methods of detecting a change point in the behavior of system elements. This aspect is described in Sect. 2. Significant support in these considerations is provided by the methods of cooperative game theory presented in Appendix A and by the selected antagonistic model with stopping of stochastic processes described in Appendix B. Linking the observations of the sensors with the global objectives of the analyzed system, together with rational guidelines for the sensor center, is discussed in Sect. 4; it is preceded, in Sect. 3, by a discussion of the important analogies between cooperative game theory and structural reliability.
In conclusion, the main objectives of this article are choosing the appropriate moment to service the system [cf. Ramamurthy (1990)] and locating the source of the disorder [cf. Ramamurthy (1990, Chap. 3); Middleton (1968)].
2 Disorder of structure vs. disorder of components
Classic reliability theory deals with life time, or rather with the life time distribution. Applied to a complex system, this approach naturally prompts one to analyze the relationship between the reliability of the components and that of the entire system. Let \(\mathbf{N} =\{1,2,\ldots ,{\mathfrak {n}}\}\) be the indexes of the elements. The mathematical model assumes binary elements and systems, i.e. the possible states are “functional” or “damaged”. It is universal in the sense that it uses functions defined on the state space of the elements \({{\mathbb {B}}}^n\) and the state space \({\mathbb {B}}\) of the system, where \({{\mathbb {B}}}= \{0,1\}\) and \( n=|\mathbf{N} |\) is the number of components of the system. The analysis of the change of the reliability state is the study of changes in the distribution of the life time, or rather of the remaining life time. It consists in detecting events that are not observable, but detectable on the basis of symptoms, i.e. secondary phenomena whose cause-effect relationship with the life time is known. Establishing these relationships involves determining the deterministic dependence between the reliability of the elements and that of the structure. Because the states of the elements are described by binary vectors, and the state of the system is also binary, structure models are binary functions on the state space [cf. Moretti and Patrone (2008)]. The state of the system and of the components is considered at a fixed moment of time. It is assumed that the state of the system depends on the states of the components only. We distinguish two states only: a component is either disordered or not. The state of the system takes such dichotomous values as well. A similar approach is proposed for reliability analysis by Mine (1959). We will analyze the importance of the elements of a complex structure for the disorder (availability) of the system. The basis for the structural element assessment is the importance measures for multi-component systems introduced by Birnbaum (1969).
Barlow and Proschan (1975) extended the investigation on the reliability element assessment by the importance measures taking into account the reliability of the elements.
Definition 1
(Structure function) Let us consider the dichotomous elements having states from the set \({{\mathbb {B}}}=\{0,1\}\). The structure function is \(f: {{\mathbb {B}}}^n\rightarrow {{\mathbb {B}}}\).
To indicate the state of the \({\mathfrak {i}}\)th component we assign a binary indicator variable \(x_{\mathfrak {i}}\) to component \({\mathfrak {i}}\). There are two states of the component: \(\mathbf{F} =``\text {Failed}''\) or \(\mathbf{W} =``\text {Working}''\). We have \(x_{\mathfrak {i}}={{\mathbb {I}}}_\mathbf{W }(s)\), where \(s\in \{\mathbf {W},\mathbf {F}\}\) is the current state of the component. The state of the structure is the value of the structure function on the states of the components. For any \(\mathbf {x}\in {\mathbb {B}}^n\) and \(b\in {\mathbb {B}}\) denote \((b_{\mathfrak {i}},\mathbf {x}_{-{\mathfrak {i}}})=(x_1,\ldots ,x_{{\mathfrak {i}}-1},b,x_{{\mathfrak {i}}+1},\ldots ,x_n)\).
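To fix ideas, here is a minimal Python sketch of a structure function on binary component states; the 2-out-of-3 system is an illustrative choice, not one taken from the paper.

```python
# A 2-out-of-3 coherent structure function f: {0,1}^3 -> {0,1}.
# x is a tuple of binary indicator variables x_i (1 = working, 0 = failed).

def f_2oo3(x):
    """The system works iff at least 2 of its 3 components work."""
    return 1 if sum(x) >= 2 else 0

print(f_2oo3((1, 1, 0)))  # 1: two components working, system works
print(f_2oo3((1, 0, 0)))  # 0: only one component working, system fails
```

Any coherent structure can be encoded the same way as a monotone 0/1-valued function of the state vector.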
Definition 2
(Irrelevant component) Let f be a structure on \(\mathbf{N} \) and \({\mathfrak {i}}\in \mathbf{N} {}\). The element \({\mathfrak {i}}\) is irrelevant to the structure f if \(f(1,\mathbf {x}_{-i})=f(0,\mathbf {x}_{-i})\) for all \(\mathbf {x}_{-{\mathfrak {i}}}\in {\mathbb {B}}^{n-1}\).
Let g and h be two structures on \(\mathbf{N} \). The linear composition of these two structures is a structure f on \(\mathbf{N} \cup \{{\mathfrak {n}}+1\}\) defined on \({\mathbb {B}}^{{\mathfrak {n}}+1}\) by \(f(\mathbf {x},x_{{\mathfrak {n}}+1})=x_{{\mathfrak {n}}+1}g(\mathbf {x})+(1-x_{{\mathfrak {n}}+1})h(\mathbf {x})\).
Let f be a structure on \(\mathbf{N} \). Its dual \(f^D\) is another structure on \(\mathbf{N} \) defined on \({\mathbb {B}}^n\) by \(f^D(\mathbf {x})=1-f(\mathbf {1}-\mathbf {x})\), where \(\mathbf {1}=(1,1,\ldots ,1)\).
Definition 3
(Monotone structure function) Let f be a structure on \(\mathbf{N} \) and \({\mathbb {B}}^n\) the vector space with the partial order “\(\le \)”.Footnote 1 The structure function f is monotone if for any \(\mathbf {x},\mathbf {y}\in {\mathbb {B}}^n\) such that \(\mathbf {x}\le \mathbf {y}\) we have \(f(\mathbf {x})\le f(\mathbf {y})\).
Definition 4
(Coherent structure (function)) A monotone structure f on \(\mathbf{N} \) is called semi-coherent if \(f(\mathbf {0})=0\) and \(f(\mathbf {1})=1\). A semi-coherent structure (function) f is called coherent if all components in \(\mathbf{N} \) are relevant to f.
Let us denote by \({\mathcal {N}}=\{{\mathfrak {J}}:{\mathfrak {J}}\subset \mathbf{N} \}\) the family of all subsets of \(\mathbf{N} \). The elements of \({\mathcal {N}}\) are called subsystems. The vector \(\mathbf {x}^{\mathfrak {J}}\in {\mathbb {B}}^{|{\mathfrak {J}}|}\) represents the states of the components in the set \({\mathfrak {J}}\). For \({\mathfrak {A}},{\mathfrak {B}},{\mathfrak {C}}\in {\mathcal {N}}\), a disjoint decomposition of \(\mathbf{N} \), the vector \((\mathbf {1}^{\mathfrak {A}},\mathbf {0}^{\mathfrak {B}},\mathbf {x}^{\mathfrak {C}})\in {\mathbb {B}}^n\), with elements arranged in the proper order, represents the situation where all the components in the subsystem \({\mathfrak {A}}\) (\({\mathfrak {B}}\)) are in the working (failed) state and the states of the components in \({\mathfrak {C}}\in {\mathcal {N}}\) are as specified by the binary vector \(\mathbf {x}^{\mathfrak {C}}\).
Let f be a structure on \(\mathbf{N} \), \({\mathfrak {A}}\subset \mathbf{N} \) and \({\mathfrak {J}}=\mathbf{N} \setminus {\mathfrak {A}}\). The subset \({\mathfrak {A}}\) of \(\mathbf{N} \) is called a path (cut) set of the structure f if \(f(\mathbf {1}^{\mathfrak {A}},\mathbf {0}^{\mathfrak {J}})=1\) (\(f(\mathbf {0}^{\mathfrak {A}},\mathbf {1}^{\mathfrak {J}})=0\)). A path (cut) set \({\mathfrak {C}}\) of f is called a minimal path (cut) set of f if no proper subset \({\mathfrak {A}}\subsetneq {\mathfrak {C}}\) is a path (cut) set of f.
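The definitions above can be checked mechanically. The following Python sketch enumerates minimal path and cut sets of a small structure by brute force; the 2-out-of-3 structure is again an illustrative assumption.

```python
from itertools import combinations

def minimal_path_sets(f, n):
    # A is a path set if f(1^A, 0^rest) = 1; minimal if no proper subset is one.
    def state(A):
        return tuple(1 if i in A else 0 for i in range(n))
    paths = [frozenset(A) for r in range(n + 1)
             for A in combinations(range(n), r) if f(state(A)) == 1]
    return [P for P in paths if not any(Q < P for Q in paths)]

def minimal_cut_sets(f, n):
    # A is a cut set if f(0^A, 1^rest) = 0; minimal if no proper subset is one.
    def state(A):
        return tuple(0 if i in A else 1 for i in range(n))
    cuts = [frozenset(A) for r in range(n + 1)
            for A in combinations(range(n), r) if f(state(A)) == 0]
    return [C for C in cuts if not any(Q < C for Q in cuts)]

f = lambda x: 1 if sum(x) >= 2 else 0   # 2-out-of-3 structure
print(sorted(map(sorted, minimal_path_sets(f, 3))))  # [[0, 1], [0, 2], [1, 2]]
print(sorted(map(sorted, minimal_cut_sets(f, 3))))   # [[0, 1], [0, 2], [1, 2]]
```

For the 2-out-of-3 structure every pair of components is both a minimal path set and a minimal cut set, which matches the symmetry of the structure.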
3 Structure vs. simple game
Selected structure components can be treated as players in a cooperative game. Their role in the structure is reduced to monitoring an assigned area. The smooth operation of the monitored subsystem requires communicating one of two messages to the management center of the entire system: there are no operational problems in the supervised area, or the observed area has ceased to perform its functions. A complex system observed in this way from selected elements can, for our purposes, be treated as a set of players with a common goal. Their purpose is to detect threats and signal the observed anomaly to the management center in such a way that there are not too many false alarms, while the detection of critical anomalies remains effective. One of the elements leading to the description of such a system is determining how to reward the observers, so that correct and rapid detection of a disturbance is rewarded, but excessive sensitivity and frequent overinterpretation of signals are punished. These premises essentially create an antagonistic game with elements of cooperation [cf. Carpente et al. (2005)]. In dynamic systems, be they social or technical, the given goals are implemented by appropriately selected strategies (controls). The person deciding to signal the risk de facto decides to launch an examination or a repair procedure. If the goal is to determine the point of change (deregulation), then Markov moments are the natural strategies. In the next stages of the model construction, methods of system description and change point detection are combined. The elements of observer cooperation are modeled by cooperative games, including simple games.
Roughly speaking, a complex system observed from selected elements can be treated as a set of players with a common goal: detecting the threat by signaling the observed anomaly so that there are no false alarms and the detection is effective. In addition, each observer is rewarded for correct and quick detection. Elements of the observer cooperation are modeled by simple game methods.
3.1 Voting decision
Suppose selected places in the system are closely observed. At each such point we collect information, and in the end we also want to be able to assess the significance of this information and make decisions about sending warning signals. The decision on the state of the entire system is made in the center, where the signals received from the individual points are synthesized into one decision, an assessment of whether the system as a whole is operational or does not perform its assumed functions. The specificity of the analyzed system and its importance require that detection be prompt, but without false alarms. The structure of the collective decision is based on democratic principles, i.e. the signals sent from the observation centers are treated as votes from experts, and the rules for taking these votes into account are governed by the rules of simple games (see Appendix A).
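The fusion of sensor votes into one system decision can be sketched as a simple game; a weighted majority rule is only one possible choice of the winning-coalition family, assumed here for illustration.

```python
# Hypothetical sketch: fusing binary sensor alarms through a simple game.
# A coalition of alarming sensors is "winning" (triggers the system alarm)
# iff its total weight reaches the quota (a weighted majority rule).

def weighted_majority_alarm(signals, weights, quota):
    """signals[i] = 1 if sensor i has raised an alarm."""
    return sum(w for s, w in zip(signals, weights) if s == 1) >= quota

print(weighted_majority_alarm([1, 0, 1], weights=[2, 1, 2], quota=3))  # True
print(weighted_majority_alarm([0, 1, 0], weights=[2, 1, 2], quota=3))  # False
```

Any monotone simple game (equivalently, any semi-coherent structure on the sensors) can replace the weighted majority rule without changing the rest of the scheme.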
Let us describe the complex system monitored by sensors. There are various signals in the system which model the state of, or information about the state of, the nodes. Let us describe them by a process \(\{\overrightarrow{X}_n,n\in \overline{{\mathbb {N}}}\}\), \(\overline{{\mathbb {N}}}=\{0\}\cup {{\mathbb {N}}}\), defined on \((\varOmega ,{{\mathcal {F}}},\mathbf{P})\). The process is observed sequentially and delivers knowledge about the state of each sensor; e.g. the \({\mathfrak {r}}\)th sensor gets some of the coordinates of the vector \(\overrightarrow{X}_n\) at moment n. For further analysis it is assumed that the processes observed at the nodes have the Markov structure given the random moments \(\theta _r\)—the moments at which the transition probabilities change. Different transition probabilities correspond to different states of the analyzed area. When constructing a mathematical model of a complex system, we determine what the desired dynamics of the observation are and what is anomalous. We assume that the change of the system dynamics from expected to undesirable occurs at an instant that is unknown; we only know its typical probabilistic properties. The goal is to determine when the system as a whole is disordered. The system state is described by the structure function on the basis of the states of the individual subsystems. However, the true state of a subsystem is not known directly; only the values of the observable components of the vector \( \overrightarrow{X}_{n} \) are read (observed) by the sensors. Each sensor is responsible for measuring certain components. The sensor's secondary, integral function is to evaluate the observation and send a signal—the decision on the conjectured state of the subsystem in the area of direct sensor supervision. An important subject of the system analysis is the rational way of transforming the observations available to a sensor into its dichotomous decision.
The principles of the construction of models with a change (disorder) of signals indicating changes in the studied area, which we use in this approach, are known from the works of Shiryaev (1961).Footnote 2 Various modifications and generalizations of the problem formulated in this way are the subject of the works of Brodsky and Darkhovsky (1993), Bojdecki (1979), Yoshida (1983) and Szajowski (1992). The detection of disorders with given precision [cf. Sarnowski and Szajowski (2011)] is the most appropriate approach for the problem under consideration.
The formulation of the model needs a filtration (the aggregated knowledge about the system history) and an a priori distribution of the disorders. \(\{\overrightarrow{X}_n\}_{n \in {{\mathbb {N}}}}\) are consistent with the filtration \({\mathcal {F}}_n\) and the vectors \(\overrightarrow{X}_n:\varOmega \rightarrow {{\mathbb {E}}}\), where \({{\mathbb {E}}}\subset \mathfrak {R}^m\). On \((\varOmega ,{{\mathcal {F}}},\mathbf{P})\) there are random variables \(\{\theta _r\}_{r=1}^m\) which have zero-inflated geometric distributions (further, \({\varvec{\pi }}:=(\pi _1,\ldots ,\pi _m)\) and \(\mathbf {p}=(p_1,\ldots ,p_m)\) denote the parameters of the prior distributions of the disorder moments): \(\mathbf{P}(\theta _r=0)=\pi _r\), \(\mathbf{P}(\theta _r=j)=(1-\pi _r)p_r^{j-1}(1-p_r)\), where \(\pi _r,\; p_r \in (0,1)\), \(j=1,2,\ldots \). Further, we refer to this distribution as \({\varvec{\pi }}_r\). The disorder moments at various places of the system are modeled by a multidimensional distribution, which is the subject of discussion in Sect. 3.4.
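A zero-inflated geometric prior for a disorder moment can be sketched numerically; the parameterization below (mass \(\pi \) at zero, geometric tail with parameter p) is one common convention and is assumed here, since the text does not fix it explicitly.

```python
# Sketch of a zero-inflated geometric prior for the disorder moment theta_r,
# in an assumed parameterization:
#   P(theta = 0) = pi,  P(theta = j) = (1 - pi) * p**(j-1) * (1 - p),  j >= 1.

def zig_pmf(j, pi, p):
    if j == 0:
        return pi
    return (1.0 - pi) * p ** (j - 1) * (1.0 - p)

total = sum(zig_pmf(j, 0.1, 0.8) for j in range(2000))
print(abs(total - 1.0) < 1e-9)  # True: the pmf sums to 1
```

The atom at zero models a disorder already present at the start of the observation, while the geometric tail models a memoryless waiting time for the change.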
Sensor \({\mathfrak {r}}\) follows a process which is based on switching between two time-homogeneous and independent Markov processes \(\{X_{{\mathfrak {r}}\;n}^i\}_{n \in {{\mathbb {N}}}}\), \(i=0,1\), \({\mathfrak {r}}\in \mathbf{N} \), with the state space \(({{\mathbb {E}}}, {\mathcal {B}})\), both independent of \(\{\theta _r\}_{r=1}^m\). The number of sensors and disorder moments is usually smaller than the dimension of the observed signals. To simplify the description, suppose further that \( m = | \mathbf{N} |\).
Assumption 1
It is assumed that the processes \(\{X_{{\mathfrak {r}}\;n}^i\}_{n \in \overline{{\mathbb {N}}}}\) have transition densities with respect to the \(\sigma \)-finite measure \(\mu \), i.e., for any \(B\in {{\mathcal {B}}}\) we have
The random processes \(\{X_{r\;n}\}\), \(\{X_{r\;n}^0\}\), \(\{X_{r\;n}^1\}\) and the random variables \(\theta _r\) are connected via the following rule: conditionally on \(\theta _r = k\), \(X_{r\;n}=X^0_{r\;n}\) for \(n<k\) and \(X_{r\;n}=X^1_{r\;n}\) for \(n\ge k\),
where \(\{X_{r\;n}^1\}\) is started from \(X_{r\;k-1}^0\) (but is otherwise independent of \(X_{r\;\cdot }^0\)).
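The switching rule can be illustrated by a short simulation; the two-state chains and the kernels P0, P1 below are illustrative assumptions, not the paper's model.

```python
import random

# Illustrative sketch: simulate a sensor signal that follows transition
# kernel P0 before the disorder moment theta and P1 from theta onwards.

P0 = {0: [0.9, 0.1], 1: [0.8, 0.2]}   # pre-disorder transition probabilities
P1 = {0: [0.2, 0.8], 1: [0.1, 0.9]}   # post-disorder transition probabilities

def simulate(theta, horizon, x0=0, seed=7):
    rng = random.Random(seed)
    x, path = x0, [x0]
    for n in range(1, horizon + 1):
        kernel = P0 if n < theta else P1   # change of dynamics at theta
        x = 1 if rng.random() < kernel[x][1] else 0
        path.append(x)
    return path

path = simulate(theta=10, horizon=20)
print(len(path))  # 21 states: x_0, ..., x_20
```

Before \(\theta \) the simulated signal tends to stay at 0; after \(\theta \) it tends to stay at 1, which is the kind of distributional change the detection procedure must pick up.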
3.2 Detection of disorder at node \({\mathfrak {r}}\)
The formulation of the problem of disorder detection (the sequential, on-line detection of the distribution change) which is the subject of the analysis in this article assumes that the a priori distribution of the moment of change is known. In Shiryaev's classification [see Shiryaev (2019), Chap. 1] it is the G-model. Let \({{\mathscr {S}}}^X\) denote the set of all stopping times with respect to the filtration \(\{{{\mathcal {F}}}_n\}_{n \in \overline{{\mathbb {N}}}}\). For any \(x\in {{\mathbb {E}}}\), \(\pi _r,\; p_r\in [0,1]\), \(c\in \mathfrak {R}_{+}\) and \(\tau _r\in {{\mathscr {S}}}^X\) the associated risk is defined as follows
where \(\mathbf{P}^{{\varvec{\pi }}_r}(\tau _r<\theta _r+d)\) is the probability of a false alarm with delay d and \(\mathbf{E}^{{\varvec{\pi }}_r}\max \{\tau _r-\theta _r,0\}\) is the average delay of detecting the occurrence of the disruption. In Sarnowski and Szajowski (2011) the construction of \(\tau _r^{*}\) is shown. It is done by transforming the problem of disorder detection into the optimal stopping problem for a Markov process which combines the observation of the state and the posterior distribution process (cf. Sect. 3.3). Following Ochman-Gozdek et al. (2017), the sufficient statistic for the estimation of the disorder at each sensor separately is presented. When the admitted delay of the false alarm is d, the inference at moment n is based on the last \(d+2\) observations of the state and the posterior process \(\varPi _{{\mathfrak {r}}\; n}\).
3.3 Relevant reformulated issues at sensors
The process \(\overrightarrow{\xi }_{rn}= (\underline{\overrightarrow{X}}_{r\;n-1-d,n},\varPi _{r\;n})\) has components
and \(\{\varPi _{rn}\}_{n\in \overline{{\mathbb {N}}}}\)—the posterior process:
The posterior process carries the information about the distribution of the disorder instant \(\theta _r\). The Bayesian estimation of the disorder moment \(\theta _r\), which is related to the minimization of the risk (3), is equivalent to the optimal stopping (OS) problem for the process \(\{\overrightarrow{\xi }_{rn}\}_{n\in {{\mathbb {N}}}}\) and the posterior process \(\{\varPi _{r\;n}\}_{n\in {{\mathbb {N}}}}\) with the expected payoff function \(\rho _r(x,\pi _r,\tau _r)=\mathbf{E}_{x,\pi _r}h_r(\overrightarrow{\xi }_{r\;\tau _r},{{\varvec{\Pi }}}_{r\;\tau _r})\) for sensor r. The details of the transformations are shown in (Shiryaev 2019, Sec. 2.2) and in the appendices of Sarnowski and Szajowski (2011) and Ochman-Gozdek et al. (2017). In further considerations we assume no delay, i.e. \(d = 0\).
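The mechanics of the posterior process can be illustrated by a simplified sketch for i.i.d. observations with known pre- and post-change densities; the paper's Markov case replaces the densities by transition densities, and the Gaussian densities and parameter values below are assumptions made only for illustration.

```python
from math import exp, pi as PI, sqrt

# Simplified sketch of the posterior (Shiryaev-type) update Pi_n for i.i.d.
# observations with pre-change density f0 and post-change density f1.

def posterior_update(pi_n, x, f0, f1, p):
    """One Bayes step; p is the geometric 'no change yet' parameter."""
    predicted = pi_n + (1.0 - pi_n) * (1.0 - p)   # P(theta <= n+1 | F_n)
    num = predicted * f1(x)
    den = num + (1.0 - predicted) * f0(x)
    return num / den

f0 = lambda x: exp(-x * x / 2) / sqrt(2 * PI)          # N(0,1) density
f1 = lambda x: exp(-(x - 2) ** 2 / 2) / sqrt(2 * PI)   # N(2,1) density

pi_hat = 0.1
for x in [2.1, 1.8, 2.4]:        # observations that look post-change
    pi_hat = posterior_update(pi_hat, x, f0, f1, p=0.9)
print(pi_hat > 0.9)  # True: posterior mass on "disorder has occurred" grows
```

Stopping when the posterior crosses a threshold is the qualitative shape of the optimal rule in this class of problems.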
Each sensor alone looks for the stopping time \(\tau _r^{*}\in {{\mathscr {S}}}^X\) such that for every \((x,\pi _r)\in {{\mathbb {E}}}\times [0,1]\)
3.4 Optimal detection problem as voting stopping game
The construction of \(\tau ^{*}\) is done by transforming the multilateral disorder detection problem into the voting stopping problem for the Markov process \(\overrightarrow{\xi }_{rn}\) (cf. Appendix B), where the sensors' payoffs are the following:
The sequence \(\left( \{(\overrightarrow{\xi }_n,{\varvec{\Pi }}_n),{{\mathcal {F}}}_n\}_{n\in \overline{{\mathbb {N}}}},\mathbf{P}_{\mathbf {x},{\varvec{\pi }}}\right) \) is a Markov process. The risk function (7) is the function \(h_r(\mathbf {x}_{n-1},\mathbf {x}_{n},{{\varvec{\alpha }}}_{n})\) such that
The system disorder is determined by the equilibrium for the multilateral stopping problem as presented in Appendix B. It is a rational solution because none of the sensors is interested in deviating from the strategy given by the ISS determined by the construction given in Theorems 3 and 4 of Appendix B with the individual payoff (7).
Remark 1
Let us recall that for one dimensional problem (one sensor) the Wald-Bellman equation takes the form [see Peskir and Shiryaev (2006), pp. 22–33]:
Its solution determines the disorder detection rule for the one-sensor problem.
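Equations of Wald-Bellman type can be solved numerically by value iteration. The following sketch uses an illustrative three-state chain with gain g, kernel P and a per-step observation cost c, none of which are taken from the paper; it iterates \(V \leftarrow \max (g, PV - c)\) to its fixed point.

```python
# Value iteration for optimal stopping of a finite Markov chain:
# solve V = max(g, P V - c) (illustrative data, not the paper's model).

g = [0.0, 1.0, 4.0]                       # stopping gain in states 0, 1, 2
c = 0.5                                   # cost per additional observation
P = [[0.5, 0.5, 0.0],
     [0.3, 0.4, 0.3],
     [0.1, 0.2, 0.7]]                     # transition matrix

V = g[:]                                  # start iteration from the gain
for _ in range(500):
    V = [max(g[i], sum(P[i][j] * V[j] for j in range(3)) - c)
         for i in range(3)]

print([round(v, 3) for v in V])           # [0.333, 1.333, 4.0]
print(all(V[i] >= g[i] for i in range(3)))  # True: V dominates the gain
```

The optimal rule stops exactly in the states where \(V(x)=g(x)\); here that is state 2, while in states 0 and 1 it pays to continue observing.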
4 Reliability function vs. structure disorder
Let the states of the elements be random. Assuming random states of the structure elements, we get a random value of the structure function. The expected value of the structure function is a multi-linear function of the probabilities that the individual elements are in the working state. A similar structure can be found in the multi-linear extension of cooperative games proposed by Owen (1971/72) (see Appendix A, Remark 6).
Definition 5
(Ramamurthy (1990), Chap. 3) The reliability function of a structure f on \(\mathbf{N} \) with independent components, having states \(\mathbf {X}=(X_1,\ldots ,X_n)\), is the function \(\hat{f}:[0,1]^n\rightarrow [0,1]\) defined by \(\hat{f}({\varvec{\rho }})=\mathbf{E}_{{\varvec{\rho }}}f(\mathbf {X})=\mathbf{P}_{{\varvec{\rho }}}(f(\mathbf {X})=1)\), where \({\varvec{\rho }}=(\rho _1,\ldots ,\rho _n)\), \(\rho _{\mathfrak {i}}=\mathbf{P}(X_{\mathfrak {i}}=1)\), are the components' reliabilities.
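For small systems the reliability function can be computed directly as the expectation of the structure function over all state vectors (this is exactly the multilinear extension); the 2-out-of-3 structure and the value \(\rho _i = 0.9\) are illustrative assumptions.

```python
from itertools import product

# Sketch: reliability function \hat f(rho) = P(f(X) = 1) for independent
# components, computed by summing over all binary state vectors.

def reliability(f, rho):
    total = 0.0
    for x in product((0, 1), repeat=len(rho)):
        p = 1.0
        for xi, ri in zip(x, rho):
            p *= ri if xi == 1 else (1.0 - ri)
        total += p * f(x)
    return total

f = lambda x: 1 if sum(x) >= 2 else 0             # 2-out-of-3 structure
print(round(reliability(f, (0.9, 0.9, 0.9)), 4))  # 0.972
```

For identical components with \(\rho =0.9\) this reproduces the textbook value \(3\rho ^2(1-\rho )+\rho ^3=0.972\).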
Proposition 1
(Pivotal decomposition of \(\hat{f}\) built on independent components) \(\hat{f}({\varvec{\rho }})=\rho _{\mathfrak {i}}\hat{f}(1_{\mathfrak {i}},{\varvec{\rho }}_{-{\mathfrak {i}}})+(1-\rho _{\mathfrak {i}})\hat{f}(0_{\mathfrak {i}},{\varvec{\rho }}_{-{\mathfrak {i}}})\).
Remark 2
(Notation: a loss of availability (reliability)) For the given vector \({\varvec{\rho }}\):
4.1 Critical path vs. disorder
Definition 6
(Hamming's weight (norm)) A vector \(\mathbf {x}\in {\mathbb {B}}^n\) is of size r when exactly r of its components are equal to unity, i.e. \(\sum _{i=1}^nx_i=r\).
Definition 7
(Critical Path Vector (CPV)) \(\mathbf {x}\in {\mathbb {B}}^n\) is a critical path of structure f for component i if
1. \(\mathbf {x}=(1_i,\mathbf {x}_{-i})\);
2. \(f(\mathbf {x})=1\) and \(f(0_i,\mathbf {x}_{-i})=0\).
Remark 3
(Notation: the total number of CPV of f for i.) \(\eta _i(r,f)\) is the number of critical path vectors of f of size r for component i and \(\eta _i(f)\) is the total number of critical path vectors of f for component i.
4.2 Critical paths and importance of i (structural)
Remark 4
(Relations to the game theory) The coalition \({{\mathfrak {S}}}\subset \mathbf{N} \) is called a swing for player \({\mathfrak {i}}\) if \({\mathfrak {i}}\in {{\mathfrak {S}}}\) and \({{\mathfrak {S}}}\) is a path set (i.e. winning coalition) but \({{\mathfrak {S}}}\setminus \{{\mathfrak {i}}\}\) is not a path set (losing coalition). This way, \(\eta _{{\mathfrak {i}}}(r,f)\) is the number of swings of f of size r for player \({\mathfrak {i}}\).
Remark 5
(Absolute Banzhaf index \(\psi _{{\mathfrak {i}}}(f)\)) The absolute Banzhaf index of component \({\mathfrak {i}}\) is by definition \(\psi _{{\mathfrak {i}}}(f)=\eta _{{\mathfrak {i}}}(f)/2^{n-1}\), where \(n=|\mathbf{N} |\).
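Counting swings (critical path vectors) and normalizing by \(2^{n-1}\) gives the absolute Banzhaf index directly; the 2-out-of-3 structure is again an illustrative assumption.

```python
from itertools import product

# Sketch: count the critical path vectors eta_i(f) of component i (its swings)
# and compute the absolute Banzhaf index psi_i(f) = eta_i(f) / 2^(n-1).

def banzhaf(f, n, i):
    swings = 0
    for x in product((0, 1), repeat=n):
        if x[i] == 1:
            x0 = x[:i] + (0,) + x[i + 1:]
            if f(x) == 1 and f(x0) == 0:   # i is pivotal in x
                swings += 1
    return swings, swings / 2 ** (n - 1)

f = lambda x: 1 if sum(x) >= 2 else 0   # 2-out-of-3 structure
print(banzhaf(f, 3, 0))                 # (2, 0.5)
```

Component 0 is pivotal exactly when one of the other two components works, giving 2 swings out of \(2^{2}=4\) possible profiles of the other components.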
The importance of the \({\mathfrak {i}}\)th element is based on the change point analysis. The posterior life time distributions are based on the signals from the sensors. Let \(\mathbf {x}_{{\mathfrak {r}}\;k}\) be the history of the \({\mathfrak {r}}\)th sensor's signals after the kth signal. The disorder moment \(\theta _{\mathfrak {r}}\) has the posterior distribution given by (4) and (5). Let us assume that the kth signals of the sensors are related to the real time \(T_k\) and that the distribution functions of the working time of each element, \(G_{j{\mathfrak {r}}}(t|\mathbf {x}_{{\mathfrak {r}}\;k})\), \(j=0,1\), before (\(j=0\)) and after (\(j=1\)) the disorder, are given. The distribution of the working time after \(T_{\mathfrak {r}}\) is
Let \(T_k=\tau ^*\) and \(\mathbf {{\mathbf {G}}}=(G_1,G_2,\ldots ,G_n)\). The next signal at \(t\ge \tau ^*\) is the system breakdown. The probability density function of the life time of the system is given by
The \({\mathfrak {i}}\)th element's liability for the failure is a crucial element of the construction of the maintenance strategy. The proposed determination of the responsible element is based on the following proposition concerning the importance measure for the \({\mathfrak {i}}\)th element based on the reliability function.
Proposition 2
(Barlow and Proschan (1975)) Let f be a semi-coherent structure on \(\mathbf{N} \) and let \(\hat{f}\) be its reliability function. Assuming the absolute continuity of the life times of all components, the probability that the system failure is caused by the failure of component \({\mathfrak {i}}\), given that the system failed at the instant of time t, is given by
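The Barlow-Proschan measure can be approximated numerically. The sketch below uses one standard form of the index, \(I_{\mathfrak {i}}=\int _0^\infty [\hat{f}(1_{\mathfrak {i}},R(t))-\hat{f}(0_{\mathfrak {i}},R(t))]\,dF_{\mathfrak {i}}(t)\), with i.i.d. exponential(1) lifetimes and the 2-out-of-3 structure; both choices are illustrative assumptions, not taken from the paper.

```python
from math import exp
from itertools import product

def reliability(f, rho):
    # \hat f(rho) = P(f(X) = 1) for independent components.
    total = 0.0
    for x in product((0, 1), repeat=len(rho)):
        p = 1.0
        for xi, ri in zip(x, rho):
            p *= ri if xi else 1.0 - ri
        total += p * f(x)
    return total

def barlow_proschan(f, n, i, rate=1.0, t_max=30.0, steps=30000):
    # Crude midpoint Riemann sum for the Birnbaum measure integrated
    # against the density of component i's lifetime.
    dt = t_max / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * dt
        r = exp(-rate * t)                       # survival probability R(t)
        rho1 = tuple(1.0 if j == i else r for j in range(n))
        rho0 = tuple(0.0 if j == i else r for j in range(n))
        birnbaum = reliability(f, rho1) - reliability(f, rho0)
        total += birnbaum * rate * exp(-rate * t) * dt   # dF_i(t)
    return total

f = lambda x: 1 if sum(x) >= 2 else 0   # 2-out-of-3 structure
print(round(barlow_proschan(f, 3, 0), 3))  # 0.333
```

With exchangeable components each of the three elements receives the same share 1/3 of the blame for the system failure, as symmetry requires.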
5 Conclusion
In the case of the system break down at \(\tau ^*\) under the above assumption, the probability that the system failure is caused by the failure of the component \({\mathfrak {i}}\) is given by
The proposed rationalization captures in a mathematical model important practical aspects, namely the formalization of the disorder moment of a complex system; in its essence it uses a description of the associated observation (diagnostic) points in the language of cooperative games. The method by which the meaning of the individual system elements is determined, in connection with their actual importance, may be the preliminary stage of analyzing the reliability or readiness of the system. That initial stage would be to propose the right simple game for the method analyzed in this work. This phase of the construction of the model is not the subject of a detailed analysis here. The author believes that this step can be eliminated if we apply methods used in quitting games.
An important application of such an analysis is the appropriate placement of honeypots [see Píbil et al. (2012)]. Planning structures with effective traps is easier when the mathematical model allows one to designate the locations that lead to the resources of the system that are critical from a security point of view. Particularly promising is the problem of determining important locations by specifying the cost function of the individual resources and their values for the observers deployed in the system. Thanks to this formulation, the arbitrary structure of a simple game will be replaced by an analysis of an antagonistic game with possible cooperation [see Carpente et al. (2005), Herings and Predtetchinski (2014)].
Notes
Let \(\mathbf {x},\;\mathbf {y}\in \mathfrak {R}^n\). \(\mathbf {x}\le \mathbf {y}\) iff \(x_i\le y_i\) for all \(i=1,2,\ldots ,n\).
\({{\mathcal {G}}}^{|\mathbf{N} |}\) is the subset of \(\mathfrak {R}^{\mathcal {N}}\), the set of all mappings \(f:{\mathcal {N}}\rightarrow \mathfrak {R}\), such that \(f(\emptyset )=0\); \({\mathcal {N}}=2^{\mathbf{N} }\) is the set-theoretic denotation of the family of all coalitions.
Abbreviations
- \(\mathbf{N} \): The set of players (sensors);
- \({\mathbb {B}}\): The states of the elements;
- \({\mathfrak {A}}\): A subset of players (sensors)—a coalition;
- \({{\mathfrak {N}}}\): The set of all players—the grand coalition;
- \({{\mathbb {N}}}\), \(\overline{{\mathbb {N}}}\): The set of natural numbers, the extended set of natural numbers;
- \(\mathfrak {R}\): The set of real numbers;
- \({{\mathcal {F}}}\): The family of measurable subsets of \(\varOmega \);
- \((\varOmega ,{{\mathcal {F}}},\mathbf{P})\): The probability space;
- \({{\mathscr {S}}}\): The set of stopping times;
- \({{\mathscr {S}}}^{\mathfrak {i}}\): The \({\mathfrak {i}}\)th player's ISS;
- |A|, \(\mathbf{card} (A)\): The cardinality of the set A;
- CPV: Critical Path Vector;
- ISS: The individual stopping strategy;
- SS: The stopping strategy;
- ASS: An aggregated SS;
- \(\delta (\cdot )\): The aggregation function;
- \(\tau _\delta (\sigma )\): The ASS generated by \(\sigma \in {{\mathscr {S}}}\) and \(\delta \);
- OS: The optimal stopping.
References
Barlow RE, Proschan F (1975) Importance of system components and fault tree events. Stoch Process Appl 3:153–173. https://doi.org/10.1016/0304-4149(75)90013-7
Birnbaum ZW (1969) On the importance of different components in a multicomponent system. In: Krishnaiah PR (ed) Multivariate analysis, II (Proc. Second Internat. Sympos., Dayton, Ohio, June 17–22, 1968). Academic Press, New York, pp 581–592
Bojdecki T (1979) Probability maximizing approach to optimal stopping and its application to a disorder problem. Stochastics 3:61–71
Brodsky B, Darkhovsky B (1993) Nonparametric methods in change-point problems. Mathematics and its applications. Kluwer Academic Publishers, Dordrecht, p 224
Carpente L, Casas-Méndez B, García-Jurado I, van den Nouwel A (2005) Values for strategic games in which players cooperate. Int J Game Theory 33(3):397–419
Dresher M (1981) The mathematics of games of strategy. Dover Publications, Inc., New York. ISBN 0-486-64216-X. Theory and Applications, Reprint of the 1961 original
Herings PJ-J, Predtetchinski A (2014) Voting in collective stopping games. Preprint RM/13/014. GSBE, Maastricht University School of Business and Economics, Maastricht
Middleton D (1968) A structure for the Bayes analysis of the performance of detection systems with multiple sensors and sites. J Optim Theory Appl 2:125–137. https://doi.org/10.1007/BF00929588
Mine H (1959) Reliability of physical system. IRE Trans Inf Theory 5:138–151
Moretti S, Patrone F (2008) Transversality of the Shapley value. TOP 16(1):1. https://doi.org/10.1007/s11750-008-0044-5
Moulin H (1982) Game theory for the social sciences. Studies in game theory and mathematical economics. New York University Press, New York. ISBN 0-8147-5386-8/hbk; 0-8147-5387-6/pbk. Transl. from the French by the author
Nash J (1951) Non-cooperative games. Ann Math 54:286–295. https://doi.org/10.2307/1969529
Ochman-Gozdek A, Sarnowski W, Szajowski K (2017) Precision of sequential change point detection. Appl Math 44(2):267–280. https://doi.org/10.4064/am2278-5-2017
Owen G (1971/1972) Multilinear extensions of games. Manag Sci 18:P64–P79. https://doi.org/10.1287/mnsc.18.5.64
Owen G (2013) Game theory. Emerald Group Publishing Limited, Bingley
Peskir G, Shiryaev A (2006) Optimal stopping and free-boundary problems. Lectures in mathematics. ETH Zürich, Birkhäuser, Basel
Píbil R, Lisý V, Kiekintveld C, Bošanský B, Pěchouček M (2012) Game theoretic model of strategic honeypot selection in computer networks. In: Grossklags J, Walrand J (eds) Decision and game theory for security. Springer, Berlin, pp 201–220. ISBN 978-3-642-34266-0. https://doi.org/10.1007/978-3-642-34266-0_12
Ramamurthy KG (1990) Coherent structures. Springer, Dordrecht, pp 1–36. ISBN 978-94-009-2099-6. https://doi.org/10.1007/978-94-009-2099-6_1
Sarnowski W, Szajowski K (2011) Optimal detection of transition probability change in random sequence. Stochastics 83(4–6):569–581. https://doi.org/10.1080/17442508.2010.540015
Shapley LS (1953) Stochastic games. Proc Natl Acad Sci USA 39:1095–1100. https://doi.org/10.1073/pnas.39.10.1953
Shiryaev A (1961) The detection of spontaneous effects. Sov Math Dokl 2:740–743. Translation from Dokl. Akad. Nauk SSSR 138:799–801
Shiryaev AN (2006) From “disorder” to nonlinear filtering and martingale theory. In: Mathematical events of the twentieth century. Springer, Berlin, pp 371–397 https://doi.org/10.1007/3-540-29462-7_18
Shiryaev AN (2019) Stochastic disorder problems, volume 93 of Probability Theory and Stochastic Modelling. Springer, Cham. ISBN 978-3-030-01525-1; 978-3-030-01526-8. https://doi.org/10.1007/978-3-030-01526-8. With a foreword by H. Vincent Poor
Shiryayev AN (1978) Optimal stopping rules. Springer, New York. English translation of Статистический последовательный анализ. Оптимальные правила остановки, 2nd edn, Nauka, Moscow, 1976, by A. B. Aries
Szajowski K (1992) Optimal on-line detection of outside observation. J Stat Plan Inference 30:413–426
Szajowski K (2011) Multi-variate quickest detection of significant change process. In: Baras JS, Katz J, Altman E (eds) Decision and game theory for security. Second international conference, GameSec 2011, College Park, MD, USA, November 14–15, 2011, volume 7037 of Lecture Notes in Computer Science. Springer, Berlin, pp 56–66. https://doi.org/10.1007/978-3-642-25280-8_7
Szajowski K (2015) On some distributed disorder detection. In: Steland A, Rafajłowicz E, Szajowski K (eds) Stochastic models, statistics and their applications, Wrocław, Poland, February 2015, volume 122 of Springer Proceedings in Mathematics & Statistics, chapter 21. Springer, Cham, pp 187–195. https://doi.org/10.1007/978-3-319-13881-7_21
Szajowski K, Yasuda M (1996) Voting procedure on stopping games of Markov chain. In: Anthony SO, Christer H, Thomas LC (eds) UK–Japanese Research Workshop on Stochastic Modelling in Innovative Manufacturing, July 21–22, 1995, Moller Centre, Churchill College, University of Cambridge, UK, volume 445 of Lecture Notes in Economics and Mathematical Systems. Springer, pp 68–80. https://doi.org/10.1007/978-3-642-59105-1_6. MR98a:90159; Zbl:0878.90112
Tijs S (2003) Introduction to game theory, volume 23 of Texts and Readings in Mathematics. Hindustan Book Agency, New Delhi. ISBN 81-85931-37-2
Yasuda M, Nakagami J, Kurano M (1982) Multivariate stopping problems with a monotone rule. J Oper Res Soc Jpn 25(4):334–350. https://doi.org/10.15807/jorsj.25.334
Yoshida M (1983) Probability maximizing approach for a quickest detection problem with complicated Markov chain. J Inf Optim Sci 4:127–145
Conflicts of interest
The author declares no conflict of interest.
Appendices
Games in coalitional form
Let us recall that a coalition is a subset of the players. Let \(\mathbf{N} \) be the set of players, \(p=|\mathbf{N} |\), and let \({{{\mathcal {C}}}}=\{{\mathfrak {C}}:{\mathfrak {C}}\subset \mathbf{N} \}\) denote the class of all coalitions. The set of all players forms the grand coalition \({{\mathfrak {N}}}=\mathbf{N} \).
Definition 8
A simple game is a coalitional game whose characteristic function takes values in \({\mathbb {B}}=\{0,1\}\), i.e. \(\phi (\cdot ):{{\mathcal {C}}}\rightarrow {\mathbb {B}}\).
Let us denote \({{\mathcal {W}}}=\{{\mathfrak {C}}\subset \mathbf{N} :\phi ({\mathfrak {C}})=1\}\) and \({{{\mathcal {L}}}}=\{{\mathfrak {C}}\subset \mathbf{N} :\phi ({\mathfrak {C}})=0\}\). The coalitions in \({{\mathcal {W}}}\) are called the winning coalitions, and those from \({{\mathcal {L}}}\) are called the losing coalitions.
Assumptions 1
By assumption the characteristic function satisfies the following properties:
1. \({{\mathfrak {N}}}\in {{\mathcal {W}}}\);
2. \(\emptyset \in {{\mathcal {L}}}\);
3. (monotonicity) \({\mathfrak {T}}\subset {{\mathfrak {S}}}\in {{\mathcal {L}}}\) implies \({\mathfrak {T}}\in {{\mathcal {L}}}\).
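As an illustration of Assumptions 1, the properties can be checked mechanically for a small weighted-majority game. The weights and quota below are made-up examples, not taken from the paper:

```python
from itertools import chain, combinations

def weighted_majority(weights, quota):
    """Characteristic function of a simple game: phi(C) = 1 iff the total weight of C reaches the quota."""
    def phi(coalition):
        return 1 if sum(weights[i] for i in coalition) >= quota else 0
    return phi

def check_assumptions(phi, players):
    """Check Assumptions 1: the grand coalition wins, the empty coalition loses,
    and losing coalitions are closed under taking subsets (monotonicity)."""
    subsets = list(chain.from_iterable(combinations(players, r) for r in range(len(players) + 1)))
    ok_grand = phi(tuple(players)) == 1
    ok_empty = phi(()) == 0
    ok_mono = all(phi(t) == 0
                  for s in subsets if phi(s) == 0
                  for t in subsets if set(t) <= set(s))
    return ok_grand and ok_empty and ok_mono

phi = weighted_majority({1: 3, 2: 2, 3: 2}, quota=4)
print(check_assumptions(phi, [1, 2, 3]))  # True
```

Any weighted-majority game with nonnegative weights satisfies the monotonicity property, since removing a player can only decrease the coalition's weight.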
There are various methods to describe the aggregated decision rule of the players. The final decision, for a single player as well as for all players jointly, is either to accept the offer or to reject it. The aggregation of the individual decisions can be described by a logical function \(\delta :\{0,1\}^p\rightarrow \{0,1\}\).
The logical function \(\delta \) (cf. Yasuda et al. (1982)) can be decomposed as follows (\(\overline{x_i}=1-x_i\)):
$$\begin{aligned} \delta (x_1,\ldots ,x_p)=x_i\,\delta (x_1,\ldots ,x_{i-1},1,x_{i+1},\ldots ,x_p) +\overline{x_i}\,\delta (x_1,\ldots ,x_{i-1},0,x_{i+1},\ldots ,x_p). \end{aligned}$$
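The decomposition of a logical function in its \(i\)-th coordinate (the Shannon expansion) can be verified exhaustively for a small aggregation rule. The majority rule below is an illustrative choice, not prescribed by the paper:

```python
from itertools import product

def majority(x):
    """A monotone aggregation rule: accept when a strict majority declares 1."""
    return 1 if sum(x) > len(x) / 2 else 0

def decomposed(delta, x, i):
    """Shannon expansion in coordinate i: x_i*delta(...,1,...) + (1 - x_i)*delta(...,0,...)."""
    hi = delta(x[:i] + (1,) + x[i + 1:])
    lo = delta(x[:i] + (0,) + x[i + 1:])
    return x[i] * hi + (1 - x[i]) * lo

p = 3
# the expansion reproduces the rule at every point of {0,1}^p and in every coordinate
assert all(majority(x) == decomposed(majority, x, i)
           for x in product((0, 1), repeat=p) for i in range(p))
print("decomposition verified")
```

The same identity holds for any logical function, not only monotone ones; monotonicity matters later, for the voting stopping game.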
Definition 9
A cooperative n-person game in a coalitional form is an ordered pair \((\mathbf{N} ,\mathbf {v})\), where \(\mathbf{N} =\{1,2,\ldots ,n\}\) (the set of players) and \(\mathbf {v}:2^\mathbf{N} \rightarrow \mathfrak {R}\) is a map, assigning to each coalition \({{\mathfrak {S}}}\in 2^\mathbf{N} \) a real number, such that \(\mathbf {v}(\emptyset )=0\).
The function \(\mathbf {v}\) is called the characteristic function of the game, and \(\mathbf {v}({{\mathfrak {S}}})\) is called the worth (or value) of the coalition \({{\mathfrak {S}}}\). Let us denote by \({{\mathcal {G}}}^n\) the collection of all characteristic functions \(\mathbf {v}\) corresponding to an n-person coalitional game \((\mathbf{N} ,\mathbf {v})\) (and by \({\mathcal {A}}^n\) the collection of all additive characteristic functions \(\mathbf {a}\) corresponding to a game \((\mathbf{N} , \mathbf {a})\)). One basis for the vector space \({{\mathcal {G}}}^n\) is \(\{\delta ^{{\mathfrak {S}}}: {{\mathfrak {S}}}\in 2^\mathbf{N} \setminus \{\emptyset \}\}\), where \(\delta ^{{\mathfrak {S}}}:2^\mathbf{N} \rightarrow \mathfrak {R}\) is defined by \(\delta ^{{\mathfrak {S}}}({\mathfrak {T}})=1\) if \({\mathfrak {T}}={{\mathfrak {S}}}\) and \(\delta ^{{\mathfrak {S}}}({\mathfrak {T}})=0\) otherwise.
The unanimity game for \({{\mathfrak {S}}}\in 2^\mathbf{N} \) is defined by the characteristic function \(u_{{\mathfrak {S}}}({\mathfrak {T}})=1\) if \({{\mathfrak {S}}}\subset {\mathfrak {T}}\) and \(u_{{\mathfrak {S}}}({\mathfrak {T}})=0\) otherwise. The set \(\{u_{{\mathfrak {S}}}:{{\mathfrak {S}}}\in 2^\mathbf{N} \setminus \{\emptyset \}\}\) is another basis for \({{\mathcal {G}}}^n\).
Let us call a map \(f:{{\mathcal {G}}}^n\rightarrow \mathfrak {R}^n\) a solution of the games. Some desirable properties of solutions are: individual rationality (IR), efficiency (EFF), the anonymity property (AN), the dummy player property (DUM), and additivity (ADD).
Let \(\sigma :\mathbf{N} \rightarrow \mathbf{N} \) be a permutation and let \(\mathbf {m}^\sigma (\mathbf {v})\) be the related marginal vector for the game \((\mathbf{N} ,\mathbf {v})\). The payoff vector \(\mathbf {m}^\sigma (\mathbf {v})\) corresponds to the situation where the players enter the play one by one in the order \(\sigma (1),\ldots ,\sigma (n)\).
Definition 10
(Shapley value) The Shapley value \({{\varvec{\Phi }}}(\mathbf {v})\) of a game \((\mathbf{N} ,\mathbf {v})\) is the average of the marginal vectors of the game.
The Shapley value is thus given by the formula
$$\begin{aligned} {{\varvec{\Phi }}}(\mathbf {v})=\frac{1}{n!}\sum _{\sigma \in \pi (\mathbf{N} )}\mathbf {m}^\sigma (\mathbf {v}), \end{aligned}$$
where \(\pi (\mathbf{N} )\) is the set of permutations of \(\mathbf{N} \).
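The average-of-marginal-vectors definition can be computed directly by enumerating all orders, which is feasible only for small \(n\). The three-player majority game below is an illustrative example, not from the paper:

```python
from itertools import permutations

def shapley_value(players, v):
    """Shapley value as the average of the marginal vectors over all n! orders."""
    phi = {i: 0.0 for i in players}
    count = 0
    for order in permutations(players):
        coalition = set()
        for i in order:
            # marginal contribution of player i when joining in this order
            phi[i] += v(coalition | {i}) - v(coalition)
            coalition.add(i)
        count += 1
    return {i: phi[i] / count for i in players}

# three-player majority game: a coalition is worth 1 iff it has at least 2 members
v = lambda S: 1.0 if len(S) >= 2 else 0.0
print(shapley_value([1, 2, 3], v))  # symmetric game: each player gets 1/3
```

For the majority game the player entering second is always pivotal, which happens in a third of the orders, so each player's value is \(1/3\), consistent with the anonymity and efficiency properties.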
Theorem 1
(cf. Shapley (1953)) There is a unique solution \(\mathbf {f}:{{\mathcal {G}}}^n\rightarrow \mathfrak {R}^n\) satisfying EFF, AN, DUM and ADD. This solution is the Shapley value.
Remark 6
(cf. Owen (1971/1972)) The multilinear extension \(\mathbf {f}\) of an n-person game \(\mathbf {v}\) is a function \(\mathbf {f}:[0,1]^n\rightarrow \mathfrak {R}\) defined on the n-cube which is linear in each variable and which coincides with \(\mathbf {v}\) at the corners of the cube. We have (v. Tijs 2003):
$$\begin{aligned} \mathbf {f}(x_1,\ldots ,x_n)=\sum _{{{\mathfrak {S}}}\in 2^\mathbf{N} }\mathbf {v}({{\mathfrak {S}}})\prod _{i\in {{\mathfrak {S}}}}x_i\prod _{i\notin {{\mathfrak {S}}}}(1-x_i). \end{aligned}$$
The set of extreme points of \([0,1]^n\) is equal to \(\{e^{{\mathfrak {S}}}:{{\mathfrak {S}}}\in 2^\mathbf{N} \}\), where \(e^{{\mathfrak {S}}}=({{\mathbb {I}}}_{{\mathfrak {S}}}(1),{{\mathbb {I}}}_{{\mathfrak {S}}}(2),\ldots ,{{\mathbb {I}}}_{{\mathfrak {S}}}(n))\). As a consequence \(\mathbf {f}(e^{{\mathfrak {S}}})=\mathbf {v}({{\mathfrak {S}}})\), i.e. it has the form given by (13).
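Owen's multilinear extension formula can be evaluated by summing over all coalitions; the snippet below checks that the extension coincides with \(\mathbf {v}\) at a corner of the cube. The majority game used is an illustrative example:

```python
from itertools import chain, combinations

def multilinear_extension(players, v):
    """f(x) = sum over coalitions S of v(S) * prod_{i in S} x_i * prod_{i not in S} (1 - x_i)."""
    subsets = list(chain.from_iterable(combinations(players, r) for r in range(len(players) + 1)))
    def f(x):  # x maps each player to the probability of joining
        total = 0.0
        for S in subsets:
            w = v(set(S))
            for i in players:
                w *= x[i] if i in S else (1 - x[i])
            total += w
        return total
    return f

players = [1, 2, 3]
v = lambda S: 1.0 if len(S) >= 2 else 0.0   # three-player majority game
f = multilinear_extension(players, v)
# at a corner e^S of the cube the extension coincides with v(S):
print(f({1: 1, 2: 1, 3: 0}))  # 1.0, equals v({1, 2})
```

At an interior point, \(f(x)\) is the expected worth of the random coalition in which each player \(i\) joins independently with probability \(x_i\), which is the probabilistic reading of the formula.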
A non-cooperative stopping game
1.1 Multilateral stopping problem
Following the results of Szajowski and Yasuda (1996), the multilateral stopping problem for a Markov chain can be described in terms of the notation used in non-cooperative game theory [see Nash (1951), Dresher (1981), Moulin (1982), Owen (2013)]. To this end the process and the utilities of its states should be specified.
Definition 11
(ISS-Individual Stopping Strategies) Let \((\overrightarrow{X}_n,{{\mathcal {F}}}_n,{\mathbf{P}}_x)\), \(n=0,1,2,\ldots ,N\), be a homogeneous Markov chain with the state space \(({{\mathbb {E}}},{{\mathcal {B}}})\).
- The players are able to observe the Markov chain sequentially. The horizon can be finite or infinite: \(N\in {{\mathbb {N}}}\cup \{\infty \}\).
- Each player has a utility function \(f_i: {{\mathbb {E}}}\rightarrow \mathfrak {R}\), \(i=1,2,\ldots ,p\), such that \({\mathbf{E}}_x|f_i(\overrightarrow{X}_1)|<\infty \), and a cost function \(c_i: {{\mathbb {E}}}\rightarrow \mathfrak {R}\), \(i=1,2,\ldots ,p\).
- If the process is not stopped at moment \(n\), then each player, based on \({{\mathcal {F}}}_n\), can independently declare their willingness to stop the observation of the process.
Definition 12
(see Yasuda et al. (1982)) An individual stopping strategy of the player i (ISS) is the sequence of random variables \(\{\sigma _n^i\}_{n=1}^N\), where \(\sigma _n^i:\varOmega \rightarrow \{0,1\}\), such that \(\sigma _n^i\) is \({{\mathcal {F}}}_n\)-measurable.
The interpretation of the strategy is the following: if \(\sigma _n^i=1\), then player i declares the wish to stop the process and accept the realization of \(X_n\).
Definition 13
(SS–Stopping Strategy (the aggregate function)) Denote by \(\sigma ^i=(\sigma _1^i,\sigma _2^i,\ldots ,\sigma _N^i)\) the ISS of player \(i\)
and let \({{\mathscr {S}}}^i\) be the set of ISSs of player i, \(i=1,2,\ldots ,p\). Define \({{\mathscr {S}}}={{\mathscr {S}}}^1\times {{\mathscr {S}}}^2\times \ldots \times {{\mathscr {S}}}^p\). The element \(\sigma =(\sigma ^1,\sigma ^2,\ldots ,\sigma ^p)^T\in {{\mathscr {S}}}\) will be called the stopping strategy (SS).
The stopping strategy \(\sigma \in {{\mathscr {S}}}\) is a random matrix. The rows of the matrix are the ISSs, and the columns are the decisions of the players at successive moments. The actual stopping of the observation process, and the players' realization of their payoffs, is determined by the stopping strategy through a p-variate logical function.
Let \(\delta :\{0,1\}^p\rightarrow \{0,1\}\) be the aggregation function. In this stopping game model the stopping strategy is the list of declarations of the individual players. The aggregate function \(\delta \) converts the declarations to an effective stopping time.
Definition 14
(An aggregated SS) A stopping time \(\tau _\delta (\sigma )\) generated by the SS \(\sigma \in {{\mathscr {S}}}\) and the aggregate function \(\delta \) is defined by
$$\begin{aligned} \tau _\delta (\sigma )=\inf \{1\le n\le N:\delta (\sigma _n^1,\sigma _n^2,\ldots ,\sigma _n^p)=1\} \end{aligned}$$
(\(\inf (\emptyset )=\infty \)). Since \(\delta \) is fixed during the analysis, we skip the index \(\delta \) and write \(\tau (\sigma )=\tau _\delta (\sigma )\).
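The aggregation of the declaration matrix into an effective stopping time can be sketched numerically. The majority rule and the declaration matrix below are made-up illustrations, not taken from the paper:

```python
def aggregated_stopping_time(declarations, delta):
    """tau_delta(sigma): the first moment n at which the aggregation rule delta,
    applied to the players' declarations at moment n, equals 1; infinity otherwise."""
    p = len(declarations)          # declarations[i][n-1] is sigma_n^i for player i+1
    N = len(declarations[0])
    for n in range(N):
        if delta([declarations[i][n] for i in range(p)]):
            return n + 1           # moments are counted from 1
    return float("inf")            # inf(emptyset) = infinity

majority = lambda col: int(sum(col) > len(col) / 2)

# p = 3 players, N = 4 moments; rows are the ISSs, columns the declarations at each moment
sigma = [[0, 1, 0, 1],
         [0, 0, 1, 1],
         [1, 0, 1, 1]]
print(aggregated_stopping_time(sigma, majority))  # 3: first column with a majority of 1s
```

Replacing `majority` with unanimity (`all`) or a dictator rule changes only `delta`; the aggregation mechanism itself is unchanged, which is the point of fixing \(\delta \) throughout the analysis.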
Definition 15
(Process and utilities of its states)
- \(\{\omega \in \varOmega : \tau _\delta (\sigma )=n\} =\bigcap \nolimits _{k=1}^{n-1}\{\omega \in \varOmega : \delta (\sigma _k^1,\sigma _k^2,\ldots ,\sigma _k^p)=0\} \cap \{\omega \in \varOmega :\delta (\sigma _n^1,\sigma _n^2,\ldots ,\sigma _n^p)=1\}\in {{\mathcal {F}}}_n\);
- \(\tau _\delta (\sigma )\) is a stopping time with respect to \(\{{{\mathcal {F}}}_n\}_{n=1}^N\);
- for any stopping time \(\tau _\delta (\sigma )\) and \({\mathfrak {i}}\in \{1,2,\ldots ,p\}\), the payoff of player \({\mathfrak {i}}\) is defined as follows (cf. Shiryayev (1978)):
$$\begin{aligned} f_i(X_{\tau _\delta (\sigma )})=\sum _{n=1}^{N}f_i(X_n){{\mathbb {I}}}_{\{\tau _\delta (\sigma )=n\}}+\limsup _{n\rightarrow \infty }f_i(X_n){{\mathbb {I}}}_{\{\tau _\delta (\sigma )=\infty \}}. \end{aligned}$$
Definition 16
(An equilibrium strategy (cf. Szajowski and Yasuda (1996))) Let the aggregate rule \(\delta \) be fixed. The strategy \({}^{*}\!\sigma =({}^{*}\!\sigma ^1,{}^{*}\!\sigma ^2,\ldots ,{}^{*}\!\sigma ^p)^T\in {{\mathscr {S}}}\) is an equilibrium strategy with respect to \(\delta \) if for each \({\mathfrak {i}}\in \{1,2,\ldots ,p\}\) and any \(\sigma ^i\in {{\mathscr {S}}}^i\) we have
$$\begin{aligned} \mathbf{E}_xf_i(X_{\tau ({}^{*}\!\sigma )})\ge \mathbf{E}_xf_i(X_{\tau ({}^{*}\!\sigma ({\mathfrak {i}}))}), \end{aligned}$$
where \({}^{*}\!\sigma ({\mathfrak {i}})\) denotes \({}^{*}\!\sigma \) with its \({\mathfrak {i}}\)-th coordinate replaced by \(\sigma ^i\).
1.2 Equilibria in voting stopping game
Definition 17
(\({{\mathcal {G}}}=({{\mathscr {S}}},\overrightarrow{f},\delta )\)) The set of SS \({{\mathscr {S}}}\), the vector of the utility functions \(\overrightarrow{f}=(f_1,f_2,\ldots , f_p)\) and the monotone rule \(\delta \) define the non-cooperative game \({{\mathcal {G}}}=({{\mathscr {S}}},\overrightarrow{f},\delta )\). The construction of the equilibrium strategy \( {}^{*}\!\sigma \in {{\mathscr {S}}}\) in \({{\mathcal {G}}}\) is provided in Szajowski and Yasuda (1996).
Definition 18
(An individual stopping set) A stopping set on the state space describes the ISS of a player. Each ISS of player \({\mathfrak {i}}\) gives the sequence of stopping events \(D_n^i=\{\omega :\sigma _n^i=1\}\). For each aggregate rule \(\delta \) there exists the corresponding set-valued function \(\varDelta :{{\mathcal {F}}}\rightarrow {{\mathcal {F}}}\) such that \(\delta (\sigma _n^1,\sigma _n^2,\ldots ,\sigma _n^p)= \delta \{{{\mathbb {I}}}_{D_n^1}, {{\mathbb {I}}}_{D_n^2},\ldots ,{{\mathbb {I}}}_{D_n^p}\}= {{\mathbb {I}}}_{\varDelta (D_n^1,D_n^2,\ldots ,D_n^p)}\). An important class of ISSs and stopping events can be defined by subsets \({ {C}}^i \in \mathcal {B}\) of the state space \({{\mathbb {E}}}\). A given set \({ {C}}^i\in {{\mathcal {B}}}\) will be called the stopping set for player \(i\) at moment \(n\) if \(D_n^i= \{\omega :X_n\in { {C}}^i\}\) is the stopping event.
By the properties of the logical function, \(\delta (x^1,\ldots ,x^p)\) can be represented as
$$\begin{aligned} \delta (x^1,\ldots ,x^p)=x^i\,\delta (x^1,\ldots ,x^{i-1},1,x^{i+1},\ldots ,x^p) +\overline{x^i}\,\delta (x^1,\ldots ,x^{i-1},0,x^{i+1},\ldots ,x^p). \end{aligned}$$
It implies that for \(D^i\in {{{\mathcal {F}}}}\) the set \(\varDelta (D^1,\ldots ,D^p)\) is equal to
$$\begin{aligned} \varDelta (D^1,\ldots ,D^p)=\left[ D^i\cap \varDelta (D^1,\ldots ,D^{i-1},\varOmega ,D^{i+1},\ldots ,D^p)\right] \cup \left[ \overline{D^i}\cap \varDelta (D^1,\ldots ,D^{i-1},\emptyset ,D^{i+1},\ldots ,D^p)\right] . \end{aligned}$$
The form of stopping sets Let \(f_i\), \(g_i\) be real valued, integrable functions defined on \({{\mathbb {E}}}\). For fixed \(D_n^j\), \( j=1,2,\ldots ,p\), \(j\ne i\), and \({ {C}}^i\in \mathcal {B}\) define
$$\begin{aligned} \varphi (x,{ {C}}^i)=\mathbf{E}_x\left[ f_i(X_1){{\mathbb {I}}}_{{}^i\!D_1(D_1^i)}+g_i(X_1){{\mathbb {I}}}_{\overline{{}^i\!D_1(D_1^i)}}\right] , \end{aligned}$$
where \({}^i\!D_1(A)=\varDelta (D_1^1,\ldots ,D_1^{i-1},A,D_1^{i+1},\ldots ,D_1^p)\) and \(D_1^i=\{\omega :X_1\in { {C}}^i\}\).
Lemma 1
(Technical) Let \(f_i\), \(g_i\) be integrable and let \({ {C}}^j\in \mathcal {B}\), \(j=1,2,\ldots ,p\), \(j\ne i\), be fixed. Then the set \({}^{*}\!{ {C}}^i=\{x\in {{\mathbb {E}}}:f_i(x)-g_i(x)\ge 0\}\in \mathcal {B}\) is such that
$$\begin{aligned} \varphi (x,{}^{*}\!{ {C}}^i)=\sup _{{ {C}}^i\in {{\mathcal {B}}}}\varphi (x,{ {C}}^i) \end{aligned}$$
and
$$\begin{aligned} \varphi (x,{}^{*}\!{ {C}}^i)=\mathbf{E}_x\left[ (f_i-g_i)^{+}(X_1){{\mathbb {I}}}_{{}^i\!D_1(\varOmega )}\right] -\mathbf{E}_x\left[ (f_i-g_i)^{-}(X_1){{\mathbb {I}}}_{{}^i\!D_1(\emptyset )}\right] +\mathbf{E}_xg_i(X_1). \end{aligned}$$
Based on Lemma 1 we derive the recursive formulae defining the equilibrium point and the equilibrium value for the finite horizon game.
1.3 The finite horizon game
In the finite horizon game the construction of equilibria is based on backward induction. Denote the equilibrium strategy by \({}^{*}\!\sigma \).
- Let the horizon \(N\) be finite. If the equilibrium strategy \({}^{*}\!\sigma \) exists, then \(v_{i,N}(x)=\mathbf{E}_xf_i(X_{t({}^{*}\!\sigma )})\) denotes the equilibrium payoff of the \(i\)-th player when \(X_0=x\).
- Let \({{\mathscr {S}}}_n^i=\{\{\sigma _k^i\},k=n,\ldots ,N\}\) be the set of ISSs for moments \(n\le k\le N\) and \({{\mathscr {S}}}_n={{\mathscr {S}}}_n^1\times {{\mathscr {S}}}_n^2\times \ldots \times {{\mathscr {S}}}_n^p\).
- An SS for moments not earlier than \(n\) is \({}^n\!\sigma =({}^n\!\sigma ^1,{}^n\!\sigma ^2,\ldots ,{}^n\!\sigma ^p) \in {{\mathscr {S}}}_n\), where \({}^n\!\sigma ^i=(\sigma _n^i,\sigma _{n+1}^i,\ldots ,\sigma _N^i)\).
- \(t_n=t_n(\sigma )=t({}^n\!\sigma )=\inf \{n\le k\le N:\delta (\sigma _k^1,\sigma _k^2,\ldots , \sigma _k^p)=1\}\) (a stopping time not earlier than \(n\)).
Definition 19
(Equilibrium in \({{\mathscr {S}}}_n\)) The stopping strategy \({}^{n*}\!\sigma =({}^{n*}\!\sigma ^1,{}^{n*}\!\sigma ^2,\ldots ,{}^{n*}\!\sigma ^p)\) is an equilibrium in \({{\mathscr {S}}}_n\) if \(\mathbf{P}_x\)-a.e.
$$\begin{aligned} \mathbf{E}_x\left[ f_i(X_{t_n({}^{n*}\!\sigma )})|{{\mathcal {F}}}_{n-1}\right] \ge \mathbf{E}_x\left[ f_i(X_{t_n({}^{n}\!\sigma ({\mathfrak {i}}))})|{{\mathcal {F}}}_{n-1}\right] \end{aligned}$$
for every \(i\in \{1,2,\ldots ,p\}\), where \({}^{n}\!\sigma ({\mathfrak {i}})=({}^{n*}\!\sigma ^1,\ldots ,{}^{n*}\!\sigma ^{i-1},{}^{n}\!\sigma ^i,{}^{n*}\!\sigma ^{i+1},\ldots ,{}^{n*}\!\sigma ^p)\) for any \({}^{n}\!\sigma ^i\in {{\mathscr {S}}}_n^i\). Denote
$$\begin{aligned} v_{i,N-n+1}(X_{n-1})=\mathbf{E}_x\left[ f_i(X_{t_n({}^{n*}\!\sigma )})|{{\mathcal {F}}}_{n-1}\right] . \end{aligned}$$
At moment \(n=N\) the players have to declare to stop, and \(v_{i,0}(x)=f_i(x)\). Let us assume that the process has not been stopped up to moment \(n\) and that the players use the equilibrium strategies \({}^{*}\!\sigma _k^i\), \(i=1,2,\ldots ,p\), at moments \(k=n+1,\ldots ,N\). Choose player \(i\) and assume that the other players use the equilibrium strategies \({}^{*}\!\sigma _n^j\), \(j\ne i\), while player \(i\) uses the strategy \(\sigma _n^i\) defined by a stopping set \({ {C}}^i\).
The expected payoff \(\varphi _{N-n}(X_{n-1},{ {C}}^i)\) of player \(i\) in the game starting at moment \(n\), when the state of the Markov chain at moment \(n-1\) is \(X_{n-1}\), is equal to
where \({}^{i*}\!D_n(A)=\varDelta ({}^{*}\!D_n^1,\ldots ,{}^{*}\!D_n^{i-1},A,{}^{*}\!D_n^{i+1},\ldots ,{}^{*}\!D_n^p)\).
By Lemma 1 the conditional expected gain \(\varphi _{N-n}(X_{n-1}, { {C}}^i)\) attains its maximum on the stopping set \({}^{*}\!{ {C}}_n^i=\{x\in {{\mathbb {E}}}:f_i(x)-v_{i,N-n}(x)\le 0\}\) and
\(\mathbf{P}_x\)-a.e. This allows us to formulate the following construction of the equilibrium strategy and the equilibrium value for the game \(\mathcal {G}\).
Theorem 2
(Solution of the finite horizon stopping game based on voting) In the game \(\mathcal {G}\) with finite horizon N we have the following solution.
(i) The equilibrium value \(v_i(x)\), \(i=1,2,\ldots ,p\), of the game \(\mathcal {G}\) can be calculated recursively as follows:
1. \(v_{i,0}(x)=f_i(x)\);
2. for \(n=1,2,\ldots ,N\) we have \(\mathbf{P}_x\)-a.e.
$$\begin{aligned} (v_{i,n}-c_i)(X_{N-n})= & {} \mathbf{E}_x[((f_i-v_{i,n-1})(X_{N-n+1}))^{+} {{\mathbb {I}}}_{{}^{i*}\!D_{N-n+1}(\varOmega )}|{{\mathcal {F}}}_{N-n}] \\&- \mathbf{E}_x[((f_i-v_{i,n-1})(X_{N-n+1}))^{-}{{\mathbb {I}}}_{{}^{i*}\!D_{N-n+1}(\emptyset )}|{{\mathcal {F}}}_{N-n}] \\&+ \mathbf{E}_x[v_{i,n-1}(X_{N-n+1})|{{\mathcal {F}}}_{N-n}], \end{aligned}$$for \(i=1,2,\ldots ,p\).
(ii) The equilibrium strategy \({}^{*}\!\sigma \in {{\mathscr {S}}}\) is defined by the SS of the players \({}^{*}\!\sigma _n^i\), where \({}^{*}\!\sigma _n^i=1\) if \( X_n\in {}^{*}\!{ {C}}_n^i\), and \({}^{*}\!{ {C}}_n^i=\{x\in {{\mathbb {E}}}: f_i(x)-v_{i,N-n}(x) \le 0\}\), \(n=0,1,\ldots ,N\).
We have \(v_i(x)=v_{i,N}(x)\), and \(\mathbf{E}_xf_i(X_{t({}^{*}\!\sigma )})=v_{i,N}(x)\), \(i=1,2,\ldots ,p\).
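For the one-player special case (\(p=1\), \(\delta (x)=x\), zero costs) the recursion of Theorem 2 collapses to classical optimal stopping: \(v_{i,0}=f_i\) and stopping is compared with the expected continuation value at each step. The sketch below is illustrative only: the three-state chain \(P\) and the payoff \(f\) are made-up numbers, and the recursion is written in the state form \(v_n=\max (f,Pv_{n-1})\) rather than the conditional form of the theorem:

```python
def step(P, v):
    """One application of the transition operator: (P v)(x) = sum_y P[x][y] * v[y]."""
    return [sum(p_xy * v_y for p_xy, v_y in zip(row, v)) for row in P]

# illustrative 3-state transition matrix and payoff function (made-up numbers)
P = [[0.5, 0.5, 0.0],
     [0.2, 0.5, 0.3],
     [0.0, 0.4, 0.6]]
f = [1.0, 0.0, 2.0]

v = f[:]                                   # v_0 = f
for _ in range(10):                        # horizon N = 10, backward induction
    v = [max(fx, cx) for fx, cx in zip(f, step(P, v))]   # stop now or continue

stop_set = [fx >= vx for fx, vx in zip(f, v)]   # states where stopping is optimal
print(v, stop_set)
```

In the multi-player game the two expectations over \({}^{i*}\!D_{N-n+1}(\varOmega )\) and \({}^{i*}\!D_{N-n+1}(\emptyset )\) replace the simple `max`, since a player's declaration only matters on the event where it changes the aggregated decision.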
1.4 The infinite horizon game
Let us assume that there exists a solution \((w_1(x),w_2(x),\ldots ,w_p(x))\) of the equations
$$\begin{aligned} (w_i-c_i)(x)= & {} \mathbf{E}_x[((f_i-w_i)(X_1))^{+} {{\mathbb {I}}}_{{}^{i*}\!D_1(\varOmega )}] - \mathbf{E}_x[((f_i-w_i)(X_1))^{-}{{\mathbb {I}}}_{{}^{i*}\!D_1(\emptyset )}] + \mathbf{E}_x[w_i(X_1)], \end{aligned}$$
\(i=1,2,\ldots ,p\). Consider the stopping game with the following payoff function for \(i=1,2,\ldots ,p\).
Lemma 2
Let \({}^{*}\!\sigma {}\in {{\mathscr {S}}}_f^{*}\) be an equilibrium strategy in the infinite horizon game \(\mathcal {G}\). For every N we have
Let us assume that for \(i=1,2,\ldots ,p\) and every \(x\in {{\mathbb {E}}}\) we have
Theorem 3
Let \((X_n,{{\mathcal {F}}}_n,\mathbf{P}_x)_{n=0}^\infty \) be a homogeneous Markov chain and the payoff functions of the players fulfill (1). If \(t^{*}=t({}^{*}\!\sigma )\), \({}^{*}\!\sigma \in {{\mathscr {S}}}_f^{*}\), then \(\mathbf{E}_xf_i(X_{t^{*}})=v_i(x)\).
Theorem 4
Let the stopping strategy \({}^{*}\!\sigma \in {{\mathscr {S}}}_f^{*}\) be defined by the stopping sets \({}^{*}\!{ {C}}_n^i=\{x\in {{\mathbb {E}}}:f_i(x)\le v_i(x)\}\), \(i=1,2,\ldots ,p\). Then \({}^{*}\!\sigma \) is an equilibrium strategy in the infinite horizon stopping game \(\mathcal {G}\).
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Szajowski, K.J. Rationalization of detection of the multiple disorders. Stat Papers 61, 1545–1563 (2020). https://doi.org/10.1007/s00362-020-01168-2