1 Introduction

The purpose of this work is to match mathematical tools and methods that allow one to analyze threats in complex systems which cause their dysfunction. In the existing literature on the mathematical modeling and analysis of systems, the concept of availability appears; functionality can also be quantified, so one can speak of a reduction of functionality, and the concept of reliability itself admits a refined treatment. Based on various observations in this area, a mathematical model was created to measure the level of safety associated with the system's operation. One can consider the internal security of the system itself, as well as examine the system's impact on its environment. By system availability we mean its ability to meet the expectations of its designers. Keeping such a system ready requires launching an effective diagnostic process and a maintenance plan. The presented considerations are aimed at analyzing models of system reliability that allow for the rationalization of maintenance procedures (repair, servicing, diagnostics). We do not address every maintenance need; rather, we take limited resources as a paradigm that allows one to decide locally which subsystem is responsible for the increased chance of losing the availability of the system's functionality.

Availability and its control depend on various factors: not only reliability, but also the difficulty of maintenance, the ability to diagnose components which need service, and an efficient implementation of repair tasks. If we assume that the need for maintenance is the result of a disorder, then determining the key elements responsible for that event is crucial. Assessing the importance of an element for the system disorder is very helpful in locating the disorder. The proposed procedure is based on the estimation of the disorder moment for the complex system (cf. Szajowski 2011, 2015). The location of the elements responsible for the disorder is determined on the basis of a responsibility measure (the Barlow and Proschan importance measure).

To achieve this goal, we use methods of detecting a change point in the behavior of system elements; this aspect is described in Sect. 2. Significant support in these considerations is provided by the methods of cooperative game theory presented in Appendix A and by a selected antagonistic model of stopping stochastic processes described in Appendix B. Linking the observations of the sensors with the global objectives of the analyzed system, together with rational guidelines for the sensor center, is discussed in Sect. 4. It is preceded, in Sect. 3, by a discussion of the important analogies between cooperative game theory and structural reliability.

In summary, the main objective of this article is to choose the appropriate moment to service the system [cf. Ramamurthy (1990)] and to locate the source of the disorder [cf. Ramamurthy (1990), Chap. 3; Middleton (1968)].

2 Disorder of structure vs. disorder of components

Classical reliability theory deals with life time, or rather the life time distribution. Applied to a complex system, this approach naturally prompts one to analyze the relationship between the reliability of the components and that of the entire system. Let \(\mathbf{N} =\{1,2,\ldots ,{\mathfrak {n}}\}\) be the index set of the elements. The mathematical model assumes binary elements and systems, i.e. the possible states are “functional” or “damaged”. It is universal in the sense that it uses functions defined on the state space of the elements \({{\mathbb {B}}}^n\) and the state space \({\mathbb {B}}\) of the system, where \({{\mathbb {B}}}= \{0,1\}\) and \( n=|\mathbf{N} |\) is the number of components of the system. The analysis of the change in the reliability state is the study of changes in the distribution of the life time, or rather of the remaining life time. It consists of detecting events that are not observable but are detectable on the basis of symptoms, i.e. secondary phenomena whose cause-effect relationship with the life time is known. Establishing these relationships involves determining the deterministic relationship between the reliability of the elements and that of the structure. Because the states of the elements are described by binary vectors, and the state of the system is also binary, the structure models are binary functions on the state space [cf. Moretti and Patrone (2008)]. The state of the system and of the components is considered at a fixed moment of time. It is assumed that the state of the system depends on the states of the components only. We distinguish only two states of a component: disordered or not. The state of the system is dichotomous as well. A similar approach is proposed for the reliability analysis by Mine (1959). We will analyze the importance of elements of a complex structure for the disorder (availability) of the system. The basis for the assessment of structural elements are the importance measures for multi-component systems introduced by Birnbaum (1969). Barlow and Proschan (1975) extended the investigation of element assessment by importance measures that take into account the reliability of the elements.

Definition 1

(Structure function) Let us consider dichotomous elements with states in the set \({{\mathbb {B}}}=\{0,1\}\). A structure function is a map \(f: {{\mathbb {B}}}^n\rightarrow {{\mathbb {B}}}\).

To indicate the state of the \({\mathfrak {i}}\)th component we assign a binary indicator variable \(x_{\mathfrak {i}}\) to component \({\mathfrak {i}}\). There are two states of the component: \(\mathbf{F} =``\text {Failed}''\) or \(\mathbf{W} =``\text {Working}''\). We have \(x_{\mathfrak {i}}={{\mathbb {I}}}_\mathbf{W }(s)\) where \(s\in \{\mathbf {W},\mathbf {F}\}\) is the current state of the component. The state of the structure is the value of the structure function on the states of the components. For any \(\mathbf {x}\in {\mathbb {B}}^n\) denote

$$\begin{aligned} \mathbf {x}_{-i}&=(x_1,\ldots ,x_{i-1},x_{i+1},\ldots ,x_{\mathfrak {n}});\\ (a_i,\mathbf {x}_{-i})&=(x_1,\ldots ,x_{i-1},a_i,x_{i+1},\ldots ,x_{\mathfrak {n}}). \end{aligned}$$

Definition 2

(Irrelevant component) Let f be a structure on \(\mathbf{N} \) and \({\mathfrak {i}}\in \mathbf{N} {}\). The element \({\mathfrak {i}}\) is irrelevant to the structure f if \(f(1,\mathbf {x}_{-i})=f(0,\mathbf {x}_{-i})\) for all \(\mathbf {x}_{-{\mathfrak {i}}}\in {\mathbb {B}}^{n-1}\).
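For concreteness (an added illustration, not part of the formal development), a structure function can be represented as an ordinary Boolean-valued function and the irrelevance condition of Definition 2 can be checked by brute force over \({\mathbb {B}}^{n-1}\); the function names below are of my choosing.

```python
from itertools import product

def series(x):
    """Series structure: the system works iff every component works."""
    return min(x)

def parallel(x):
    """Parallel structure: the system works iff at least one component works."""
    return max(x)

def is_irrelevant(f, i, n):
    """Check Definition 2: component i is irrelevant to the structure f on n components."""
    for rest in product((0, 1), repeat=n - 1):
        x1 = rest[:i] + (1,) + rest[i:]
        x0 = rest[:i] + (0,) + rest[i:]
        if f(x1) != f(x0):
            return False
    return True

print(is_irrelevant(series, 0, 3))           # False: every component matters in a series structure
print(is_irrelevant(lambda x: x[0], 1, 3))   # True: this structure ignores component 1
```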

Let g and h be two structures on \(\mathbf{N} \). The linear composition of these two structures is a structure f on \(\mathbf{N} \cup \{{\mathfrak {n}}+1\}\) defined on \({\mathbb {B}}^{{\mathfrak {n}}+1}\) by

$$\begin{aligned} f(\mathbf {x}_{-({\mathfrak {n}}+1)},x_{{\mathfrak {n}}+1})=x_{{\mathfrak {n}}+1}g(\mathbf {x}_{-({\mathfrak {n}}+1)})+(1-x_{{\mathfrak {n}}+1})h(\mathbf {x}_{-({\mathfrak {n}}+1)}). \end{aligned}$$

Let f be the structure on \(\mathbf{N} \). Its dual \(f^D\) is another structure on \(\mathbf{N} \) defined on \({\mathbb {B}}^n\) by

$$\begin{aligned} f^D(\mathbf {x})=1-f(\mathbf {1}-\mathbf {x}) \text { for all } \mathbf {x}\in {\mathbb {B}}^n, \end{aligned}$$

where \(\mathbf {1}=(1,1,\ldots ,1)\).

Definition 3

(Monotone structure function) Let f be a structure on \(\mathbf{N} \) and let \({\mathbb {B}}^n\) be equipped with the componentwise partial order “\(\le \)”. The structure function f is monotone if for any \(\mathbf {x},\mathbf {y}\in {\mathbb {B}}^n\)

$$\begin{aligned} \mathbf {x}\ge \mathbf {y}\Longrightarrow f(\mathbf {x})\ge f(\mathbf {y}). \end{aligned}$$

Definition 4

(Coherent structure (function)) A monotone structure f on \(\mathbf{N} \) is called semi-coherent if \(f(\mathbf {0})=0\) and \(f(\mathbf {1})=1\). A semi-coherent structure (function) f is called coherent if all components in \(\mathbf{N} \) are relevant to f.

Let \({\mathcal {N}}=\{{\mathfrak {J}}:{\mathfrak {J}}\subset \mathbf{N} \}\) denote the family of all subsets of \(\mathbf{N} \). The elements of \({\mathcal {N}}\) are called subsystems. The vector \(\mathbf {x}^{\mathfrak {J}}\in {\mathbb {B}}^{|{\mathfrak {J}}|}\) represents the states of the components in the set \({\mathfrak {J}}\). For a disjoint decomposition \({\mathfrak {A}},{\mathfrak {B}},{\mathfrak {C}}\in {\mathcal {N}}\) of \(\mathbf{N} \), the vector \((\mathbf {1}^{\mathfrak {A}},\mathbf {0}^{\mathfrak {B}},\mathbf {x}^{\mathfrak {C}})\in {\mathbb {B}}^n\), with elements arranged in the proper order, represents the situation where all the components in the subsystem \({\mathfrak {A}}\) (resp. \({\mathfrak {B}}\)) are in the working (resp. failed) state and the states of the components in \({\mathfrak {C}}\) are as specified by the binary vector \(\mathbf {x}^{\mathfrak {C}}\).

Let f be a structure on \(\mathbf{N} \), \({\mathfrak {A}}\subset \mathbf{N} \) and \({\mathfrak {J}}=\mathbf{N} \setminus {\mathfrak {A}}\). The subset \({\mathfrak {A}}\) of \(\mathbf{N} \) is called a path (cut) set of the structure f if \(f(\mathbf {1}^{\mathfrak {A}},\mathbf {0}^{\mathfrak {J}})=1\) (\(f(\mathbf {0}^{\mathfrak {A}},\mathbf {1}^{\mathfrak {J}})=0\)). A path (cut) set \({\mathfrak {C}}\) of f is called a minimal path (cut) set of f if no proper subset \({\mathfrak {A}}\subsetneq {\mathfrak {C}}\) is a path (cut) set of f.
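As a small illustration (added here, with an example structure of my choosing), the path sets and minimal path sets of a 2-out-of-3 structure can be enumerated directly from the definitions above.

```python
from itertools import combinations

def f_2of3(x):
    """Illustrative 2-out-of-3 structure (works iff at least two components work)."""
    return 1 if sum(x) >= 2 else 0

def state(n, working):
    """State vector with the components in `working` up and all others down."""
    return tuple(1 if i in working else 0 for i in range(n))

def minimal_path_sets(f, n):
    paths = [set(A) for r in range(1, n + 1)
             for A in combinations(range(n), r)
             if f(state(n, set(A))) == 1]
    # keep only path sets with no proper subset that is itself a path set
    return [A for A in paths if not any(B < A for B in paths)]

print(minimal_path_sets(f_2of3, 3))   # [{0, 1}, {0, 2}, {1, 2}]
```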

3 Structure vs. simple game

Selected components of the structure can be treated as players in a cooperative game. Their role in the structure is reduced to monitoring the assigned area. The smooth operation of the monitored subsystem requires communicating one of two messages to the management center of the entire system: either there are no operational problems in the supervised area, or the observed area has ceased to perform its functions. For our purposes, a complex system observed in the described way from the selected elements can be treated as a set of players with a common goal. Their purpose is to detect threats and signal the observed anomaly to the management center in such a way that there are not too many false alarms and the detection of critical anomalies is effective. One of the elements leading to the description of such a system is to determine how to reward the observers: the correct and rapid detection of a disturbance is rewarded, while excessive sensitivity and frequent overinterpretation of signals are punished. These premises essentially create an antagonistic game with elements of cooperation [cf. Carpente et al. (2005)]. In dynamic systems, whether social or technical, the given goals are implemented by appropriately selected strategies (controls). The person making the decision on signaling the risk de facto decides to launch an examination or a repair procedure. If the goal is to determine the point of change (disorder), then Markov moments are the natural strategies. In the next stages of the model construction, methods of system description and change point detection will be combined. The elements of observer cooperation are modeled by cooperative games, including simple games.

Roughly speaking, we consider a complex system observed from selected elements, which we treat as players with a common goal: detecting the threat and signaling the observed anomaly so that there are no false alarms and the detection is effective. In addition, each observer is rewarded for the correct and quick detection. The elements of observer cooperation are modeled by simple game methods.

3.1 Voting decision

Suppose that selected places in the system are closely observed. At each such point we collect information, and in the end we also want to be able to assess the significance of this information and make decisions about sending warning signals. The decision on the state of the entire system is made in the center, where the signals received from the individual points are synthesized into one decision, or into an assessment of whether the system as a whole is operational or does not perform the assumed functions. The specificity of the analyzed system and its importance require that the detection be effective, but without false alarms. The structure of the collective decision is based on democratic principles, i.e. the signals sent from the observation centers are treated as votes from experts, and the rules for taking these votes into account are governed by the rules of simple games (see Appendix A).

Let us describe the complex system monitored by sensors. There are various signals in the system which model the state, or information about the state, of the nodes. Let us describe them by a process \(\{\overrightarrow{X}_n,n\in \overline{{\mathbb {N}}}\}\), \(\overline{{\mathbb {N}}}=\{0\}\cup {{\mathbb {N}}}\), defined on \((\varOmega ,{{\mathcal {F}}},\mathbf{P})\). The process is observed sequentially and it delivers knowledge about the state of each sensor; e.g. the \({\mathfrak {r}}\)th sensor gets some of the coordinates of the vector \(\overrightarrow{X}_n\) at moment n. For the further analysis it is assumed that the processes observed at the nodes have a Markov structure given the random moments \(\theta _r\) at which the transition probabilities change. Different transition probabilities correspond to different states of the analyzed area. When constructing a mathematical model of a complex system, we determine what the desired dynamics of the observation are and what is anomalous. We assume that the change in the system dynamics from expected to undesirable occurs at an instant that is unknown; we only know its typical probabilistic properties. The goal is to determine when the system as a whole is disordered. The system state is described by the structure function on the basis of the states of the individual subsystems. However, the true state of a subsystem is not known directly; only the values of the observable components of the vector \( \overrightarrow{X}_{n} \) are read (observed) by the sensors. Each sensor is responsible for measuring certain components. The sensor's secondary, integral function is to evaluate the observation and send a signal: the decision on the supposed state of the subsystem in the area of direct sensor supervision. An important subject of the system analysis is the rational way of transforming the observations available at the sensor into its dichotomous decision.

The principles of the construction of models with a change (disorder) of signals indicating changes in the studied area, which we use in this approach, are known from the works of Shiryaev (1961). Various modifications and generalizations of the problem formulated in this way are the subject of works by Brodsky and Darkhovsky (1993), Bojdecki (1979), Yoshida (1983), Szajowski (1992). The detection of disorders with a given precision [cf. Sarnowski and Szajowski (2011)] is the most appropriate approach to adopt for the problem under consideration.

The formulation of the model needs a filtration (aggregated knowledge about the system history) and an a priori distribution of the disorder moments. The vectors \(\overrightarrow{X}_n:\varOmega \rightarrow {{\mathbb {E}}}\), \({{\mathbb {E}}}\subset \mathfrak {R}^m\), of the process \(\{\overrightarrow{X}_n\}_{n \in {{\mathbb {N}}}}\) are adapted to the filtration \(\{{\mathcal {F}}_n\}\). On \((\varOmega ,{{\mathcal {F}}},\mathbf{P})\) there are random variables \(\{\theta _r\}_{r=1}^m\) which have zero-inflated geometric distributions (in the sequel \({\varvec{\pi }}:=(\pi _1,\ldots ,\pi _m)\) and \(\mathbf {p}=(p_1,\ldots ,p_m)\) denote the parameters of the prior distributions of the disorder moments):

$$\begin{aligned} \mathbf{P}(\theta _r = 0) =\pi _r\text { and }\mathbf{P}(\theta _r = j) =(1-\pi _r) p_r^{j-1}(1-p_r), \end{aligned}$$
(1)

where \(\pi _r,\; p_r \in (0,1)\), \(j=1,2,\ldots \). In the sequel we refer to this distribution as \({\varvec{\pi }}_r\). The disorder moments at various places of the system are modeled by a multidimensional distribution, which is discussed in Sect. 3.4.
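A minimal sketch of the prior (1), with hypothetical helper names, is given below; it can be used to sample the disorder moments \(\theta _r\) in simulations of the model.

```python
import random

def zig_pmf(j, pi_r, p_r):
    """P(theta_r = j) under the zero-inflated geometric prior (1)."""
    if j == 0:
        return pi_r
    return (1.0 - pi_r) * p_r ** (j - 1) * (1.0 - p_r)

def zig_sample(pi_r, p_r, rng=random):
    """Draw a disorder moment theta_r from the prior (1)."""
    if rng.random() < pi_r:
        return 0
    j = 1
    while rng.random() < p_r:    # stay in the pre-change regime with probability p_r
        j += 1
    return j

print(sum(zig_pmf(j, 0.1, 0.9) for j in range(2000)))   # ~1.0 (sanity check)
print(zig_sample(0.1, 0.9))
```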

Sensor \({\mathfrak {r}}\) follows a process which is based on switching between two time-homogeneous and independent Markov processes \(\{X_{{\mathfrak {r}}\;n}^i\}_{n \in {{\mathbb {N}}}}\), \(i=0,1\), \({\mathfrak {r}}\in \mathbf{N} \), with the state space \(({{\mathbb {E}}}, {\mathcal {B}})\), both independent of \(\{\theta _r\}_{r=1}^m\). The number of sensors and disorder moments is usually smaller than the dimension of the observed signals. To simplify the description, we further suppose \( m = | \mathbf{N} |\).

Assumption 1

It is assumed that the processes \(\{X_{{\mathfrak {r}}\;n}^i\}_{n \in \overline{{\mathbb {N}}}}\) have transition densities with respect to the \(\sigma \)-finite measure \(\mu \), i.e., for any \(B\in {{\mathcal {B}}}\) we have

$$\begin{aligned} \mathbf{P}_x^{i}(X_{r\;1}^{i}\in B)&= \mathbf{P}(X_{r\;1}^{i}\in B|X_{r\;0}^{i}=x)=\int _Bf_x^{r\;i}(y)\mu (dy). \end{aligned}$$
(2)

The random processes \(\{X_{r\;n}\}\), \(\{X_{r\;n}^0\}\), \(\{X_{r\;n}^1\}\) and the random variables \(\theta _r\) are connected via the rule: conditionally on \(\theta _r = k\)

$$\begin{aligned} X_{r\;n}&= X_{r\;n}^0{{\mathbb {I}}}_{\{k:k>n\}}(k)+ X_{r\;n+1-k}^1{{\mathbb {I}}}_{\{k:k\le n\}}(k) \end{aligned}$$

where \(\{X_{r\;n}^1\}\) is started from \(X_{r\;k-1}^0\) (but is otherwise independent of \(X_{r\;\cdot }^0\)).

3.2 Detection of disorder at node \({\mathfrak {r}}\)

The formulation of the problem of disorder detection (the sequential, on-line detection of a distribution change), which is the subject of the analysis in this article, assumes that the a priori distribution of the moment of change is known. In Shiryaev's classification [see Shiryaev (2019), Chap. 1] it is the G-model. Let \({{\mathscr {S}}}^X\) denote the set of all stopping times with respect to the filtration \(\{{{\mathcal {F}}}_n\}_{n \in \overline{{\mathbb {N}}}}\). For any \(x\in {{\mathbb {E}}}\), \(\pi _r,\; p_r\in [0,1]\), \(c_r\in \mathfrak {R}_{+}\) and \(\tau _r\in {{\mathscr {S}}}^X\) the associated risk is defined as follows

$$\begin{aligned} \rho _r(x_r,\pi _r,\tau _r)= \mathbf{P}^{{\varvec{\pi }}_r}( \tau _r < \theta _r+d ) + c_r\mathbf{E}^{{\varvec{\pi }}_r}\max \{\tau _r-\theta _r,0\}, \end{aligned}$$
(3)

where \(\mathbf{P}^{{\varvec{\pi }}_r}(\tau _r<\theta _r+d)\) is the probability of a false alarm with delay d and \(\mathbf{E}^{{\varvec{\pi }}_r}\max \{\tau _r-\theta _r,0\}\) is the average delay of detecting the occurrence of the disruption. In Sarnowski and Szajowski (2011) the construction of \(\tau _r^{*}\) is shown. It is done by transforming the problem of disorder detection into the optimal stopping problem for a Markov process which combines the observation of the state and the posterior distribution process (cf. Sect. 3.3). Following Ochman-Gozdek et al. (2017), the sufficient statistic for the estimation of the disorder at each sensor separately is presented. When the admitted delay of the alarm is d, the inference at moment n is based on the last \(d+2\) observations of the state and the posterior process \(\varPi _{{\mathfrak {r}}\; n}\).

3.3 Relevant reformulated issues at sensors

The process \(\overrightarrow{\xi }_{rn}= (\underline{\overrightarrow{X}}_{r\;n-1-d,n},\varPi _{rn})\) has the components

$$\begin{aligned} \underline{\overrightarrow{X}}_{r\;n-1-d,n}&=(\overrightarrow{X}_{r\;n-1-d},\ldots ,\overrightarrow{X}_{r\;n}) \end{aligned}$$

and \(\{\varPi _{rn}\}_{n\in \overline{{\mathbb {N}}}}\) is the posterior process:

$$\begin{aligned} \varPi _{r0}&= 0,\end{aligned}$$
(4)
$$\begin{aligned} \varPi _{rn}&= \mathbf{P}_x\left( \theta _r \le n \mid {\mathcal {F}}_n\right) ,\; n = 1, 2, \ldots \end{aligned}$$
(5)

The posterior process carries the information about the distribution of the disorder instant \(\theta _r\). The Bayesian estimation of the disorder moment \(\theta _r\), which is related to the minimization of the risk (3), is equivalent to the optimal stopping problem for the process \(\{\overrightarrow{\xi }_{rn}\}_{n\in {{\mathbb {N}}}}\) and the posterior process \(\{\varPi _{r\;n}\}_{n\in {{\mathbb {N}}}}\) with the expected payoff function \(\rho _r(x,\pi _r,\tau _r)=\mathbf{E}_{x,\pi _r}h_r(\overrightarrow{\xi }_{r\;\tau _r},{{\varvec{\Pi }}}_{r\;\tau _r})\) for sensor r. The details of the transformation are shown in (Shiryaev 2019, Sec. 2.2) and in the appendices of Sarnowski and Szajowski (2011) and Ochman-Gozdek et al. (2017). In the further considerations we assume no delay, i.e. \(d = 0\).
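For the reader's convenience, a Bayes update of the posterior (4)–(5) consistent with the switching rule of Sect. 3.1 and the prior (1) can be sketched as follows (with \(d=0\)). This is an illustrative sketch with hypothetical helper names, not the construction of the cited papers; only the pre- and post-change transition densities of (2) are used.

```python
def posterior_update(pi_prev, x_prev, x_new, p_r, f0, f1):
    """One Bayes step for Pi_{r,n} = P(theta_r <= n | F_n), cf. (4)-(5).

    f0(x, y), f1(x, y): pre- and post-change transition densities of (2);
    p_r: geometric parameter of the prior (1); pi_prev: Pi_{r,n-1}.
    """
    # prior-predictive probability that the change has occurred by time n
    a = pi_prev + (1.0 - pi_prev) * (1.0 - p_r)
    num = a * f1(x_prev, x_new)
    den = num + (1.0 - a) * f0(x_prev, x_new)
    return num / den


# toy usage: Gaussian observation densities (independent of the previous state,
# for simplicity) whose mean shifts from 0 to 2 after the change
from math import exp, pi as PI, sqrt

def gauss(y, mu):
    return exp(-(y - mu) ** 2 / 2.0) / sqrt(2.0 * PI)

pi_n, x = 0.1, 0.0
for y in [0.2, -0.1, 1.8, 2.1, 2.3]:          # the last observations look "post-change"
    pi_n = posterior_update(pi_n, x, y, p_r=0.9,
                            f0=lambda u, v: gauss(v, 0.0),
                            f1=lambda u, v: gauss(v, 2.0))
    x = y
print(pi_n)                                    # close to 1 after the shift
```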

Each sensor separately looks for the stopping time \(\tau _r^{*}\in {{\mathscr {S}}}^X\) such that for every \((x,\pi _r)\in {{\mathbb {E}}}\times [0,1]\)

$$\begin{aligned} \rho ^\star (x,\pi _r)=\rho _r(x,\pi _r,\tau _r^\star )=\inf _{\tau _r\in {{\mathscr {S}}}^X}\rho _r(x,\pi _r,\tau _r). \end{aligned}$$
(6)

3.4 Optimal detection problem as voting stopping game

The construction of \(\tau ^{*}\) is made by transforming the multilateral disorder detection problem into the voting stopping problem for the Markov process \(\overrightarrow{\xi }_{rn}\) (cf. Appendix B), where the sensors' payoffs are the following:

$$\begin{aligned} \rho _r(\mathbf {x},{\varvec{\pi }},\tau _r)= \mathbf{E}_{\mathbf {x},{\varvec{\pi }}}\left\{ (1-{\varvec{\Pi }}_{r\tau _r})+c_r\sum _{k=0}^{\tau _r-1}{\varvec{\Pi }}_{r\;k}\right\} . \end{aligned}$$
(7)

The sequence \(\left( \{(\overrightarrow{\xi }_n,{\varvec{\Pi }}_n),{{\mathcal {F}}}_n\}_{n\in \overline{{\mathbb {N}}}},\mathbf{P}_{\mathbf {x},{\varvec{\pi }}}\right) \) is a Markov process. The risk function in (7) can be expressed through a function \(h_r(\mathbf {x}_{n-1},\mathbf {x}_{n},{{\varvec{\alpha }}}_{n})\) such that

$$\begin{aligned} \mathbf{E}_{\mathbf {x},{\varvec{\pi }}}\left[ 1-{\varvec{\Pi }}_{r\tau _r}|{{\mathcal {F}}}_n\right] = \mathbf{E}_{\mathbf {x},{\varvec{\pi }}}\left[ h_r(\overrightarrow{\xi }_{r\;n+1},{\varvec{\Pi }}_{r\;n+1})|{{\mathcal {F}}}_n\right] . \end{aligned}$$
(8)

The system disorder is determined by the equilibrium for the multilateral stopping problem as presented in Appendix B. It is a rational solution because none of the sensors has an incentive to deviate from the strategy given by the ISS determined by the construction of Theorems 3 and 4 of Appendix B with the individual payoffs (7).

Remark 1

Let us recall that for the one-dimensional problem (one sensor) the Wald-Bellman equation takes the form [see Peskir and Shiryaev (2006), pp. 22–33]:

$$\begin{aligned} \rho ^\star (x_r,\pi _r)=\min \{1-\pi _r,c_r\pi _r+\mathbf{E}_{x_r,{\varvec{\pi }}_r}\rho ^\star (\overrightarrow{\xi }_{r\;1},{\varvec{\pi }}_{r\; 1})\}. \end{aligned}$$
(9)

Its solution determines the disorder detection for the one-sensor problem.
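As an illustration of (9), the sketch below (my own simplification, not the construction of the paper) runs value iteration for the classical case of i.i.d. observations on a finite alphabet, where the posterior \(\pi \) alone is a sufficient statistic; the Markov-modulated observations of Sect. 3.1 would require iterating over the pair \((\overrightarrow{\xi }_{r\;n},\pi )\).

```python
import numpy as np

def shiryaev_value_iteration(f0, f1, p, c, grid_size=401, n_iter=300):
    """Value iteration for the Wald-Bellman equation (9), simplified to i.i.d.
    observations on a finite alphabet.  f0[y], f1[y] are the pre-/post-change
    pmfs, p is the geometric parameter of (1), c is the observation cost."""
    grid = np.linspace(0.0, 1.0, grid_size)
    rho = 1.0 - grid                                  # risk of stopping immediately
    for _ in range(n_iter):
        cont = np.zeros_like(grid)
        a = grid + (1.0 - grid) * (1.0 - p)           # P(theta <= n+1 | F_n) before the new observation
        for q0, q1 in zip(f0, f1):
            m = a * q1 + (1.0 - a) * q0               # predictive probability of this symbol
            nxt = a * q1 / np.maximum(m, 1e-300)      # updated posterior after this symbol
            cont += m * np.interp(nxt, grid, rho)
        rho = np.minimum(1.0 - grid, c * grid + cont)
    return grid, rho

# two-symbol alphabet: symbol 1 becomes more frequent after the change
grid, rho = shiryaev_value_iteration(f0=[0.8, 0.2], f1=[0.3, 0.7], p=0.9, c=0.05)
stop = rho >= 1.0 - grid - 1e-12                      # region where stopping attains the minimum
print(grid[np.argmax(stop)])                          # left end of the stopping region
```

In this reduction the optimal rule is the usual threshold rule: stop as soon as the posterior enters the printed stopping region.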

4 Reliability function vs. structure disorder

Let the states of the elements be random. Assuming random states of the structure elements, we obtain a random value of the structure function. The expected value of the structure function is a multilinear function of the probabilities that the individual elements are in the working state. A similar structure can be found in the multilinear extension of cooperative games proposed by Owen (1971/72) (see Appendix A, Remark 6).

Definition 5

(Ramamurthy (1990), Chap. 3) The reliability function of a structure f on \(\mathbf{N} \) with independent components, having states \(\mathbf {X}=(X_1,\ldots ,X_n)\), is the function \(\hat{f}:[0,1]^n\rightarrow [0,1]\) defined by

$$\begin{aligned} \hat{f}({{\varvec{\rho }}})=\mathbf{P}\big (\omega :f(\mathbf {X})=1\big )=\mathbf{E}f(\mathbf {X}), \end{aligned}$$

where \({\varvec{\rho }}=(\rho _1,\ldots ,\rho _n)\) is the vector of component reliabilities, \(\rho _i=\mathbf{P}(X_i=1)\).

Proposition 1

(Decomposition of \(\hat{f}\) built on independent components)

$$\begin{aligned} \hat{f}({\varvec{\rho }})=\rho _i\hat{f}(1,{\varvec{\rho }}_{-i})+(1-\rho _i)\hat{f}(0,{\varvec{\rho }}_{-i}). \end{aligned}$$
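To make the pivotal decomposition concrete, the following sketch (an added illustration, with a 2-out-of-3 structure chosen only for the example) computes \(\hat{f}({\varvec{\rho }})\) by direct enumeration and verifies Proposition 1 for the first component.

```python
from itertools import product

def reliability(f, rho):
    """Reliability function: E f(X) for independent components with P(X_i = 1) = rho[i]."""
    total = 0.0
    for x in product((0, 1), repeat=len(rho)):
        w = 1.0
        for xi, ri in zip(x, rho):
            w *= ri if xi else 1.0 - ri
        total += w * f(x)
    return total

def f_2of3(x):
    """Illustrative 2-out-of-3 structure."""
    return 1 if sum(x) >= 2 else 0

rho = (0.9, 0.8, 0.7)
lhs = reliability(f_2of3, rho)
rhs = (rho[0] * reliability(lambda x: f_2of3((1,) + x[1:]), rho)
       + (1 - rho[0]) * reliability(lambda x: f_2of3((0,) + x[1:]), rho))
print(round(lhs, 6), round(rhs, 6))   # both 0.902: the decomposition of Proposition 1
```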

Remark 2

(Notation: loss of availability (reliability)) For a given vector \({\varvec{\rho }}\):

$$\begin{aligned} \hat{f}_{(i)}({\varvec{\rho }})=\hat{f}(1,{\varvec{\rho }}_{-i})-\hat{f}(0,{\varvec{\rho }}_{-i}). \end{aligned}$$

4.1 Critical path vs. disorder

Definition 6

(Hamming weight (norm)) A vector \(\mathbf {x}\in {\mathbb {B}}^n\) is of size r when exactly r of its components are equal to unity, i.e. \(\sum _{i=1}^nx_i=r\).

Definition 7

(Critical Path Vector (CPV)) \(\mathbf {x}\in {\mathbb {B}}^n\) is a critical path vector of the structure f for component i if

  1. \(\mathbf {x}=(1_i,\mathbf {x}_{-i})\);

  2. \(f(\mathbf {x})=1\) and \(f(0_i,\mathbf {x}_{-i})=0\).

Remark 3

(Notation: the total number of CPV of f for i.) \(\eta _i(r,f)\) is the number of critical path vectors of f of size r for component i and \(\eta _i(f)\) is the total number of critical path vectors of f for component i.

4.2 Critical paths and importance of i (structural)

Remark 4

(Relations to the game theory) The coalition \({{\mathfrak {S}}}\subset \mathbf{N} \) is called a swing for player \({\mathfrak {i}}\) if \({\mathfrak {i}}\in {{\mathfrak {S}}}\) and \({{\mathfrak {S}}}\) is a path set (i.e. winning coalition) but \({{\mathfrak {S}}}\setminus \{{\mathfrak {i}}\}\) is not a path set (losing coalition). This way, \(\eta _{{\mathfrak {i}}}(r,f)\) is the number of swings of f of size r for player \({\mathfrak {i}}\).

Remark 5

(Absolute Banzhaf index) The absolute Banzhaf index \(\psi _{{\mathfrak {i}}}(f)\) of component \({\mathfrak {i}}\) is by definition

$$\begin{aligned} \psi _{\mathfrak {i}}(f)=\frac{\eta _{\mathfrak {i}}(f)}{2^{n-1}} \end{aligned}$$

where \(n=|\mathbf{N} |\).
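A brute-force count of the critical path vectors of Definition 7 gives the absolute Banzhaf index directly. The sketch below is an added illustration, with an example structure of my choosing (a component in series with a parallel pair).

```python
from itertools import product

def eta(f, i, n):
    """Total number of critical path vectors of f for component i (cf. Definition 7, Remark 3)."""
    count = 0
    for rest in product((0, 1), repeat=n - 1):
        x1 = rest[:i] + (1,) + rest[i:]
        x0 = rest[:i] + (0,) + rest[i:]
        if f(x1) == 1 and f(x0) == 0:
            count += 1
    return count

def banzhaf(f, i, n):
    """Absolute Banzhaf index psi_i(f) = eta_i(f) / 2^(n-1) (Remark 5)."""
    return eta(f, i, n) / 2 ** (n - 1)

def f_series_parallel(x):
    """Illustrative structure: component 0 in series with the parallel pair (1, 2)."""
    return x[0] * max(x[1], x[2])

print([banzhaf(f_series_parallel, i, 3) for i in range(3)])   # [0.75, 0.25, 0.25]
```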

The importance of the \({\mathfrak {i}}\)th element is based on the change point analysis. The posterior life time distributions are based on the signals from the sensors. Let \(\mathbf {x}_{{\mathfrak {r}}\;k}\) be the history of the \({\mathfrak {r}}\)th sensor's signals up to the kth signal. The disorder moment \(\theta _{\mathfrak {r}}\) has the posterior distribution given by (4) and (5). Let us assume that the kth signals of the sensors correspond to the real time \(T_k\) and that the distribution functions \(G_{j{\mathfrak {r}}}(t|\mathbf {x}_{{\mathfrak {r}}\;k})\), \(j=0,1\), of the working time of element \({\mathfrak {r}}\) before (\(j=0\)) and after (\(j=1\)) the disorder are given. The distribution of the working time after \(T_k\) is

$$\begin{aligned} G_{\mathfrak {r}}(t)=(1-\varPi _{{\mathfrak {r}}\;k}(\mathbf {x}_k))G_{0{\mathfrak {r}}}(t|\mathbf {x}_{{\mathfrak {r}}\;k})+\varPi _{{\mathfrak {r}}\;k}(\mathbf {x}_k)G_{1{\mathfrak {r}}}(t|\mathbf {x}_{{\mathfrak {r}}\;k}). \end{aligned}$$
(10)

Let \(T_k=\tau ^*\) and \(\mathbf {{\mathbf {G}}}=(G_1,G_2,\ldots ,G_n)\). The next signal, at \(t\ge \tau ^*\), is the system breakdown. The probability density function of the system's life time is given by

$$\begin{aligned} \sum _{j=1}^n\hat{f}_{j}(\mathbf {{\mathbf {1}}} -\mathbf {{\mathbf {G}}}(t)) g_j(t). \end{aligned}$$

The \({\mathfrak {i}}\)th element's liability for the failure is a crucial ingredient of the construction of the maintenance strategy. The proposed determination of the responsible element is based on the following proposition concerning the importance measure of the \({\mathfrak {i}}\)th element expressed through the reliability function.

Proposition 2

(Barlow and Proschan (1975)) Let f be a semi-coherent structure on \(\mathbf{N} \) and let \(\hat{f}\) be its reliability function. Assuming that the life times of all components are absolutely continuous, the probability that the system failure is caused by the failure of component \({\mathfrak {i}}\), given that the system fails at the instant of time t, is given by

$$\begin{aligned} \frac{\hat{f}_{\mathfrak {i}}(\mathbf {{\mathbf {1}}}-\mathbf {G}(t))g_{\mathfrak {i}}(t)}{\sum _{j=1}^n\hat{f}_j(\mathbf {{\mathbf {1}}}-\mathbf {{\mathbf {G}}}(t))g_{j}(t)}. \end{aligned}$$
(11)

5 Conclusion

In the case of the system breakdown at \(\tau ^*\), under the above assumptions, the probability that the system failure is caused by the failure of component \({\mathfrak {i}}\) is given by

$$\begin{aligned} \int _{\tau ^*}^\infty \hat{f}_{\mathfrak {i}}(\mathbf {{\mathbf {1}}}-\mathbf {{\mathbf {G}}}(t))g_{\mathfrak {i}}(t)dt. \end{aligned}$$
(12)
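For illustration, the quantity (12) can be evaluated numerically once the working-time distributions \(G_{\mathfrak {r}}\) are specified. The sketch below assumes, purely for the example, exponential working times after \(\tau ^*\) for a 2-out-of-3 structure; for identical components the responsibilities come out equal, as expected by symmetry.

```python
from math import exp
from itertools import product

def reliability(f, rho):
    """Reliability function: E f(X) for independent components with P(X_i = 1) = rho[i]."""
    total = 0.0
    for x in product((0, 1), repeat=len(rho)):
        w = 1.0
        for xi, ri in zip(x, rho):
            w *= ri if xi else 1.0 - ri
        total += w * f(x)
    return total

def f_hat_i(f, i, rho):
    """Remark 2: reliability with component i forced up minus forced down."""
    hi, lo = list(rho), list(rho)
    hi[i], lo[i] = 1.0, 0.0
    return reliability(f, hi) - reliability(f, lo)

def bp_responsibility(f, i, rates, tau_star, t_max=50.0, steps=20000):
    """Numerical evaluation of (12) with exponential working times
    G_r(t) = 1 - exp(-rates[r] * t) after tau* (an illustrative choice)."""
    dt = (t_max - tau_star) / steps
    total = 0.0
    for k in range(steps):
        t = tau_star + (k + 0.5) * dt                   # midpoint rule
        surv = [exp(-lam * t) for lam in rates]         # 1 - G_r(t)
        g_i = rates[i] * exp(-rates[i] * t)             # density g_i(t)
        total += f_hat_i(f, i, surv) * g_i * dt
    return total

f_2of3 = lambda x: 1 if sum(x) >= 2 else 0
print([round(bp_responsibility(f_2of3, i, [1.0, 1.0, 1.0], tau_star=0.0), 3)
       for i in range(3)])                              # about [0.333, 0.333, 0.333]
```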

The proposed rationalization captures in a mathematical model an important practical aspect, namely the formalization of the disorder moment for a complex system; in essence, it uses a description of the associated observation (diagnostic) points in the language of cooperative games. However, the method by which the significance of the individual system elements is determined, in connection with their actual importance, may be a preliminary stage of analyzing the reliability or readiness of the system. The initial stage would be to propose the right simple game for the method analyzed in this work. This phase of the construction of the model is not the subject of a detailed analysis here. The author believes that this step can be eliminated if we apply methods used in quitting games.

An important application of such an analysis is the appropriate placement of honeypots [see Píbil et al. (2012)]. Planning structures with effective traps is easier when the mathematical model allows one to designate the locations that lead to resources of the system that are critical from a security point of view. Particularly promising is the problem of determining important locations by specifying the cost function of individual resources and their values for the observers deployed in the system. Thanks to this formulation, the arbitrary structure of a simple game can be replaced by the analysis of an antagonistic game with possible cooperation [see Carpente et al. (2005), Herings and Predtetchinski (2014)].