Article

Allocation of Eavesdropping Attacks for Multi-System Remote State Estimation

Shandong Key Laboratory of Industrial Control Technology, School of Automation, Qingdao University, Qingdao 266071, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(3), 850; https://doi.org/10.3390/s24030850
Submission received: 5 January 2024 / Revised: 23 January 2024 / Accepted: 26 January 2024 / Published: 28 January 2024
(This article belongs to the Section Sensor Networks)

Abstract

In recent years, the problem of remote state estimation in cyber–physical systems (CPSs) under eavesdropping attacks has been a source of concern. Aiming at the existence of eavesdroppers in multi-system CPSs, we study the optimal attack energy allocation problem for remote state estimation based on the channel SINR (signal-to-interference-plus-noise ratio). We assume there are N sensors, each of which sends its local state estimate to the remote estimator over a shared wireless communication channel. Because its power is limited, the eavesdropper can attack at most M of the N channels. Our goal is to use the Markov decision process (MDP) framework to maximize the eavesdropper's state estimation error, so as to determine the eavesdropper's optimal attack allocation. We propose a backward induction algorithm that uses the MDP to obtain the optimal attack energy allocation strategy; compared with the traditional induction algorithm, it has lower computational cost. Finally, numerical simulation results verify the correctness of the theoretical analysis.

1. Introduction

Cyber–physical systems (CPSs) are considered among the revolutionary technologies arising from continuous breakthroughs and innovations in information technology and the manufacturing industry [1]. A CPS is a multidimensional, complex system that deeply integrates control, communication and computing (the so-called 3C technologies). Through the cognition, communication and control of physical objects, it can realize large-scale information acquisition and intelligent control of the physical world, so that the network can monitor the specific actions of a physical entity in a real-time, reliable, remote and safe way [2,3]. CPSs are widely used in aerospace, industrial production, advanced automobile systems, energy reserves, environmental monitoring, national defense, infrastructure construction, intelligent buildings, smart grids, transportation systems and telemedicine [4]. With the rapid development of networked computing, sensing and control systems, CPS technology is ever more widely applied; at the same time, emerging network attacks make wireless CPSs fragile, so the security of CPSs has become a primary consideration [5,6,7].
Regarding the security of a system's remote state estimation, malicious network attacks take many forms, but they fall into three main and common categories: denial-of-service (DoS) attacks, integrity attacks (including replay and false data injection) and eavesdropping attacks [8]. DoS attacks are designed to interfere with wireless communication channels and lead to a significant decline in estimation accuracy in CPSs [9]. Peng [10] and Zhang [11] formulated the optimal attack power allocation for remote state estimation in a multi-system as a Markov decision process (MDP) problem. Integrity attacks can corrupt the transmitted data packets under stealth constraints [12,13]. In Ref. [14], an important scenario is designed from the attacker's point of view, in which a false data injection attack can completely and covertly destroy a CPS. In addition, the channel may be subject to eavesdropping attacks, which can lead to serious economic losses and even threaten personal safety through eavesdropping on private data [15,16]. For example, in an intelligent transportation system, eavesdroppers infer the path planning of vehicles by monitoring location information, and on this basis eavesdropping attacks easily succeed [17,18]. In existing research, data encryption is the main method of protecting system privacy from eavesdropping attacks [19,20,21].
Recently, the issue of remote state estimation in the presence of eavesdroppers has attracted widespread attention from researchers. Eavesdropping attacks are divided into passive and active attacks. Several estimation and control problems have been studied in the presence of active attacks. Han [22] studied active eavesdropping on fading channels and proposed an interference-assisted eavesdropping method to improve the probability of successful monitoring. Yuan [23] constructed a two-person non-zero-sum game between the sensor and the active eavesdropper, with each player minimizing its own estimation error covariance while maximizing the opponent's. Ding [24] treated the trade-off between stealthiness and eavesdropping performance as a constrained MDP and proposed an optimal strategy for active eavesdropping.
The studies above indicate a certain breakthrough in the design of active eavesdropping solutions. This paper mainly studies passive attacks by eavesdroppers. Tsiamis [25] proposed a confidentiality mechanism that randomly withholds sensor information and explored the trade-off between user utility and control-theoretic confidentiality through optimization methods. Huang [26] proposed a new encryption strategy that accounts for the cost of the encryption process, and then proved the optimality of deterministic encryption strategies and the existence of a Markov strategy over a finite time horizon. Wang [27] theoretically established structural properties of the optimal transmission schedule for both known and unknown eavesdropper estimation errors. In Ref. [28], the transmission scheduling of remote state estimation systems with eavesdroppers on packet-dropping links was studied. Yuan [29] transformed the system model into an MDP to obtain the optimal transmission schedule that minimizes the age of information (AoI) of the CPS while keeping the AoI of the eavesdropper above a certain level, and proved that the optimal transmission schedule has a threshold structure in the AoI of the CPS and of the eavesdropper, respectively. In [30], the problem is formulated as a Stackelberg game, and a strategy is derived that maximizes the secure transmission rate between sensor and controller in the presence of malicious eavesdroppers and disruptors. After analyzing the influence of different strategies on eavesdropping performance, Zhou [31] studied multi-output systems and proposed a decryption scheduling scheme that minimizes the expected estimation error under an energy constraint.
Most of the existing literature studies optimal sensor transmission strategies from the perspective of the remote estimator. In contrast to [27,28], this paper studies optimal attack energy allocation strategies from the perspective of the eavesdropper. Moreover, the previous literature mainly focuses on CPSs with an eavesdropper in a single system over a finite time horizon, and pays little attention to eavesdroppers in a multi-system setting over an infinite time horizon. In this paper, we study the optimal attack allocation problem for remote state estimation in CPSs under eavesdropping attacks in a multi-system setting over an infinite time horizon. Our goal is to maximize the state estimation error of the eavesdropper, so as to determine the eavesdropper's optimal attack allocation. The contributions of this paper are as follows:
  • We propose a multi-system eavesdropping attack model based on channel SINR, which reveals the relationship between attack power and packet arrival rate.
  • In the infinite time horizon, under the condition of energy constraint, the optimal attack scheduling strategy is obtained by constructing MDP and using the Bellman equation.
  • Finally, according to the given algorithm, the optimal attack energy allocation strategy is obtained, and then it is verified by simulation experiments.
Notations: The following symbols are used throughout the paper. $\mathbb{N}$ is the set of natural numbers. The $n$-dimensional Euclidean space is denoted by $\mathbb{R}^n$. $\mathcal{S}_+^n$ ($\mathcal{S}_{++}^n$) is the set of $n \times n$ positive semi-definite (positive definite) matrices. $\mathrm{Tr}(X)$ is the trace of a matrix $X$, $X^T$ is the transpose of $X$ and $X^{-1}$ denotes the inverse of $X$. $X > 0$ and $X \geq 0$ mean that $X$ is positive definite and positive semi-definite, respectively. For functions $\tilde{g}$ and $h$, $\tilde{g}h(x)$ stands for the function composition $\tilde{g}(h(x))$ and $h^n(x) = h(h^{n-1}(x))$ with $h^0(x) = x$. $\mathbb{E}[\cdot]$ denotes expectation and $\Pr[\cdot]$ denotes probability.

2. Problem Setup

2.1. System Model

Figure 1 shows the system architecture. We consider $N$ general discrete-time time-invariant stochastic systems, given as follows:
$x_i(k+1) = A_i x_i(k) + \omega_i(k),$
$y_i(k) = C_i x_i(k) + v_i(k),$
where $k \in \mathbb{N}$ is the time index, and $x_i(k) \in \mathbb{R}^n$ and $y_i(k) \in \mathbb{R}^m$ denote the state of the $i$th system and the measurement vector taken by its sensor at time $k$, respectively. The process noise $\omega_i(k) \in \mathbb{R}^n$ and the observation noise $v_i(k) \in \mathbb{R}^m$ are assumed to be independent and identically distributed (i.i.d.) zero-mean Gaussian noises with covariance matrices $Q_i \geq 0$ and $R_i > 0$, respectively. The initial state $x_i(0)$ of the $i$th system is also a zero-mean Gaussian random variable, independent of $\omega_i(k)$ and $v_i(k)$, with covariance $F_i(0) \geq 0$. We also assume that the pair $(A_i, C_i)$ is observable and $(A_i, \sqrt{Q_i})$ is stabilizable. The sensors are assumed to be smart sensors with a certain computing capability: each sensor first uses its collected measurements to compute a local state estimate and then transmits this local estimate to the remote estimator. Therefore, we use $\tilde{x}_i(k)$ and $\tilde{F}_i(k)$ to denote the $i$th sensor's local minimum mean-squared error (MMSE) state estimate and the corresponding error covariance [32]:
$\tilde{x}_i(k) = \mathbb{E}[x_i(k) \mid y_i(1), y_i(2), \dots, y_i(k)],$
$\tilde{F}_i(k) = \mathbb{E}[(x_i(k) - \tilde{x}_i(k))(x_i(k) - \tilde{x}_i(k))^T \mid y_i(1), y_i(2), \dots, y_i(k)],$
which can be calculated based on a standard Kalman filter:
$\tilde{x}_i(k|k-1) = A_i \tilde{x}_i(k-1),$
$\tilde{F}_i(k|k-1) = A_i \tilde{F}_i(k-1) A_i^T + Q_i,$
$K_i(k) = \tilde{F}_i(k|k-1) C_i^T [C_i \tilde{F}_i(k|k-1) C_i^T + R_i]^{-1},$
$\tilde{x}_i(k) = A_i \tilde{x}_i(k-1) + K_i(k) (y_i(k) - C_i A_i \tilde{x}_i(k-1)),$
$\tilde{F}_i(k) = (I - K_i(k) C_i) \tilde{F}_i(k|k-1),$
where
$\tilde{x}_i(k|k-1) = \mathbb{E}[x_i(k) \mid y_i(1), y_i(2), \dots, y_i(k-1)],$
$\tilde{F}_i(k|k-1) = \mathbb{E}[(x_i(k) - \tilde{x}_i(k|k-1))(x_i(k) - \tilde{x}_i(k|k-1))^T \mid y_i(1), y_i(2), \dots, y_i(k-1)],$
$K_i(k)$ is the Kalman filter gain and the initial condition is $\tilde{x}_i(0) = 0$.
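The five recursions above can be collected into a single update function. The sketch below is a minimal illustration (not from the paper; the matrices are generic NumPy placeholders):

```python
import numpy as np

def kalman_step(x_prev, F_prev, y, A, C, Q, R):
    """One iteration of the sensor-side Kalman filter above."""
    # Prediction: x~(k|k-1) and F~(k|k-1)
    x_pred = A @ x_prev
    F_pred = A @ F_prev @ A.T + Q
    # Kalman gain K(k)
    K = F_pred @ C.T @ np.linalg.inv(C @ F_pred @ C.T + R)
    # Measurement update: x~(k) and F~(k)
    x_new = x_pred + K @ (y - C @ x_pred)
    F_new = (np.eye(A.shape[0]) - K @ C) @ F_pred
    return x_new, F_new
```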
For ease of representation, we can also define the Lyapunov and Riccati operators $h_i, \tilde{g}_i : \mathcal{S}_+^n \to \mathcal{S}_+^n$ as
$h_i(X) \triangleq A_i X A_i^T + Q_i,$
$\tilde{g}_i(X) \triangleq X - X C_i^T [C_i X C_i^T + R_i]^{-1} C_i X,$
$h_i^k(X) \triangleq \underbrace{h_i \circ h_i \circ \cdots \circ h_i}_{k \text{ times}}(X).$
Under the observability and stabilizability assumptions above, it has been shown that the posterior estimation error covariance of the Kalman filter converges exponentially from any initial condition to a unique steady-state value [33], i.e., $\tilde{F}_i(k) = \tilde{F}_i$ for $k \geq 1$, where the steady-state error covariance $\tilde{F}_i$ is determined by the unique positive semi-definite solution of $\tilde{g}_i \circ h_i(X) = X$ [34].
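Numerically, $\tilde{F}_i$ can be obtained by iterating the composed map $\tilde{g}_i \circ h_i$ to its fixed point. A minimal sketch (illustrative only, not from the paper; the stopping tolerance is an assumption):

```python
import numpy as np

def h(X, A, Q):
    # Lyapunov operator: h(X) = A X A^T + Q
    return A @ X @ A.T + Q

def g(X, C, R):
    # Riccati operator: g(X) = X - X C^T (C X C^T + R)^{-1} C X
    return X - X @ C.T @ np.linalg.inv(C @ X @ C.T + R) @ C @ X

def steady_state_cov(A, C, Q, R, tol=1e-10, max_iter=10_000):
    # Iterate g∘h from Q until convergence; under the paper's
    # observability/stabilizability assumptions the iteration
    # converges to the unique PSD fixed point F~.
    X = Q.copy()
    for _ in range(max_iter):
        X_next = g(h(X, A, Q), C, R)
        if np.max(np.abs(X_next - X)) < tol:
            return X_next
        X = X_next
    return X
```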
 Lemma 1
([35]). For $0 \leq \xi_1 \leq \xi_2$, it holds that
$\mathrm{Tr}\{\tilde{F}_i\} \leq \mathrm{Tr}\{h_i^{\xi_1}(\tilde{F}_i)\} \leq \mathrm{Tr}\{h_i^{\xi_2}(\tilde{F}_i)\}.$

2.2. Attack Model Based on SINR

To simulate random data loss due to fading and interference, we assume that the communication between the sensor and the remote estimator or the eavesdropper is via an Additive White Gaussian Noise (AWGN) channel using quadrature amplitude modulation (QAM). Data packets sent by the sensor are quantized and mapped to QAM symbols. Then, digital communication theory reveals the relationship between symbol error rate (SER) and SINR as follows [5,36]:
$SER_i^\star = 2 Q\!\left(\sqrt{\alpha \, SINR_i^\star}\right),$
where $Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-t^2/2} \, dt$, $\star \in \{e, a\}$ and $\alpha > 0$ is a parameter; $\star = e$ represents the remote estimator side and $\star = a$ the eavesdropper side.
Considering the remote estimator side first, the channel SINR for the remote estimator at time k is [24],
$SINR_i^e = \frac{\Phi_i p_i(k)}{\sigma_{i,e}^2},$
where $\Phi_i$ is the gain of the $i$th communication channel between the sensor and the remote estimator, $p_i(k) \geq 0$ is the transmission power of the QAM symbol used by sensor $i$ at time $k$ and $\sigma_{i,e}^2$ is the AWGN power of the $i$th channel between the sensor and the remote estimator. Define a random variable $\zeta_i^e(k) \in \{0, 1\}$ indicating whether the remote estimator successfully receives the information at time $k$, i.e.,
$\zeta_i^e(k) = \begin{cases} 1, & \text{if } \tilde{x}_i(k) \text{ is successfully received by the estimator}, \\ 0, & \text{otherwise (regarded as a dropout)}. \end{cases}$
Then, the packet arrival rate of the remote estimator is
$\lambda_i^e(k) = \Pr\{\zeta_i^e(k) = 1\} = 1 - SER_i^e = 1 - 2 Q\!\left(\sqrt{\frac{\alpha \Phi_i p_i(k)}{\sigma_{i,e}^2}}\right).$
Secondly, on the eavesdropper side, the channel SINR at time $k$ can be expressed as
$SINR_i^a = \frac{\Psi_i p_i(k)}{a_i(k) + \sigma_{i,a}^2},$
where $\Psi_i$ is the gain of the $i$th communication channel between the sensor and the eavesdropper, $a_i(k)$ denotes the attack power launched by the eavesdropper on the $i$th channel and $\sigma_{i,a}^2$ is the AWGN power of the channel between the sensor and the eavesdropper. Similarly, we use a binary random variable $\zeta_i^a(k)$ to indicate whether the eavesdropper succeeds in eavesdropping, i.e.,
$\zeta_i^a(k) = \begin{cases} 1, & \text{if } \tilde{x}_i(k) \text{ is successfully eavesdropped by the eavesdropper}, \\ 0, & \text{otherwise}. \end{cases}$
Then, the probability that an eavesdropper can successfully eavesdrop is:
$\lambda_i^a(k) = \Pr\{\zeta_i^a(k) = 1\} = 1 - SER_i^a = 1 - 2 Q\!\left(\sqrt{\frac{\alpha \Psi_i p_i(k)}{a_i(k) + \sigma_{i,a}^2}}\right).$
The processes $\zeta_i^e(k)$ and $\zeta_i^a(k)$ are assumed to be mutually independent.
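The two packet-arrival-rate formulas can be evaluated directly. A small sketch (not from the paper; all parameter values are placeholders) showing how the attack power $a_i(k)$ raises the noise floor and depresses the eavesdropping success probability $\lambda_i^a(k)$:

```python
import math

def gaussian_q(x):
    # Q-function: tail probability of the standard normal distribution.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def arrival_rate_estimator(p, Phi, sigma2_e, alpha=1.0):
    # lambda^e = 1 - 2 Q( sqrt(alpha * Phi * p / sigma_e^2) )
    return 1.0 - 2.0 * gaussian_q(math.sqrt(alpha * Phi * p / sigma2_e))

def arrival_rate_eavesdropper(p, Psi, a, sigma2_a, alpha=1.0):
    # lambda^a = 1 - 2 Q( sqrt(alpha * Psi * p / (a + sigma_a^2)) );
    # the attack power a enters the SINR denominator, so larger a
    # lowers the eavesdropper's own success rate.
    return 1.0 - 2.0 * gaussian_q(math.sqrt(alpha * Psi * p / (a + sigma2_a)))
```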

2.3. Remote State Estimation

With the local estimates received, the remote estimator computes its MMSE state estimate $\hat{x}_i^e(k)$ and the corresponding estimation error covariance $F_i^e(k)$ at time $k$ via the following iteration:
$\hat{x}_i^e(k) = \begin{cases} \tilde{x}_i(k), & \text{if } \zeta_i^e(k) = 1, \\ A_i \hat{x}_i^e(k-1), & \text{otherwise}, \end{cases}$
$F_i^e(k) = \begin{cases} \tilde{F}_i, & \text{if } \zeta_i^e(k) = 1, \\ h_i(F_i^e(k-1)), & \text{otherwise}. \end{cases}$
Similarly, denoting $\hat{x}_i^a(k)$ and $F_i^a(k)$ as the MMSE state estimate and corresponding error covariance of the eavesdropper at time $k$, we have
$\hat{x}_i^a(k) = \begin{cases} \tilde{x}_i(k), & \text{if } \zeta_i^a(k) = 1, \\ A_i \hat{x}_i^a(k-1), & \text{otherwise}, \end{cases}$
$F_i^a(k) = \begin{cases} \tilde{F}_i, & \text{if } \zeta_i^a(k) = 1, \\ h_i(F_i^a(k-1)), & \text{otherwise}. \end{cases}$
Therefore,
$\mathbb{E}[F_i^e(k)] = \lambda_i^e(k) \tilde{F}_i + (1 - \lambda_i^e(k)) h_i(F_i^e(k-1)),$
$\mathbb{E}[F_i^a(k)] = \lambda_i^a(k) \tilde{F}_i + (1 - \lambda_i^a(k)) h_i(F_i^a(k-1)).$
Define $\mathcal{S} \triangleq \{\tilde{F}_i, h_i(\tilde{F}_i), h_i^2(\tilde{F}_i), \dots\}$; it consists of all possible values of $F_i^e(k)$ and $F_i^a(k)$.
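The packet-loss-driven recursion for $F_i^a(k)$ is easy to simulate. A sketch (illustrative only, not from the paper; the horizon and seed are assumptions) that estimates the long-run average trace for a given success rate $\lambda$:

```python
import numpy as np

def average_trace(A, Q, F_bar, lam, T=2000, seed=0):
    # Sample the recursion: F(k) = F~ with probability lam (successful
    # eavesdrop), otherwise F(k) = h(F(k-1)) = A F(k-1) A^T + Q,
    # and return the empirical mean of Tr F(k) over T steps.
    rng = np.random.default_rng(seed)
    F = F_bar.copy()
    total = 0.0
    for _ in range(T):
        if rng.random() < lam:
            F = F_bar.copy()
        else:
            F = A @ F @ A.T + Q
        total += np.trace(F)
    return total / T
```

As expected from the recursion, a lower success rate lets $F_i^a(k)$ grow under repeated applications of $h_i$, so the average trace increases as $\lambda$ decreases.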

2.4. Problem Formulation

Specifically, we consider the following problem: from the perspective of the eavesdropper, over an infinite time horizon and under a limited energy budget, find the attack allocation that maximizes the eavesdropper's state estimation error, i.e.,
 Problem 1.
$\max_{\pi} \; J(\pi),$
$\text{s.t.} \; \sum_{i=1}^{N} a_i(k) \leq \bar{a},$
where $\pi = \{(a_1(1), a_2(1), \dots, a_N(1)), (a_1(2), a_2(2), \dots, a_N(2)), \dots\}$ is an admissible attack policy specifying the attack powers at time steps $1, 2, \dots$, and $\bar{a}$ is the attacker's energy budget at each time step. The infinite-horizon average expectation of the eavesdropper's estimation error covariance can be expressed as
$J(\pi) = \liminf_{T \to \infty} \frac{1}{T+1} \, \mathbb{E}\!\left[\sum_{k=0}^{T} \sum_{i=1}^{N} \mathrm{Tr}\, F_i^a(k)\right].$

3. Optimal Attack Schedule

In this section, we formulate Problem 1 as a discrete-time MDP and solve it. We also give an algorithm for searching for the optimal eavesdropping attack strategy.

3.1. MDP Formulation

For notational convenience, denote by $\tau_i^e(k)$ (resp. $\tau_i^a(k)$) the holding time at the estimator (resp. eavesdropper) at time $k$, that is, the time elapsed since the last successful reception, which can be expressed as
$\tau_i^e(k) = k - \max\{k^* : \zeta_i^e(k^*) = 1, \; 0 \leq k^* \leq k\},$
$\tau_i^a(k) = k - \max\{k^* : \zeta_i^a(k^*) = 1, \; 0 \leq k^* \leq k\}.$
Obviously, $\tau_i^e(k) \in \mathcal{S}_i^e = \mathbb{N}$ (resp. $\tau_i^a(k) \in \mathcal{S}_i^a = \mathbb{N}$), and we obtain
$F_i^e(k) = h_i^{\tau_i^e(k)}(\tilde{F}_i),$
$F_i^a(k) = h_i^{\tau_i^a(k)}(\tilde{F}_i).$
We use an MDP to describe the dynamics of the CPS under eavesdropping attacks. The MDP is expressed mathematically as the tuple $(\mathcal{S}, \mathcal{A}, P, r(\cdot))$, with elements as follows.
State space: let $s_i(k) = (\tau_i^e(k-1), \tau_i^a(k-1)) \in \mathcal{S}_i^e \times \mathcal{S}_i^a = \mathbb{N}^2$, where $\tau_i^e(k-1)$ and $\tau_i^a(k-1)$ can be regarded as the state of process $i$ at time $k-1$ at the remote estimator side and the eavesdropper side, respectively. The state at time $k$ is defined as $s(k) = (s_1(k), s_2(k), \dots, s_N(k))$, taking values in the countable state space $\mathcal{S}_i \triangleq \mathcal{S}_i^e \times \mathcal{S}_i^a$. Let $\mathcal{S} = \{\mathcal{S}_1, \dots, \mathcal{S}_N\}$.
Action space: the action space is defined as $\mathcal{A} \triangleq \{a_1(k), a_2(k), \dots, a_N(k)\}$, where $a_i(k) \in \{0, a_i(1), a_i(2), \dots, a_i(l_i)\}$, $i \in \{1, 2, \dots, N\}$, and $a_i(l_i)$ is the maximum attack power on channel $i$. Thus, the action is $a(k) \triangleq \{a_1(k), a_2(k), \dots, a_N(k)\} \in \mathcal{A}$.
Transition probability: let the state transition probability at time $k$ be $P_i(s_i(k+1) \mid s_i(k), a_i(k))$, which represents the probability of the state changing from $s_i(k)$ to $s_i(k+1)$ under action $a_i(k)$, where $s_i(k), s_i(k+1) \in \mathcal{S}$ and $a_i(k) \in \mathcal{A}$. For simplicity, let the state at time $k$ be $s_i(k) = (j_1, j_2)$. Then, the state transition probabilities are as follows:
$P_i(s_i(k+1) \mid s_i(k), a_i(k)) = \begin{cases} \lambda_i^e(k) \lambda_i^a(k), & \text{if } s_i(k+1) = (0, 0), \\ \lambda_i^e(k) (1 - \lambda_i^a(k)), & \text{if } s_i(k+1) = (0, j_2 + 1), \\ (1 - \lambda_i^e(k)) \lambda_i^a(k), & \text{if } s_i(k+1) = (j_1 + 1, 0), \\ (1 - \lambda_i^e(k)) (1 - \lambda_i^a(k)), & \text{if } s_i(k+1) = (j_1 + 1, j_2 + 1). \end{cases}$
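For a single channel, these four cases translate directly into code. A minimal sketch (not from the paper) returning the one-step distribution over successor states:

```python
def transition_probs(s, lam_e, lam_a):
    # s = (j1, j2): current holding times at the estimator and the
    # eavesdropper. Returns {next_state: probability} following the
    # four cases of the transition law, using the independence of
    # the reception processes zeta^e and zeta^a.
    j1, j2 = s
    return {
        (0, 0): lam_e * lam_a,
        (0, j2 + 1): lam_e * (1.0 - lam_a),
        (j1 + 1, 0): (1.0 - lam_e) * lam_a,
        (j1 + 1, j2 + 1): (1.0 - lam_e) * (1.0 - lam_a),
    }
```

Since the four cases partition all outcomes, the returned probabilities always sum to one.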
Payoff function: let $r(\cdot)$ be the immediate reward function, defined as
$r(s(k), a(k)) \triangleq \sum_{i=1}^{N} \mathrm{Tr}(F_i^a(k)).$
Obviously, the single-stage reward at time $k$ is independent of the action and depends only on the current state.
Note that the eavesdropper's randomized decision rule is a mixed strategy sequence $\pi = \{(a_1(1), a_2(1), \dots, a_N(1)), (a_1(2), a_2(2), \dots, a_N(2)), \dots\}$, where $\pi$ is a stochastic kernel from the history set $\mathcal{H}$ to $\mathcal{A}$, and $\Pi$ is defined as the set of all such feasible strategies. Based on the process state $s(k)$, the attacker chooses action $a(k) = a(s(k))$, so $\pi = \{a(s(1)), \dots, a(s(k)), \dots\}$. Then, for initial state $s(0) = s \in \mathcal{S}$, the average expected reward $r(k)$ under strategy $\pi \in \Pi$ is
$J(s, \pi) = \liminf_{T \to \infty} \frac{1}{T+1} \, \mathbb{E}\!\left[\sum_{k=0}^{T} r(s(k), a(k))\right],$
and its optimal value J * ( s ) is
$J^*(s) = \max_{\pi \in \Pi} J(s, \pi).$
Define the average value function under policy π Π as the function V: S R . Therefore, we can get the following theorem.
 Theorem 1.
According to the MDP theory, we can obtain the optimal value J * ( s ) by solving the following optimality (Bellman) equation:
$J^*(s) + V(s) = \max_{a(k) \in \mathcal{A}} \left\{ r(s, a) + \sum_{s(k+1) \in \mathcal{S}} \prod_{i=1}^{N} P_i(s_i(k+1) \mid s_i(k), a_i(k)) \, V(s(k+1)) \right\},$
where s = ( s 1 , , s N ) S is the initial state.
The optimal attack strategy of the eavesdropper is
$a^*(s(k)) = \arg\max_{a(k) \in \mathcal{A}} \left\{ r(s(k), a(k)) + \sum_{s(k+1) \in \mathcal{S}} \prod_{i=1}^{N} P_i(s_i(k+1) \mid s_i(k), a_i(k)) \, J^*(s(k+1)) \right\}.$
Proof (Proof of Theorem 1). 
According to Chapter 8 of reference [37], Theorem 1 can be obtained by substituting our state transition probabilities (27) and immediate reward function (28).
From Equation (8.4.2) in [37], we can get the following equation:
$J^*(s) + V(s) = \max_{a(k) \in \mathcal{A}} \left\{ r_{\pi_1} + \sum_{k=2}^{\infty} \prod_{i=1}^{k-1} P_{\pi_i} r_{\pi_k} \right\} = \max_{a(k) \in \mathcal{A}} \{ r_{\pi_1} + P_{\pi_1} r_{\pi_2} + P_{\pi_1} P_{\pi_2} r_{\pi_3} + \cdots \} = \max_{a(k) \in \mathcal{A}} \{ r_{\pi_1} + P_{\pi_1} [ r_{\pi_2} + P_{\pi_2} r_{\pi_3} + \cdots ] \} = \max_{a(k) \in \mathcal{A}} \{ r_{\pi_1} + P_{\pi_1} V(s(k+1)) \},$
where $\pi_k$ is the strategy at time $k$, and $r$ and $P$ are abbreviations for the corresponding rewards and transition probabilities. Historical strategies contain many decision rules, so $r_{\pi_1}$ and $P_{\pi_1}$ can be decomposed as follows:
$r_{\pi_1} = r(s, a), \qquad P_{\pi_1} = \sum_{s(k+1) \in \mathcal{S}} \prod_{i=1}^{N} P_i(s_i(k+1) \mid s_i(k), a_i(k)).$
Therefore, we can get the following:
$J^*(s) + V(s) = \max_{a(k) \in \mathcal{A}} \{ r_{\pi_1} + P_{\pi_1} V(s(k+1)) \} = \max_{a(k) \in \mathcal{A}} \left\{ r(s, a) + \sum_{s(k+1) \in \mathcal{S}} \prod_{i=1}^{N} P_i(s_i(k+1) \mid s_i(k), a_i(k)) \, V(s(k+1)) \right\}.$
Then, we rewrite the finite-horizon optimality Equation (4.5.1) in [37] as
$a^*(s(k)) = \arg\max_{a(k) \in \mathcal{A}} \left\{ r(s(k), a(k)) + \sum_{s(k+1) \in \mathcal{S}} \prod_{i=1}^{N} P_i(s_i(k+1) \mid s_i(k), a_i(k)) \, J^*(s(k+1)) \right\}.$
Thus, we can get the optimality (Bellman) Equation (31) and the optimal attack strategies of the eavesdropper (32). So, Theorem 1 is proved.    □
 Remark 1.
It should be noted that for a finite-horizon MDP, the action $a(k)$ taken at time $k$ is non-stationary and depends on the current state at time $k$.
 Remark 2.
The optimal attack energy allocation strategy for (29) can be obtained from the optimality (Bellman) Equation (31); moreover, the optimal strategy is stationary and deterministic, which helps us identify the structural characteristics of the optimal allocation strategy.

3.2. Policy Iteration Algorithm

The MDP proposed in this paper has an infinite state space. However, from the structure of the state transitions in the system model, we find that when the eavesdropper's attack energy is limited, the transition rule effectively confines the system state within a finite range in finite time. Therefore, although the proposed MDP has an infinite state space, we can treat it as an MDP over a finite time horizon, which is convenient for designing an algorithm for the optimal attack strategy.
Over a finite time horizon, the solution of the optimality equation is the optimal value function from decision time $k$ to the terminal decision time $T$. Based on the MDP constructed above, we provide a specific backward induction algorithm to solve it and output the optimal attack strategy, i.e., Algorithm 1.
In Algorithm 1, we first calculate $\tilde{F}_i$, the packet arrival rates $\lambda_i^e(k)$, $\lambda_i^a(k)$ and the holding times $\tau_i^e(k)$, $\tau_i^a(k)$ in Step 1, and the state transition matrix $P_i$ in Step 2. Then, in Step 3, we set $k = 0$ and, for all $s(k) \in \mathcal{S}$, compute $J^*(s) + V(s)$ by (31). Next, in Step 4, we set $k \leftarrow k - 1$ and initialize $s(k) = s(k+T)$. In Step 5, for $s(k) \in \mathcal{S}$, we compute $J^*(s)$ by (31) and $a^*(s(k))$ by (32). If the best action $1$ is found for state $s(k)$, then in Step 6, for all $s(k+1) \in \mathcal{S}$, we apply (34), let $s(k) \leftarrow s(k+T)$ and go to Step 5; otherwise, we let $s(k) \leftarrow s(k+1)$ and go to Step 5. Finally, in Step 7, if $k = 0$, we output $J_T^*(s)$ and $\pi^* = (a^*(0), a^*(1), \dots, a^*(T-1))$; otherwise, we go to Step 4.
Algorithm 1 Backward induction algorithm for optimal allocation strategy
Require: A i , C i , Q i , R i , p i ( k ) , σ e , σ a , a ¯ , T , S , A , s .
Ensure: The optimal value J T * ( s ) ; optimal deterministic Markov policy π *
Step 1: Calculate F ˜ i , the packet rate λ i e ( k ) , λ i a ( k ) and the holding time τ i e ( k ) , τ i a ( k )
Step 2: Calculate state transition matrix P i
Step 3: Set all k = 0 and for all s ( k ) S , compute J * ( s ) + V ( s ) by (31).
Step 4: Set k k 1 , initialize s ( k ) = s ( k + T ) .
Step 5: Let s ( k ) S , compute
$J^*(s) + V(s) = \max_{a(k) \in \mathcal{A}} \left\{ r(s, a) + \sum_{s(k+1) \in \mathcal{S}} \prod_{i=1}^{N} P_i(s_i(k+1) \mid s_i(k), a_i(k)) \, V(s(k+1)) \right\},$
$a^*(s(k)) = \arg\max_{a(k) \in \mathcal{A}} \left\{ r(s(k), a(k)) + \sum_{s(k+1) \in \mathcal{S}} \prod_{i=1}^{N} P_i(s_i(k+1) \mid s_i(k), a_i(k)) \, J^*(s(k+1)) \right\}.$
Step 6: If a * ( s ( k ) ) = 1 , then, for all s ( k + 1 ) S , let
$a^*(s(k+1)) = 1,$
$J^*(s(k+1)) + V(s(k+1)) = r(s(k+1), 1) + \sum_{s(k+2) \in \mathcal{S}} \prod_{i=1}^{N} P_i(s_i(k+2) \mid s_i(k+1), 1) \, V(s(k+2)).$
 let s ( k ) s ( k + T ) , and go to Step 5. Otherwise, let s ( k ) s ( k + 1 ) , go to Step 5.
Step 7: If k = 0 , then output J T * ( s ) and π * = ( a * ( 0 ) , a * ( 1 ) , . . . , a * ( T 1 ) ) . Otherwise, go to Step 4.
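The backward recursion at the heart of Algorithm 1 is standard finite-horizon dynamic programming. The following is a generic sketch (illustrative only, not the paper's implementation; the pruning shortcut of Step 6 and the product-form transition law are omitted for brevity, and `states` stands for a truncated finite state set such as the one used in Section 4):

```python
def backward_induction(states, actions, P, r, T):
    # Finite-horizon backward induction: with terminal value 0,
    #   V_k(s) = max_a [ r(s) + sum_{s'} P(s'|s,a) V_{k+1}(s') ].
    # P(s, a) -> {s': prob}; r(s) -> immediate reward (here the
    # reward depends only on the state, as in the paper).
    V = {s: 0.0 for s in states}
    stages = []
    for _ in range(T):
        V_new, pi = {}, {}
        for s in states:
            best_a, best_v = None, float("-inf")
            for a in actions:
                v = r(s) + sum(p * V[sp] for sp, p in P(s, a).items())
                if v > best_v:
                    best_a, best_v = a, v
            V_new[s], pi[s] = best_v, best_a
        V = V_new
        stages.insert(0, pi)  # decision rule for the earliest stage first
    return V, stages
```

The returned `stages` list is a non-stationary Markov policy $(a^*(0), \dots, a^*(T-1))$, matching the output format of Algorithm 1.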
 Remark 3.
In Algorithm 1 above, the shortcut applied when $a^*(s(k)) = 1$ is assumed in order to reduce the computational cost and complexity compared with the traditional algorithm.
 Remark 4.
We can derive the eavesdropper's state estimate at each time step, which ensures the feasibility of Algorithm 1.

4. Discussion and Illustrative Example

In this section, based on the MDP model above, we provide numerical simulations to illustrate the optimal attack energy allocation strategy of Problem 1. Consider systems (1) and (2) with $\beta = 0.5$. The parameters of the systems and channels are shown in Table 1.
Suppose the energy constraint is $\bar{a} = 10$. In Table 1, $a_1(1) = 0$, $a_1(2) = 5$, $a_1(3) = 7$, $a_1(4) = 10$ indicate that the possible attack powers for channel 1 are 0, 5, 7 and 10, respectively, where 0 means no attack; $a_2(1) = 0$, $a_2(2) = 3$, $a_2(3) = 5$, $a_2(4) = 10$ are interpreted in the same way. According to the number of systems and the number of channels attacked simultaneously, we divide the numerical simulations into the following three cases. We use Algorithm 1 to calculate the optimal attack energy allocation strategy and the optimal average return.

4.1. Single System

Let us first consider the case of a single system, where we seek the optimal attack energy allocation when attacking its channel at different times. We use the data about Sensor 1 in Table 1. The strategy set for the eavesdropper is $\{(0,0), (10,0), (0,10), (5,5)\}$, where $(0,0)$ means no attack, $(10,0)$ means attacking with energy 10 at the first moment and not attacking at the second moment, and $(0,10)$ and $(5,5)$ are interpreted similarly. Assume the transmission power of the sensor is $p(k) = 0.6$. Through calculation, the steady-state estimation error covariance is $\tilde{F} = 1.9755$. Assume the AWGN power of the channel between the sensor and the estimator is $\sigma_e^2 = 0.3$, and that of the channel between the sensor and the eavesdropper is $\sigma_a^2 = 0.5$. Although this paper studies the infinite time horizon, to simplify the calculation we use the truncated set $\mathbb{N}_0 = \{0, 1, \dots, 16\}$. The optimal strategy is calculated by Algorithm 1.
In Figure 2, we use $\tau^e(k)$ and $\tau^a(k)$ to represent the state under the optimal strategy, where the purple and red symbols represent policy 1 $= (0, 0)$ and policy 2 $= (0, 10)$, respectively.

4.2. Dual System (Not Attacking or Attacking One Channel)

Consider the case of two systems in which at most one of the two channels can be attacked. In the numerical solution, we use the data in Table 1 and Table 2 and the truncated set $h_i^{35}(\tilde{F}_i)$. We assume the eavesdropper's strategy set is $\{(0,0), (5,0), (7,0), (10,0), (0,3), (0,5), (0,10)\}$, i.e., Table 3, where $(0,0)$ indicates no attack on either channel; $(5,0)$, $(7,0)$ and $(10,0)$ mean that channel 1 is attacked with energy 5, 7 and 10, respectively, while channel 2 is not attacked; and $(0,3)$, $(0,5)$ and $(0,10)$ are interpreted analogously. Through our calculations, we get $\tilde{F}_1 = 1.9755$ and $\tilde{F}_2 = 1.6805$. The channel gains $\Phi_i$ and $\Psi_i$ are taken as random numbers.
The optimal attack energy allocation strategy for two systems, in which neither channel or only one channel is attacked, is shown in Figure 3 and Figure 4. In Figure 3, as $s_1(k)$ increases, the optimal strategies cannot be clearly distinguished; therefore, in Figure 4 we use $\tau^e(k)$ and $\tau^a(k)$ to represent the state under the optimal strategies. From Figure 4, we can see that the optimal actions include $(0,10)$ and $(10,0)$.

4.3. Dual System (Not Attacking, Attacking One Channel or Attacking Two Channels)

Similarly, in this section two systems are considered, with three situations: no attack, attacking only one channel and attacking both channels simultaneously. We assume the eavesdropper's strategy set is $\{(0,0), (5,0), (10,0), (0,5), (0,10), (5,5), (7,3)\}$, i.e., Table 4, where $(0,0)$ indicates no attack on either channel; $(5,0)$ and $(10,0)$ mean that channel 1 is attacked with energy 5 and 10, respectively, while channel 2 is not attacked; $(0,5)$ and $(0,10)$ are interpreted analogously; and $(5,5)$ and $(7,3)$ represent attacking channels 1 and 2 simultaneously with the indicated energies. In the numerical solution, we use the data in Table 1 and Table 2 and the truncated set $h_i^{35}(\tilde{F}_i)$. The channel gains $\Phi_i$ and $\Psi_i$ are taken as random numbers. Through our calculations, we get $\tilde{F}_1 = 1.9755$ and $\tilde{F}_2 = 1.6805$.
Figure 5 and Figure 6 show the optimal attack energy allocation strategy for two systems when neither channel, only one channel or both channels are attacked. In Figure 5, as $s_1(k)$ increases, the optimal strategies cannot be clearly distinguished; therefore, in Figure 6 we use $\tau^e(k)$ and $\tau^a(k)$ to represent the state under the optimal strategies. From Figure 6, we can see that the optimal actions include $(0,10)$, $(5,5)$, $(7,3)$ and $(10,0)$.

5. Conclusions

This paper studied the optimal attack energy allocation for multi-system remote state estimation in CPSs with an eavesdropper over an infinite time horizon. A wireless communication model was constructed based on the channel SINR. When the eavesdropper's energy is limited, the optimal value and the optimal attack energy allocation strategy are found using MDP theory, and the research results are numerically simulated. From the theoretical and numerical analysis we draw the following conclusions: in a multi-system, the optimal energy allocation strategy is to attack a channel with higher energy when the estimation error covariance at the eavesdropper is large, and with lower energy when it is small. Moreover, the optimal attack strategy exhibits an obvious threshold structure. In the future, we will prove the threshold structure of the optimal allocation strategy and study the situation where a detector exists.

Author Contributions

Conceptualization, X.C. and L.P.; methodology, X.C.; software, X.C. and L.P.; validation, X.C., S.Z. and L.P.; formal analysis, S.Z.; investigation, X.C.; writing—original draft preparation, X.C.; writing—review and editing, X.C. and L.P.; visualization, L.P.; supervision, S.Z.; project administration, L.P.; funding acquisition, L.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Natural Science Foundation of Shandong Province (No. ZR2023MF023), Natural Science Foundation postdoctoral project of Qingdao of China (No. QDBSH20230102058), Taishan Scholars Project of Shandong Province of China (No. tstp20230624), and the Science and Technology Plan for Youth Innovation of Universities in Shandong Province (No. 2022KJ301).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CPS	Cyber-physical systems
DoS	Denial of service
SINR	Signal-to-noise ratio
MDP	Markov decision process
MMSE	Minimum mean-squared error
AWGN	Additive white Gaussian noise
QAM	Quadrature amplitude modulation
AoI	Age of information

References

  1. Gunes, V.; Peter, S.; Givargis, T.; Vahid, F. A Survey on Concepts, Applications, and Challenges in Cyber-Physical Systems. KSII Trans. Int. Inf. Syst. 2014, 8, 4242–4268. [Google Scholar] [CrossRef]
  2. Peng, L.; Cao, X.; Sun, C. Optimal Transmit Power Allocation for an Energy-Harvesting Sensor in Wireless Cyber-Physical Systems. IEEE Trans. Cybern. 2021, 51, 779–788. [Google Scholar] [CrossRef]
  3. Kim, K.D.; Kumar, P.R. Cyber-Physical Systems: A Perspective at the Centennial. Proc. IEEE 2012, 100, 1287–1308. [Google Scholar] [CrossRef]
  4. Wei, J.; Ye, D. Multisensor Scheduling for Remote State Estimation Over a Temporally Correlated Channel. IEEE Trans. Ind. Inform. 2023, 19, 800–808. [Google Scholar] [CrossRef]
  5. Li, Y.; Quevedo, D.E.; Dey, S.; Shi, L. SINR-Based DoS Attack on Remote State Estimation: A Game-Theoretic Approach. IEEE Trans. Control Netw. Syst. 2017, 4, 632–642. [Google Scholar] [CrossRef]
  6. Zhao, L.; Li, W. Co-Design of Dual Security Control and Communication for Nonlinear CPS Under DoS Attack. IEEE Access 2020, 8, 19271–19285. [Google Scholar] [CrossRef]
  7. Li, T.; Chen, B.; Yu, L.; Zhang, W.A. Active Security Control Approach Against DoS Attacks in Cyber-Physical Systems. IEEE Trans. Autom. Control 2021, 66, 4303–4310. [Google Scholar] [CrossRef]
  8. Wang, L.; Cao, X.; Sun, C. Optimal Offline Privacy Schedule for Remote State Estimation under an Eavesdropper. In Proceedings of the 2019 34th Youth Academic Annual Conference of Chinese Association of Automation (YAC), Jinzhou, China, 6–8 June 2019; pp. 676–682. [Google Scholar] [CrossRef]
  9. Zhu, F.; Ding, S.; Zuo, Y.; Peng, L. Distributed Robust Filtering for Cyber-Physical Systems under Periodic Denial-of-Service Jamming Attacks. Proc. Inst. Mech. Eng. Part I J. Syst. Control Eng. 2022, 236, 1890–1907. [Google Scholar] [CrossRef]
  10. Peng, L.; Shi, L.; Cao, X.; Sun, C. Optimal Attack Energy Allocation against Remote State Estimation. IEEE Trans. Autom. Control 2018, 63, 2199–2205. [Google Scholar] [CrossRef]
  11. Zhang, S.; Peng, L.; Chang, X. Optimal Energy Allocation Based on SINR under DoS Attack. Neurocomputing 2024, 570, 127126. [Google Scholar] [CrossRef]
  12. Mo, Y.; Weerakkody, S.; Sinopoli, B. Physical Authentication of Control Systems: Designing Watermarked Control Inputs to Detect Counterfeit Sensor Outputs. IEEE Control Syst. Mag. 2015, 35, 93–109. [Google Scholar] [CrossRef]
  13. An, D.; Zhang, F.; Yang, Q.; Zhang, C. Data Integrity Attack in Dynamic State Estimation of Smart Grid: Attack Model and Countermeasures. IEEE Trans. Autom. Sci. Eng. 2022, 19, 1631–1644. [Google Scholar] [CrossRef]
  14. Zhang, T.Y.; Ye, D. False Data Injection Attacks with Complete Stealthiness in Cyber-Physical Systems: A Self-Generated Approach. Automatica 2020, 120, 109117. [Google Scholar] [CrossRef]
  15. Xu, Q.; Zhang, Z.; Yan, Y.; Xia, C. Security and Privacy with k-step Opacity for Finite Automata via a Novel Algebraic Approach. Trans. Inst. Meas. Control 2021, 43, 3606–3614. [Google Scholar] [CrossRef]
  16. Yuan, Y.; Zhang, P.; Guo, L.; Yang, H. Towards Quantifying the Impact of Randomly Occurred Attacks on a Class of Networked Control Systems. J. Frankl. Inst. 2021, 354, 4966–4988. [Google Scholar] [CrossRef]
  17. Wang, K.; Yuan, L.; Miyazaki, T.; Chen, Y.; Zhang, Y. Jamming and Eavesdropping Defense in Green Cyber-Physical Transportation Systems Using a Stackelberg Game. IEEE Trans. Ind. Inform. 2018, 4, 4232–4242. [Google Scholar] [CrossRef]
  18. Zhang, P.; Zhou, M.; Fortino, G. Security and Trust Issues in Fog Computing: A Survey. Future Gener. Comput. Syst. 2018, 88, 16–27. [Google Scholar] [CrossRef]
  19. Bayat-Sarmadi, S.; Kermani, M.M.; Azarderakhsh, R.; Lee, C.Y. Dual-Basis Superserial Multipliers for Secure Applications and Lightweight Cryptographic Architectures. IEEE Trans. Circuits Syst. II Express Briefs 2014, 61, 125–129. [Google Scholar] [CrossRef]
  20. Chen, Y.J.; Wang, L.C.; Liao, C.H. Eavesdropping Prevention for Network Coding Encrypted Cloud Storage Systems. IEEE Trans. Parallel Distrib. Syst. 2016, 27, 2261–2273. [Google Scholar] [CrossRef]
  21. Yang, H.; Jiang, G.P. Reference-Modulated DCSK: A Novel Chaotic Communication Scheme. IEEE Trans. Circuits Syst. II Express Briefs 2013, 60, 232–236. [Google Scholar] [CrossRef]
  22. Han, Y.; Duan, L.; Zhang, R. Jamming-Assisted Eavesdropping Over Parallel Fading Channels. IEEE Trans. Inf. Forensics Secur. 2019, 14, 2486–2499. [Google Scholar] [CrossRef]
  23. Yuan, H.; Xia, Y.; Yuan, Y.; Yang, H. Resilient Strategy Design for Cyber-Physical System under Active Eavesdropping Attack. J. Frankl. Inst. 2021, 358, 5281–5304. [Google Scholar] [CrossRef]
  24. Ding, K.; Ren, X.; Leong, A.S.; Quevedo, D.E.; Shi, L. Remote State Estimation in the Presence of an Active Eavesdropper. IEEE Trans. Autom. Control 2021, 66, 229–244. [Google Scholar] [CrossRef]
  25. Tsiamis, A.; Gatsis, K.; Pappas, G.J. State Estimation with Secrecy against Eavesdroppers. IFAC-PapersOnLine 2017, 50, 8385–8392. [Google Scholar] [CrossRef]
  26. Huang, L.; Leong, A.S.; Quevedo, D.E.; Shi, L. Finite Time Encryption Schedule in the Presence of an Eavesdropper with Operation Cost. In Proceedings of the 2019 American Control Conference (ACC), Philadelphia, PA, USA, 10–12 July 2019; pp. 4063–4068. [Google Scholar] [CrossRef]
  27. Wang, L.; Cao, X.; Sun, B.; Zhang, H.; Sun, C. Optimal Schedule of Secure Transmissions for Remote State Estimation Against Eavesdropping. IEEE Trans. Ind. Inform. 2021, 17, 1987–1997. [Google Scholar] [CrossRef]
  28. Leong, A.S.; Quevedo, D.E.; Dolz, D.; Dey, S. Transmission Scheduling for Remote State Estimation Over Packet Dropping Links in the Presence of an Eavesdropper. IEEE Trans. Autom. Control 2021, 64, 3732–3739. [Google Scholar] [CrossRef]
  29. Yuan, F.; Tang, S.; Liu, D. AoI-Based Transmission Scheduling for Cyber-Physical Systems Over Fading Channel Against Eavesdropping. IEEE Internet Things J. 2024, 11, 5455–5466. [Google Scholar] [CrossRef]
  30. Yuan, L.; Wang, K.; Miyazaki, T.; Guo, S.; Wu, M. Optimal transmission strategy for sensors to defend against eavesdropping and jamming attacks. In Proceedings of the 2017 IEEE International Conference on Communications (ICC), Paris, France, 21–25 May 2017; pp. 1–6. [Google Scholar] [CrossRef]
  31. Zhou, J.; Luo, Y.; Liu, Y.; Yang, W. Eavesdropping Strategies for Remote State Estimation Under Communication Constraints. IEEE Trans. Inf. Forensics Secur. 2023, 18, 2250–2261. [Google Scholar] [CrossRef]
  32. Sun, S.L.; Deng, Z.L. Multi-Sensor Optimal Information Fusion Kalman Filter. Automatica 2004, 40, 1017–1023. [Google Scholar] [CrossRef]
  33. Sinopoli, B.; Schenato, L.; Franceschetti, M.; Poolla, K.; Jordan, M.I.; Sastry, S.S. Kalman Filtering with Intermittent Observations. IEEE Trans. Autom. Control 2004, 49, 1453–1464. [Google Scholar] [CrossRef]
  34. Shi, L.; Cheng, P.; Chen, J. Sensor Data Scheduling for Optimal State Estimation with Communication Energy Constraint. Automatica 2011, 47, 1693–1698. [Google Scholar] [CrossRef]
  35. Wu, S.; Ding, K.; Cheng, P.; Shi, L. Optimal Scheduling of Multiple Sensors over Lossy and Bandwidth Limited Channels. IEEE Trans. Control Netw. Syst. 2020, 7, 1188–1200. [Google Scholar] [CrossRef]
  36. Zhang, H.; Qi, Y.; Wu, J.; Fu, L.; He, L. DoS Attack Energy Management Against Remote State Estimation. IEEE Trans. Control Netw. Syst. 2018, 5, 383–394. [Google Scholar] [CrossRef]
  37. Puterman, M.L. Markov Decision Processes: Discrete Stochastic Dynamic Programming; John Wiley and Sons Publishing House: New York, NY, USA, 2005; pp. 354–358. [Google Scholar]
Figure 1. System architecture.
Figure 2. A single system’s optimal energy allocation.
Figure 3. Optimal action of state s = ( s 1 , s 2 ) (not attacking or attacking one channel), where the blue stars and red circles represent the actions ( 0 , 10 ) and ( 10 , 0 ) , respectively.
Figure 4. Optimal action of ( τ e ( k ) , τ a ( k ) ). The meaning of circles and stars is the same as in Figure 3.
Figure 5. Optimal action of state s = ( s 1 , s 2 ) (not attacking, attacking one channel or attacking two channels), where the blue stars, green triangles, black pentagrams and red circles represent the actions ( 0 , 10 ) , ( 5 , 5 ) , ( 7 , 3 ) and ( 10 , 0 ) , respectively.
Figure 6. Optimal action of ( τ e ( k ) , τ a ( k ) ). The meaning of circles, triangles, pentagrams and stars is the same as in Figure 5.
Table 1. Parameters for sensors and attack power.

|  | A i | C i | Q i | R i | a i (1) | a i (2) | a i (3) | a i (4) |
|---|---|---|---|---|---|---|---|---|
| Sensor 1 (Process 1) | 1.8 | 1.5 | 0.8 | 1 | 0 | 5 | 7 | 10 |
| Sensor 2 (Process 2) | 1.2 | 1.0 | 0.9 | 0.8 | 0 | 3 | 5 | 10 |
Table 2. AWGN power and transmission power.

| σ e 2 | σ a 2 | p 1 ( k ) | p 2 ( k ) |
|---|---|---|---|
| 0.1 | 0.2 | 1 | 0.8 |
Table 3. Attack power levels for the dual system (not attacking or attacking one channel).

| Case | Attack Pair (Symbolic) | Attack Pair (Energy) |
|---|---|---|
| Not attack |  | ( 0 , 0 ) |
| To channel 1 | ( a 1 (1) , 0 ), ( a 1 (2) , 0 ), ( a 1 (3) , 0 ) | ( 5 , 0 ), ( 7 , 0 ), ( 10 , 0 ) |
| To channel 2 | ( 0 , a 2 (1) ), ( 0 , a 2 (2) ), ( 0 , a 2 (3) ) | ( 0 , 3 ), ( 0 , 5 ), ( 0 , 10 ) |
Table 4. Attack power levels for the dual system (not attacking, attacking one channel, or attacking two channels).

| Case | Attack Pair (Symbolic) | Attack Pair (Energy) |
|---|---|---|
| Not attack |  | ( 0 , 0 ) |
| To channel 1 | ( a 1 (1) , 0 ), ( a 1 (3) , 0 ) | ( 5 , 0 ), ( 10 , 0 ) |
| To channel 2 | ( 0 , a 2 (2) ), ( 0 , a 2 (3) ) | ( 0 , 5 ), ( 0 , 10 ) |
| Attacking two channels | ( a 1 (1) , a 2 (2) ), ( a 1 (2) , a 2 (1) ) | ( 5 , 5 ), ( 7 , 3 ) |

Chang, X.; Peng, L.; Zhang, S. Allocation of Eavesdropping Attacks for Multi-System Remote State Estimation. Sensors 2024, 24, 850. https://doi.org/10.3390/s24030850
