
Constrained Attack-Resilient Estimation of Stochastic Cyber-Physical Systems

Wenbin Wan wenbinw2@illinois.edu    Hunmin Kim kim_h@mercer.edu    Naira Hovakimyan nhovakim@illinois.edu    Petros Voulgaris pvoulgaris@unr.edu University of Illinois at Urbana-Champaign, USA Mercer University, USA University of Nevada, Reno, USA
Abstract

In this paper, a constrained attack-resilient estimation algorithm (CARE) is developed for stochastic cyber-physical systems. The proposed CARE simultaneously estimates the compromised system states and the attack signals, and improves estimation accuracy and attack detection performance when physical constraints and operational limitations are available. In particular, CARE performs simultaneous input and state estimation to provide minimum-variance unbiased estimates, which are subsequently projected onto the constrained space described by inequality constraints. We prove that the estimation errors and their covariances from CARE are smaller than those from unconstrained algorithms and show that this property further reduces the false negative rate in attack detection. We also show that the estimation errors of CARE are practically exponentially stable in mean square. Finally, an illustrative example of attacks on a vehicle demonstrates the improved estimation accuracy and detection performance compared to an existing unconstrained algorithm.

keywords:
Detection, Kalman filtering, Recursive estimation, Stability, State estimation

1 Introduction

Cyber-Physical Systems (CPS) play a vital role in applications ranging from large-scale industrial systems to critical infrastructures, such as smart grids, transportation networks, precision agriculture, and industrial control systems rajkumar2010cyber . Recent developments in CPS and their safety-critical applications have led to renewed interest in CPS security. The interaction between information technology and the physical system has made the control components of CPS vulnerable to malicious attacks cardenas2008research . Recent cases of CPS attacks have clearly illustrated this susceptibility and raised awareness of the security challenges in these systems. These include attacks on large-scale critical infrastructures, such as the German steel mill cyber attack lee2014german and the Maroochy Water breach slay2007lessons . Similarly, malicious attacks on avionics and automotive vehicles have been reported, such as the U.S. drone RQ-170 captured in Iran peterson2011iran and the disabling of brakes and stopping of the engine on civilian cars koscher2010experimental ; checkoway2011comprehensive .

Related work

Traditionally, most works in the field of attack detection focused only on monitoring misbehavior in the cyber space raiyn2014survey . With the emergence of CPS, it has become vitally important to monitor physical misbehavior as well, because the impact of attacks on the physical system must also be addressed cardenas2008secure . In the last decade, attention has shifted to control-theoretic approaches that exploit prior information about the system dynamics for detection and attack-resilient control. For instance, a unified modeling framework for CPS and attacks is proposed in pasqualetti2013attack . A typical control architecture for networked systems under both cyber and physical attacks is proposed in teixeira2012attack ; attack scenarios, such as Denial-of-Service (DoS) and false-data injection (FDI), are then analyzed using this architecture in teixeira2015secure .

In recent years, model-based detection has been studied extensively. Attack detection has been formulated as an 0\ell_{0}/\ell_{\infty} optimization problem, which is NP-hard pajic2014robustness . A convex relaxation has been studied in fawzi2014secure . Furthermore, the worst-case estimation error has been analyzed in pajic2017attack . Multirate sampled-data controllers have been studied to guarantee detectability in NAGHNAEIAN201912 and to detect zero-dynamics attacks in 8796181 . A residual-based detector has been designed for power systems against false-data injection attacks, and the impact of attacks has been analyzed in liu2011false .

In addition, some papers have studied active detection, such as mo2009secure ; mo2014detecting , where the control input is watermarked with a pre-designed scheme at the cost of optimality. The aforementioned methods share the problem that the state estimate is not resilient to the attack signal; consequently, incorrect state estimates make it more challenging for defenders to react to malicious attacks.

Attack-resilient estimation and detection problems have been studied to address the above challenge in yong2015resilients ; forti2016bayesian ; kim2020simultaneous , where attack detection has been formulated as a simultaneous input and state estimation problem and the minimum-variance unbiased estimation technique has been applied. More specifically, the approach has been applied to linear stochastic systems in yong2015resilients , extended via stochastic random set methods in forti2016bayesian , and applied to nonlinear systems in kim2020simultaneous . These detection algorithms rely on statistical thresholds, such as the χ2\chi^{2} test, which is widely used in attack detection mo2014detecting ; teixeira2010cyber . Since detection accuracy improves when the covariance decreases, a smaller covariance is desired.

On top of the minimum-variance estimation approach, the covariance can be further reduced when we incorporate information about the input and state in the form of constraints. There have been several investigations of Kalman filtering with state constraints simon2002kalman ; ko2007state ; simon2010kalman ; kong2021kalman . State constraints are induced by unmodeled dynamics and operational processes; examples include vision-aided inertial navigation mourikis2007multi , target tracking wang2002filtering , and power systems yong2015simultaneous . Constraints on inputs have also been considered, such as avoiding reachable dangerous states under the assumption that the attack input is constrained kafash2018constraining , and designing a resilient controller based on partial knowledge of the attacker expressed as inequality constraints djouadi2015finite . The methods in kafash2018constraining ; djouadi2015finite can efficiently mitigate a class of attacks when input inequality constraints are available, but they cannot resiliently address the estimation problem under false-data injection. This problem remains to be solved with a stability guarantee in the presence of inequality constraints. In the current paper, we aim to solve the resilient estimation problem and investigate the stability and performance of an algorithm design that integrates information aggregation. To the best of our knowledge, this is the first investigation that considers both state and input inequality constraints for attack-resilient estimation with guaranteed stability.

Contributions

The main contributions of this work can be summarized as follows. i) We propose a constrained attack-resilient estimation algorithm (CARE) that estimates the compromised system states and the attack signals simultaneously. CARE first provides minimum-variance unbiased estimates and then projects them onto the constrained space induced by information aggregation. ii) The proposed CARE improves estimation performance: we show that the projection strictly reduces the estimation errors and covariances. iii) We are the first to investigate the stability of an estimation algorithm with inequality constraints, and we prove that the estimation errors are practically exponentially stable in mean square. iv) The proposed CARE improves attack detection performance: we provide a rigorous analysis showing that the false negative rate is reduced by the proposed algorithm. v) The proposed algorithm is compared with a state-of-the-art method to demonstrate the improved estimation and attack detection performance.

Paper organization

The rest of the paper is organized as follows. Section 2 introduces the notation, the χ2\chi^{2} test for detection, and the problem formulation. The high-level idea of the proposed algorithm is presented in Section 3.1, and Section 3.2 gives a detailed algorithm derivation. Section 4 demonstrates the performance improvement and presents the stability analysis of the proposed algorithm. Section 5 presents an illustrative example of vehicle attacks. Finally, Section 6 draws conclusions.

2 Preliminaries

Notations

We use the subscript kk to denote the time index. For a real set {\mathbb{R}}, +n{\mathbb{R}}^{n}_{+} denotes the set of positive elements in the nn-dimensional Euclidean space and n×m{\mathbb{R}}^{n\times m} denotes the set of all n×mn\times m real matrices. For a matrix 𝑨{\bm{A}}, 𝑨{\bm{A}}^{\top}, 𝑨1{\bm{A}}^{-1}, 𝑨{\bm{A}}^{\dagger}, diag(𝑨)\operatorname*{\textit{diag}}({\bm{A}}), tr(𝑨)\operatorname*{tr}({\bm{A}}) and rank(𝑨)\operatorname*{rank}({\bm{A}}) denote the transpose, inverse, Moore-Penrose pseudoinverse, diagonal, trace and rank of 𝑨{\bm{A}}, respectively. For a symmetric matrix 𝑺{\bm{S}}, 𝑺>0{\bm{S}}>0 (𝑺0{\bm{S}}\geq 0) indicates that 𝑺{\bm{S}} is positive (semi)definite. The matrix 𝑰{\bm{I}} denotes the identity matrix with an appropriate dimension. We use \|\cdot\| to denote the standard Euclidean norm for vectors or the induced norm for matrices unless otherwise specified, 𝔼[]{\mathbb{E}}[\,\cdot\,] to denote the expectation operator, and ×\times to denote matrix multiplication when the multiplied terms are on different lines. For a vector 𝒂{\bm{a}}, 𝒂(i){\bm{a}}(i) denotes the ithi^{th} element in the vector 𝒂{\bm{a}}. Finally, vectors 𝒂{\bm{a}}, 𝒂^\hat{{\bm{a}}}, 𝒂~𝒂𝒂^\tilde{{\bm{a}}}\triangleq{\bm{a}}-\hat{{\bm{a}}} denote the ground truth, estimate and estimation error of 𝒂{\bm{a}}, respectively.

χ2\chi^{2} Test for Detection

Given a sample of a Gaussian random variable 𝝈^k\hat{\bm{\sigma}}_{k} with unknown mean 𝝈k\bm{\sigma}_{k} and known covariance 𝚺k\bm{\Sigma}_{k}, the χ2\chi^{2} test provides statistical evidence of whether 𝝈k=0\bm{\sigma}_{k}=0 or not. In particular, the sample 𝝈^k\hat{\bm{\sigma}}_{k} is normalized as 𝝈^k𝚺k1𝝈^k\hat{\bm{\sigma}}_{k}^{\top}\bm{\Sigma}_{k}^{-1}\hat{\bm{\sigma}}_{k}, and the normalized value is compared with χdf2(α)\chi_{df}^{2}(\alpha), where χdf2(α)\chi_{df}^{2}(\alpha) is the χ2\chi^{2} value with dfdf degrees of freedom and statistical significance level α\alpha. We reject the null hypothesis H0H_{0}: 𝝈k=0\bm{\sigma}_{k}=0 if 𝝈^k𝚺k1𝝈^k>χdf2(α)\hat{\bm{\sigma}}_{k}^{\top}\bm{\Sigma}_{k}^{-1}\hat{\bm{\sigma}}_{k}>\chi^{2}_{df}(\alpha) and accept the alternative hypothesis H1H_{1}: 𝝈k0\bm{\sigma}_{k}\neq 0, i.e., there is significant statistical evidence that 𝝈k\bm{\sigma}_{k} is non-zero. Otherwise, we accept H0H_{0}, i.e., there is no significant evidence that 𝝈k\bm{\sigma}_{k} is non-zero.

False negative rate

Given a set of vectors {𝝈k}\{\bm{\sigma}_{k}\}, the false negative rate of the χ2\chi^{2} test is defined as the ratio of the number of false negative test results NnegN_{neg} to the number of non-zero vectors N𝝈k0N_{\bm{\sigma}_{k}\neq 0} in the given set:

Fneg({𝝈^k},{𝚺k})\displaystyle F_{neg}(\{\hat{\bm{\sigma}}_{k}\},\{\bm{\Sigma}_{k}\}) NnegN𝝈k0=k(𝟏k)N𝝈k0,\displaystyle\triangleq\frac{N_{neg}}{N_{\bm{\sigma}_{k}\neq 0}}=\frac{\sum_{k}(\bm{1}_{k})}{N_{\bm{\sigma}_{k}\neq 0}}, (1)

where

𝟏k{1,if 𝝈^k𝚺k1𝝈^kχdf2(α) and 𝝈k00,otherwise.\displaystyle\bm{1}_{k}\triangleq\begin{cases}1,&\textit{if }\hat{\bm{\sigma}}_{k}^{\top}\bm{\Sigma}_{k}^{-1}\hat{\bm{\sigma}}_{k}\leq\chi^{2}_{df}(\alpha)\textit{ and }\bm{\sigma}_{k}\neq 0\\ 0,&\textit{otherwise}\end{cases}. (2)
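As a minimal numerical sketch of the test above and the rate in (1)-(2) (the function names and the tabulated threshold are our own illustrative choices, not code from the paper):

```python
import numpy as np

CHI2_DF1_A005 = 3.841  # tabulated chi^2 critical value, df = 1, alpha = 0.05

def chi2_detect(sigma_hat, Sigma, threshold=CHI2_DF1_A005):
    """Reject H0: sigma = 0 when the normalized statistic exceeds the threshold."""
    stat = sigma_hat @ np.linalg.inv(Sigma) @ sigma_hat
    return stat > threshold          # True -> significant evidence of an attack

def false_negative_rate(sigma_hats, Sigmas, sigmas_true,
                        threshold=CHI2_DF1_A005):
    """Empirical false negative rate (1)-(2): missed detections / non-zero truths."""
    misses = nonzero = 0
    for s_hat, Sig, s in zip(sigma_hats, Sigmas, sigmas_true):
        if np.any(s != 0):
            nonzero += 1
            if not chi2_detect(s_hat, Sig, threshold):
                misses += 1          # the indicator 1_k in (2)
    return misses / nonzero if nonzero else 0.0
```

A smaller covariance 𝚺k\bm{\Sigma}_{k} inflates the normalized statistic for the same sample, which is why the covariance reduction in Section 4 lowers this rate.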

Problem Formulation

Consider the following linear time-varying (LTV) discrete-time stochastic system (the current paper considers a general formulation for the attack input matrix 𝑮k{\bm{G}}_{k}: if 𝒅k{\bm{d}}_{k} is injected into the control input, then 𝑮k=𝑩k{\bm{G}}_{k}={\bm{B}}_{k}; if 𝒅k{\bm{d}}_{k} is injected directly into the system, then 𝑮k=𝑰{\bm{G}}_{k}={\bm{I}})

𝒙k+1\displaystyle{\bm{x}}_{k+1} =𝑨k𝒙k+𝑩k𝒖k+𝑮k𝒅k+𝒘k\displaystyle={\bm{A}}_{k}{\bm{x}}_{k}+{\bm{B}}_{k}{\bm{u}}_{k}+{\bm{G}}_{k}{\bm{d}}_{k}+{\bm{w}}_{k} (3a)
𝒚k\displaystyle{\bm{y}}_{k} =𝑪k𝒙k+𝒗k,\displaystyle={\bm{C}}_{k}{\bm{x}}_{k}+{\bm{v}}_{k}, (3b)

where 𝒙km{\bm{x}}_{k}\in\mathbb{R}^{m}, 𝒖kn{\bm{u}}_{k}\in\mathbb{R}^{n} and 𝒚kny{\bm{y}}_{k}\in\mathbb{R}^{n_{y}} are the state, the control input and the sensor measurement, respectively. The attack signal is modeled as an additive input 𝒅knd{\bm{d}}_{k}\in\mathbb{R}^{n_{d}}, which is unknown to the defender. The system matrices 𝑨k{\bm{A}}_{k}, 𝑩k{\bm{B}}_{k}, 𝑪k{\bm{C}}_{k} and 𝑮k{\bm{G}}_{k} are known and bounded with appropriate dimensions. We assume that rank(𝑪k𝑮k1)=nd\operatorname*{rank}({\bm{C}}_{k}{\bm{G}}_{k-1})=n_{d}, 0ndny0\leq n_{d}\leq n_{y}. This is a typical assumption, as in yong2016unified ; kim2017attack . The interpretation of this assumption is that the impact of the attack 𝒅k1{\bm{d}}_{k-1} on the system dynamics can be observed through 𝒚k{\bm{y}}_{k}. The process noise 𝒘k{\bm{w}}_{k} and the measurement noise 𝒗k{\bm{v}}_{k} are assumed to be i.i.d. Gaussian random variables with zero means and covariances 𝑸k𝔼[𝒘k𝒘k]0{\bm{Q}}_{k}\triangleq\mathbb{E}[{\bm{w}}_{k}{\bm{w}}_{k}^{\top}]\geq 0 and 𝑹k𝔼[𝒗k𝒗k]>0{\bm{R}}_{k}\triangleq\mathbb{E}[{\bm{v}}_{k}{\bm{v}}_{k}^{\top}]>0, respectively. Moreover, the measurement noise 𝒗k{\bm{v}}_{k}, the process noise 𝒘k{\bm{w}}_{k}, and the initial state 𝒙0{\bm{x}}_{0} are mutually uncorrelated.

The adopted attack model in (3) is known as the FDI attack, which is a very general attack type that includes physical attacks, Trojans, replay attacks, overflow bugs, packet injection, etc. guo2018roboads . Because of this generality, this attack model has been widely used in the CPS security literature (e.g., pasqualetti2013attack ; teixeira2015secure ; yong2015resilients ).

In the cyber space, digital attack signals may be unconstrained, but their impact on the physical world is restricted by physical and operational constraints (i.e., 𝒙k{\bm{x}}_{k} and 𝒅k{\bm{d}}_{k} are constrained). For example, a vehicle has limits on acceleration, velocity, steering angle, and rate of change of the steering angle. Such physical constraints and capability limitations on the attack signals and states are represented by the inequality constraints

𝒜k𝒅k𝒃k,k𝒙k𝒄k,\displaystyle\mathcal{A}_{k}{\bm{d}}_{k}\leq{\bm{b}}_{k},\ \mathcal{B}_{k}{\bm{x}}_{k}\leq{\bm{c}}_{k}, (4)

where matrices 𝒜k\mathcal{A}_{k}, k\mathcal{B}_{k}, and vectors 𝒃k{\bm{b}}_{k}, 𝒄k{\bm{c}}_{k} are known and bounded with appropriate dimensions. Throughout this paper, we assume that the feasible sets of the constraints in (4) are non-empty.
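As a concrete instance of the model (3)-(4), the sketch below simulates a discretized double integrator under a saturated FDI attack injected through the control input (so that 𝑮k=𝑩k{\bm{G}}_{k}={\bm{B}}_{k}); all numerical values are our own illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative discretized double integrator (dt = 0.1) as an instance of (3);
# the numbers here are assumptions for demonstration only.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
C = np.eye(2)
G = B.copy()                 # attack enters through the control input (G_k = B_k)
Q = 1e-4 * np.eye(2)
R = 1e-3 * np.eye(2)

d_max = 2.0                  # operational limit (4): |d_k| <= d_max
x = np.zeros(2)
traj = []
for k in range(50):
    u = np.array([0.5])                               # nominal control
    d = np.clip(np.array([3.0]), -d_max, d_max)       # attacker saturated by (4)
    w = rng.multivariate_normal(np.zeros(2), Q)       # process noise w_k
    v = rng.multivariate_normal(np.zeros(2), R)       # measurement noise v_k
    x = A @ x + B @ u + G @ d + w                     # dynamics (3a)
    y = C @ x + v                                     # measurement (3b)
    traj.append(y)
```

Even though the injected signal is unbounded in the cyber space (here, 3.0), its physical effect is saturated to the operational limit before entering (3a).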

Remark 1

Gaussian noise in (3) is one of the standard ways to model physical systems, so that filtering algorithms can use this model to track the level of uncertainty. Therefore, many works consider Gaussian noise even in the presence of bounded constraints teixeira2009state ; simon2002kalman ; simon2010kalman .

Problem statement

Given the stochastic system in (3), we aim to design an attack-resilient estimation algorithm that can simultaneously estimate the compromised system state 𝒙k{\bm{x}}_{k} and the attack signal 𝒅k{\bm{d}}_{k}. In addition, we seek to improve estimation accuracy and detection performance with a stability guarantee when incorporating the information of the input and state in terms of constraints in (4).

Figure 1: Constrained Attack-Resilient Estimation (CARE).

3 Algorithm Design

To address the problem stated in Section 2, we propose a constrained attack-resilient estimation algorithm (CARE), as sketched in Fig. 1, which consists of a minimum-variance unbiased estimator (MVUE) and an information aggregation step via projection. In particular, the optimal estimator provides minimum-variance unbiased estimates, and these estimates are subsequently projected onto the constrained space in the information aggregation step. We outline the essential steps of CARE in Section 3.1 and provide a detailed derivation of the algorithm in Section 3.2.

3.1 Algorithm Statement

The proposed CARE can be summarized as follows:

prediction: 𝒙^k=𝑨k1𝒙^k1+𝑩k1𝒖k1;\displaystyle\textbf{prediction: }\hat{{\bm{x}}}_{k}^{-}={\bm{A}}_{k-1}\hat{{\bm{x}}}_{k-1}+{\bm{B}}_{k-1}{\bm{u}}_{k-1}; (5)
attack estimation: 𝒅^k1u=𝑴k(𝒚k𝑪k𝒙^k);\displaystyle\textbf{attack estimation: }\hat{{\bm{d}}}_{k-1}^{u}={\bm{M}}_{k}({\bm{y}}_{k}-{\bm{C}}_{k}\hat{{\bm{x}}}^{-}_{k}); (6)
time update: 𝒙^k=𝒙^k+𝑮k1𝒅^k1u;\displaystyle\textbf{time update: }\hat{{\bm{x}}}^{\star}_{k}=\hat{{\bm{x}}}^{-}_{k}+{\bm{G}}_{k-1}\hat{{\bm{d}}}^{u}_{k-1}; (7)
measurement update: 𝒙^ku=𝒙^k+𝑳k(𝒚k𝑪k𝒙^k);\displaystyle\textbf{measurement update: }\hat{{\bm{x}}}^{u}_{k}=\hat{{\bm{x}}}^{\star}_{k}+{\bm{L}}_{k}({\bm{y}}_{k}-{\bm{C}}_{k}\hat{{\bm{x}}}^{\star}_{k}); (8)
projection update:
𝒅^k1=\displaystyle\hat{{\bm{d}}}_{k-1}= argmin𝒅(𝒅𝒅^k1u)(𝑷k1d,u)1(𝒅𝒅^k1u)\displaystyle\operatorname*{arg\,min}_{\bm{d}}({\bm{d}}-\hat{{\bm{d}}}_{k-1}^{u})^{\top}({\bm{P}}_{k-1}^{d,u})^{-1}({\bm{d}}-\hat{{\bm{d}}}_{k-1}^{u})
subjectto𝒜k1𝒅𝒃k1;\displaystyle{\rm subject\ to\ }\mathcal{A}_{k-1}{\bm{d}}\leq{\bm{b}}_{k-1}; (9)
𝒙^k=\displaystyle\hat{{\bm{x}}}_{k}= argmin𝒙(𝒙𝒙^ku)(𝑷kx,u)1(𝒙𝒙^ku)\displaystyle\operatorname*{arg\,min}_{\bm{x}}({\bm{x}}-\hat{{\bm{x}}}^{u}_{k})^{\top}({\bm{P}}_{k}^{x,u})^{-1}({\bm{x}}-\hat{{\bm{x}}}^{u}_{k})
subjecttok𝒙𝒄k.\displaystyle{\rm subject\ to\ }\mathcal{B}_{k}{\bm{x}}\leq{\bm{c}}_{k}. (10)

Given the previous state estimate 𝒙^k1\hat{{\bm{x}}}_{k-1} and its error covariance 𝑷k1x𝔼[𝒙~k1(𝒙~k1)]{\bm{P}}_{k-1}^{x}\triangleq\mathbb{E}[\tilde{{\bm{x}}}_{k-1}(\tilde{{\bm{x}}}_{k-1})^{\top}], the current state can be predicted by 𝒙^k\hat{{\bm{x}}}_{k}^{-} in (5) under the assumption that the attack signal 𝒅k1{\bm{d}}_{k-1} is absent. The unconstrained attack estimate 𝒅^k1u\hat{{\bm{d}}}_{k-1}^{u} can be obtained from the difference between the predicted output 𝑪k𝒙^k{\bm{C}}_{k}\hat{{\bm{x}}}_{k}^{-} and the measured output 𝒚k{\bm{y}}_{k} in (6), where 𝑴k{\bm{M}}_{k} is the optimal filter gain obtained by applying the Gauss-Markov theorem, as shown later in Proposition 3. The state prediction 𝒙^k\hat{{\bm{x}}}_{k}^{-} can be updated by incorporating the unconstrained attack estimate 𝒅^k1u\hat{{\bm{d}}}_{k-1}^{u} in (7). The output 𝒚k{\bm{y}}_{k} is used to correct the current state estimate in (8), where 𝑳k{\bm{L}}_{k} is the filter gain obtained by minimizing the state error covariance 𝑷kx,u{\bm{P}}_{k}^{x,u}. In the information aggregation step (projection update), we apply the input constraint in (9) by projecting 𝒅^k1u\hat{{\bm{d}}}_{k-1}^{u} onto the constrained space and obtain the constrained attack estimate 𝒅^k1\hat{{\bm{d}}}_{k-1}. Similarly, the state constraint in (10) is applied to obtain the constrained state estimate 𝒙^k\hat{{\bm{x}}}_{k}. The complete algorithm is presented in Algorithm 1.

Algorithm 1 Constrained Attack-Resilient Estimation: [𝒅^k1[\hat{{\bm{d}}}_{k-1}, 𝑷k1d{\bm{P}}^{d}_{k-1}, 𝒙^k\hat{{\bm{x}}}_{k}, 𝑷kx]={\bm{P}}^{x}_{k}]= CARE(𝒙^k1,𝑷k1x)(\hat{{\bm{x}}}_{k-1},{\bm{P}}^{x}_{k-1})
1:\triangleright Prediction
2:𝒙^k=𝑨k1𝒙^k1+𝑩k1𝒖k1\hat{{\bm{x}}}_{k}^{-}={\bm{A}}_{k-1}\hat{{\bm{x}}}_{k-1}+{\bm{B}}_{k-1}{\bm{u}}_{k-1};
3:𝑷kx,=𝑨k1𝑷k1x𝑨k1+𝑸k1{\bm{P}}_{k}^{x,-}={\bm{A}}_{k-1}{\bm{P}}^{x}_{k-1}{\bm{A}}_{k-1}^{\top}+{\bm{Q}}_{k-1};
4:\triangleright Attack estimation
5:𝑹~k=(𝑪k𝑷kx,𝑪k+𝑹k)1\tilde{{\bm{R}}}_{k}=({\bm{C}}_{k}{\bm{P}}_{k}^{x,-}{\bm{C}}_{k}^{\top}+{\bm{R}}_{k})^{-1};
6:𝑴k=(𝑮k1𝑪k𝑹~k𝑪k𝑮k1)1𝑮k1𝑪k𝑹~k{\bm{M}}_{k}=({\bm{G}}_{k-1}^{\top}{\bm{C}}_{k}^{\top}\tilde{{\bm{R}}}_{k}{\bm{C}}_{k}{\bm{G}}_{k-1})^{-1}{\bm{G}}_{k-1}^{\top}{\bm{C}}_{k}^{\top}\tilde{{\bm{R}}}_{k};
7:𝒅^k1u=𝑴k(𝒚k𝑪k𝒙^k)\hat{{\bm{d}}}_{k-1}^{u}={\bm{M}}_{k}({\bm{y}}_{k}-{\bm{C}}_{k}\hat{{\bm{x}}}_{k}^{-});
8:𝑷k1d,u=(𝑮k1𝑪k𝑹~k𝑪k𝑮k1)1{\bm{P}}^{d,u}_{k-1}=({\bm{G}}_{k-1}^{\top}{\bm{C}}_{k}^{\top}\tilde{{\bm{R}}}_{k}{\bm{C}}_{k}{\bm{G}}_{k-1})^{-1};
9:𝑷k1xd=𝑷k1x𝑨k1𝑪k𝑴k{\bm{P}}_{k-1}^{xd}=-{\bm{P}}^{x}_{k-1}{\bm{A}}_{k-1}^{\top}{\bm{C}}_{k}^{\top}{\bm{M}}_{k}^{\top};
10:\triangleright Time update
11:𝒙^k=𝒙^k+𝑮k1𝒅^k1u\hat{{\bm{x}}}^{\star}_{k}=\hat{{\bm{x}}}_{k}^{-}+{\bm{G}}_{k-1}\hat{{\bm{d}}}_{k-1}^{u};
12:𝑷kx=𝑨k1𝑷k1x𝑨k1+𝑨k1𝑷k1xd𝑮k1{\bm{P}}^{x\star}_{k}={\bm{A}}_{k-1}{\bm{P}}_{k-1}^{x}{\bm{A}}_{k-1}^{\top}+{\bm{A}}_{k-1}{\bm{P}}_{k-1}^{xd}{\bm{G}}_{k-1}^{\top}
13:               +𝑮k1(𝑷k1xd)𝑨k1+𝑮k1𝑷k1d,u𝑮k1+{\bm{G}}_{k-1}({\bm{P}}_{k-1}^{xd})^{\top}{\bm{A}}_{k-1}^{\top}+{\bm{G}}_{k-1}{\bm{P}}_{k-1}^{d,u}{\bm{G}}_{k-1}^{\top}
14:               𝑮k1𝑴k𝑪k𝑸k1𝑸k1𝑪k𝑴k𝑮k1-{\bm{G}}_{k-1}{\bm{M}}_{k}{\bm{C}}_{k}{\bm{Q}}_{k-1}-{\bm{Q}}_{k-1}{\bm{C}}_{k}^{\top}{\bm{M}}_{k}^{\top}{\bm{G}}_{k-1}^{\top}
15:               +𝑸k1+{\bm{Q}}_{k-1};
16:𝑹~k=𝑪k𝑷kx𝑪k𝑪k𝑮k1𝑴k𝑹k𝑹k𝑴k𝑮k1𝑪k\tilde{{\bm{R}}}^{\star}_{k}={\bm{C}}_{k}{\bm{P}}^{x\star}_{k}{\bm{C}}_{k}^{\top}-{\bm{C}}_{k}{\bm{G}}_{k-1}{\bm{M}}_{k}{\bm{R}}_{k}-{\bm{R}}_{k}{\bm{M}}_{k}^{\top}{\bm{G}}_{k-1}^{\top}{\bm{C}}_{k}^{\top}
17:               +𝑹k+{\bm{R}}_{k};
18:\triangleright Measurement update
19:𝑳k=(𝑷kx𝑪k𝑮k1𝑴k𝑹k)𝑹~k{\bm{L}}_{k}=({\bm{P}}^{x\star}_{k}{\bm{C}}_{k}^{\top}-{\bm{G}}_{k-1}{\bm{M}}_{k}{\bm{R}}_{k})\tilde{{\bm{R}}}^{\star\dagger}_{k};
20:𝒙^ku=𝒙^k+𝑳k(𝒚k𝑪k𝒙^k)\hat{{\bm{x}}}^{u}_{k}=\hat{{\bm{x}}}^{\star}_{k}+{\bm{L}}_{k}({\bm{y}}_{k}-{\bm{C}}_{k}\hat{{\bm{x}}}^{\star}_{k});
21:𝑷kx,u=(𝑰𝑳k𝑪k)𝑮k1𝑴k𝑹k𝑳k{\bm{P}}^{x,u}_{k}=({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k}){\bm{G}}_{k-1}{\bm{M}}_{k}{\bm{R}}_{k}{\bm{L}}_{k}^{\top}
22:               +𝑳k𝑹k𝑴k𝑮k1(𝑰𝑳k𝑪k)+{\bm{L}}_{k}{\bm{R}}_{k}{\bm{M}}_{k}^{\top}{\bm{G}}_{k-1}^{\top}({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})^{\top}
23:               +(𝑰𝑳k𝑪k)𝑷kx(𝑰𝑳k𝑪k)+𝑳k𝑹k𝑳k+({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k}){\bm{P}}^{x\star}_{k}({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})^{\top}+{\bm{L}}_{k}{\bm{R}}_{k}{\bm{L}}_{k}^{\top};
24:\triangleright Projection update
25:𝜸k1d=𝑷k1d,u𝒜¯k1(𝒜¯k1𝑷k1d,u𝒜¯k1)1\bm{\gamma}_{k-1}^{d}={\bm{P}}_{k-1}^{d,u}\bar{\mathcal{A}}_{k-1}^{\top}(\bar{\mathcal{A}}_{k-1}{\bm{P}}_{k-1}^{d,u}\bar{\mathcal{A}}_{k-1}^{\top})^{-1};
26:𝒅^k1=𝒅^k1u𝜸k1d(𝒜¯k1𝒅^k1u𝒃¯k1)\hat{{\bm{d}}}_{k-1}=\hat{{\bm{d}}}_{k-1}^{u}-\bm{\gamma}_{k-1}^{d}(\bar{\mathcal{A}}_{k-1}\hat{{\bm{d}}}_{k-1}^{u}-\bar{{\bm{b}}}_{k-1});
27:𝑷k1d=(𝑰𝜸k1d𝒜¯k1)𝑷k1d,u(𝑰𝜸k1d𝒜¯k1){\bm{P}}^{d}_{k-1}=({\bm{I}}-\bm{\gamma}_{k-1}^{d}\bar{\mathcal{A}}_{k-1}){\bm{P}}^{d,u}_{k-1}({\bm{I}}-\bm{\gamma}_{k-1}^{d}\bar{\mathcal{A}}_{k-1})^{\top};
28:𝜸kx=𝑷kx,u¯k(¯k𝑷kx,u¯k)1\bm{\gamma}_{k}^{x}={\bm{P}}_{k}^{x,u}\bar{\mathcal{B}}_{k}^{\top}(\bar{\mathcal{B}}_{k}{\bm{P}}_{k}^{x,u}\bar{\mathcal{B}}_{k}^{\top})^{-1};
29:𝒙^k=𝒙^ku𝜸kx(¯k𝒙^ku𝒄¯k)\hat{{\bm{x}}}_{k}=\hat{{\bm{x}}}^{u}_{k}-\bm{\gamma}_{k}^{x}(\bar{\mathcal{B}}_{k}\hat{{\bm{x}}}^{u}_{k}-\bar{{\bm{c}}}_{k});
30:𝑷kx=(𝑰𝜸kx¯k)𝑷kx,u(𝑰𝜸kx¯k){\bm{P}}^{x}_{k}=({\bm{I}}-\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k}){\bm{P}}^{x,u}_{k}({\bm{I}}-\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k})^{\top};
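One iteration of Algorithm 1 can be sketched in NumPy as follows. This is a hedged illustration, not the authors' code: the projection step uses a simple heuristic (constraint rows violated by the unconstrained estimate are treated as active) in place of the KKT-based active-set selection described in Section 3.2.

```python
import numpy as np

def care_step(x_hat, Px, u, y, A, B, C, G, Q, R,
              Ad=None, bd=None, Bx=None, cx=None):
    """One iteration of the CARE recursion (5)-(10) / Algorithm 1 (sketch)."""
    n = len(x_hat)
    # Prediction (lines 2-3)
    x_pred = A @ x_hat + B @ u
    Px_pred = A @ Px @ A.T + Q
    # Attack estimation (lines 5-9)
    Rt = np.linalg.inv(C @ Px_pred @ C.T + R)
    Pdu = np.linalg.inv(G.T @ C.T @ Rt @ C @ G)
    M = Pdu @ G.T @ C.T @ Rt
    d_u = M @ (y - C @ x_pred)
    Pxd = -Px @ A.T @ C.T @ M.T
    # Time update (lines 11-17)
    x_star = x_pred + G @ d_u
    Px_star = (A @ Px @ A.T + A @ Pxd @ G.T + G @ Pxd.T @ A.T
               + G @ Pdu @ G.T - G @ M @ C @ Q - Q @ C.T @ M.T @ G.T + Q)
    Rt_star = C @ Px_star @ C.T + R - C @ G @ M @ R - R @ M.T @ G.T @ C.T
    # Measurement update (lines 19-23)
    L = (Px_star @ C.T - G @ M @ R) @ np.linalg.pinv(Rt_star)
    ImLC = np.eye(n) - L @ C
    x_u = x_star + L @ (y - C @ x_star)
    Px_u = (ImLC @ G @ M @ R @ L.T + L @ R @ M.T @ G.T @ ImLC.T
            + ImLC @ Px_star @ ImLC.T + L @ R @ L.T)
    # Projection update (lines 25-30)
    d_hat, Pd = _project(d_u, Pdu, Ad, bd)
    x_new, Px_new = _project(x_u, Px_u, Bx, cx)
    return d_hat, Pd, x_new, Px_new

def _project(z, P, Amat, b):
    """Project estimate z with covariance P onto {z : Amat z <= b}.

    Violated rows are treated as active (a simplification of the KKT check)."""
    if Amat is None:
        return z, P
    active = Amat @ z > b
    if not active.any():
        return z, P
    Ab, bb = Amat[active], b[active]
    gamma = P @ Ab.T @ np.linalg.inv(Ab @ P @ Ab.T)
    IgA = np.eye(len(z)) - gamma @ Ab
    return z - gamma @ (Ab @ z - bb), IgA @ P @ IgA.T
```

With a bound |𝒅k|0.5|{\bm{d}}_{k}|\leq 0.5 and an unconstrained attack estimate exceeding it, the projection clips the estimate to the boundary while keeping the covariance update of lines 27 and 30.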

3.2 Algorithm Derivation

Prediction

The current state can be predicted by (5) under the assumption that the attack signal 𝒅k1=0{\bm{d}}_{k-1}=0. The prediction error covariance is

𝑷kx,𝔼[𝒙~k(𝒙~k)]=𝑨k1𝑷k1x𝑨k1+𝑸k1.\displaystyle{\bm{P}}_{k}^{x{},-}\triangleq\mathbb{E}[\tilde{{\bm{x}}}_{k}^{-}(\tilde{{\bm{x}}}_{k}^{-})^{\top}]={\bm{A}}_{k-1}{\bm{P}}^{x{}}_{k-1}{\bm{A}}_{k-1}^{\top}+{\bm{Q}}_{k-1}. (11)

Attack estimation

The linear attack estimator in (6) utilizes the difference between the measured output 𝒚k{\bm{y}}_{k} and the predicted output 𝑪k𝒙^k{\bm{C}}_{k}\hat{{\bm{x}}}^{-}_{k}. Substituting (3) and (5) into (6), we have

𝒅^k1u=𝑴k(\displaystyle\hat{{\bm{d}}}_{k-1}^{u}={\bm{M}}_{k}\big{(} 𝑪k𝑨k1𝒙~k1+𝑪k𝑮k1𝒅k1\displaystyle{\bm{C}}_{k}{\bm{A}}_{k-1}\tilde{{\bm{x}}}_{k-1}+{\bm{C}}_{k}{\bm{G}}_{k-1}{\bm{d}}_{k-1}
+𝑪k𝒘k1+𝒗k),\displaystyle+{\bm{C}}_{k}{\bm{w}}_{k-1}+{\bm{v}}_{k}\big{)},

which is a linear function of the attack signal 𝒅k1{\bm{d}}_{k-1}. Under the assumption that there is no projection update, i.e., the state and attack estimates are unconstrained, we design the optimal gain matrix 𝑴k{\bm{M}}_{k} such that the estimate becomes the best linear unbiased estimate (BLUE), as established by the following two propositions.

Proposition 2

Assume that there is no projection update and 𝔼[𝐱~0]=𝔼[𝐱~0]=0\mathbb{E}[\tilde{{\bm{x}}}_{0}]=\mathbb{E}[\tilde{{\bm{x}}}^{\star}_{0}]=0. The state estimates 𝐱^k\hat{{\bm{x}}}_{k} and the unconstrained attack estimates 𝐝^ku\hat{{\bm{d}}}^{u}_{k} are unbiased for all kk, i.e. 𝔼[𝐱~k]=𝔼[𝐝~k1u]=0,k\mathbb{E}[\tilde{{\bm{x}}}_{k}]=\mathbb{E}[\tilde{{\bm{d}}}^{u}_{k-1}]=0,\ \forall k, if and only if 𝐌k𝐂k𝐆k1=𝐈{\bm{M}}_{k}{\bm{C}}_{k}{\bm{G}}_{k-1}={\bm{I}}.

Proof: Sufficiency: Assuming that 𝑴k𝑪k𝑮k1=𝑰{\bm{M}}_{k}{\bm{C}}_{k}{\bm{G}}_{k-1}={\bm{I}}, the statement can be proved by induction. As the base case, we first show that the statement holds at k=1k=1. By definition, the errors of the time update and the measurement update in (7) and (8) are given by

𝒙~k\displaystyle\tilde{{\bm{x}}}^{\star}_{k} 𝒙k𝒙^k=𝑨k1𝒙~k1+𝑮k1𝒅~k1u+𝒘k1\displaystyle\triangleq{\bm{x}}_{k}-\hat{{\bm{x}}}^{\star}_{k}={\bm{A}}_{k-1}\tilde{{\bm{x}}}_{k-1}+{\bm{G}}_{k-1}\tilde{{\bm{d}}}^{u}_{k-1}+{\bm{w}}_{k-1} (12)
𝒙~ku\displaystyle\tilde{{\bm{x}}}^{u}_{k} 𝒙k𝒙^ku=(𝑰𝑳k𝑪k)𝒙~k𝑳k𝒗k,\displaystyle\triangleq{\bm{x}}_{k}-\hat{{\bm{x}}}^{u}_{k}=({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})\tilde{{\bm{x}}}^{\star}_{k}-{\bm{L}}_{k}{\bm{v}}_{k}, (13)

and the error of the unconstrained attack estimate is

𝒅~k1u\displaystyle\tilde{{\bm{d}}}^{u}_{k-1} 𝒅k1𝒅^k1u=𝒅k1𝑴k(𝑪k𝑨k1𝒙~k1\displaystyle\triangleq{\bm{d}}_{k-1}-\hat{{\bm{d}}}^{u}_{k-1}={\bm{d}}_{k-1}-{\bm{M}}_{k}\big{(}{\bm{C}}_{k}{\bm{A}}_{k-1}\tilde{{\bm{x}}}_{k-1}
+𝑪k𝑮k1𝒅k1+𝑪k𝒘k1+𝒗k)\displaystyle+{\bm{C}}_{k}{\bm{G}}_{k-1}{\bm{d}}_{k-1}+{\bm{C}}_{k}{\bm{w}}_{k-1}+{\bm{v}}_{k}\big{)}
=(𝑰𝑴k𝑪k𝑮k1)𝒅k1\displaystyle=({\bm{I}}-{\bm{M}}_{k}{\bm{C}}_{k}{\bm{G}}_{k-1}){\bm{d}}_{k-1} (14a)
𝑴k(𝑪k𝑨k1𝒙~k1+𝑪k𝒘k1+𝒗k).\displaystyle-{\bm{M}}_{k}\big{(}{\bm{C}}_{k}{\bm{A}}_{k-1}\tilde{{\bm{x}}}_{k-1}+{\bm{C}}_{k}{\bm{w}}_{k-1}+{\bm{v}}_{k}\big{)}. (14b)

Under the assumptions that 𝔼[𝒙~0]=𝔼[𝒙~0]=0\mathbb{E}[\tilde{{\bm{x}}}_{0}]=\mathbb{E}[\tilde{{\bm{x}}}^{\star}_{0}]=0 and the process noise and measurement noise are zero-mean Gaussian, i.e. 𝔼[𝒘k]=𝔼[𝒗k]=0,k\mathbb{E}[{\bm{w}}_{k}]=\mathbb{E}[{\bm{v}}_{k}]=0,\ \forall k, the expectation of the term (14b) is zero at k=1k=1. Since 𝒅k1{\bm{d}}_{k-1} is deterministic for all kk, i.e., 𝔼[𝒅k1]=𝒅k1\mathbb{E}[{{\bm{d}}_{k-1}}]={\bm{d}}_{k-1} is possibly non-zero, we have 𝔼[𝒅~0u]=0\mathbb{E}[\tilde{{\bm{d}}}^{u}_{0}]=0 if 𝑰𝑴1𝑪1𝑮0=0{\bm{I}}-{\bm{M}}_{1}{\bm{C}}_{1}{\bm{G}}_{0}=0, i.e., the expectation of the term (14a) is zero at k=1k=1. Then we have 𝔼[𝒙~1]=𝔼[𝒙~1u]=0\mathbb{E}[\tilde{{\bm{x}}}^{\star}_{1}]=\mathbb{E}[\tilde{{\bm{x}}}^{u}_{1}]=0 by applying the expectation operator to (12) and (13). In the inductive step, suppose 𝔼[𝒙~ku]=𝔼[𝒙~k]=0\mathbb{E}[\tilde{{\bm{x}}}^{u}_{k}]=\mathbb{E}[\tilde{{\bm{x}}}^{\star}_{k}]=0; then 𝔼[𝒅~ku]=0\mathbb{E}[\tilde{{\bm{d}}}^{u}_{k}]=0 if 𝑴k+1𝑪k+1𝑮k=𝑰{\bm{M}}_{k+1}{\bm{C}}_{k+1}{\bm{G}}_{k}={\bm{I}}. Then, similarly, we have 𝔼[𝒙~k+1]=𝔼[𝒙~k+1u]=0\mathbb{E}[\tilde{{\bm{x}}}^{\star}_{k+1}]=\mathbb{E}[\tilde{{\bm{x}}}^{u}_{k+1}]=0 by (12) and (13). Since there is no projection update, we have 𝔼[𝒙~k]=𝔼[𝒙~ku]=0k\mathbb{E}[\tilde{{\bm{x}}}_{k}]=\mathbb{E}[\tilde{{\bm{x}}}^{u}_{k}]=0\ \forall k.

Necessity: Assuming that 𝔼[𝐱~k]=𝔼[𝐝~k1]=0\mathbb{E}[\tilde{{\bm{x}}}_{k}]=\mathbb{E}[\tilde{{\bm{d}}}_{k-1}]=0 for all kk, or equivalently 𝔼[𝐱~ku]=𝔼[𝐝~k1u]=0\mathbb{E}[\tilde{{\bm{x}}}^{u}_{k}]=\mathbb{E}[\tilde{{\bm{d}}}^{u}_{k-1}]=0, the statement can also be proved by induction. In (14), if 𝔼[𝐝~0u]=0\mathbb{E}[\tilde{{\bm{d}}}^{u}_{0}]=0 for any 𝐝0{\bm{d}}_{0}, then we must have 𝐌1𝐂1𝐆0=𝐈{\bm{M}}_{1}{\bm{C}}_{1}{\bm{G}}_{0}={\bm{I}}. Following a similar procedure, we can show that the necessity holds. \blacksquare

Proposition 3

Assume that there is no projection update and 𝔼[𝐱~0]=𝔼[𝐱~0]=0\mathbb{E}[\tilde{{\bm{x}}}_{0}]=\mathbb{E}[\tilde{{\bm{x}}}^{\star}_{0}]=0. The unconstrained attack estimates 𝐝^ku\hat{{\bm{d}}}^{u}_{k} are BLUE if

𝑴k=(𝑮k1𝑪k𝑹~k𝑪k𝑮k1)1𝑮k1𝑪k𝑹~k,\displaystyle{\bm{M}}_{k}=\big{(}{\bm{G}}_{k-1}^{\top}{\bm{C}}_{k}^{\top}\tilde{{\bm{R}}}_{k}{\bm{C}}_{k}{\bm{G}}_{k-1}\big{)}^{-1}{\bm{G}}_{k-1}^{\top}{\bm{C}}_{k}^{\top}\tilde{{\bm{R}}}_{k}, (15)

where 𝐑~k(𝐂k𝐏kx,𝐂k+𝐑k)1\tilde{{\bm{R}}}_{k}\triangleq({\bm{C}}_{k}{\bm{P}}_{k}^{x{},-}{\bm{C}}_{k}^{\top}+{\bm{R}}_{k})^{-1}.

Proof: Substituting (3a) into (3b), we have

𝒚k\displaystyle{\bm{y}}_{k} =𝑪k𝑮k1𝒅k1\displaystyle={\bm{C}}_{k}{\bm{G}}_{k-1}{\bm{d}}_{k-1}
+𝑪k(𝑨k1𝒙k1+𝑩k1𝒖k1+𝒘k1)+𝒗k.\displaystyle+{\bm{C}}_{k}\big{(}{\bm{A}}_{k-1}{\bm{x}}_{k-1}+{\bm{B}}_{k-1}{\bm{u}}_{k-1}+{\bm{w}}_{k-1}\big{)}+{\bm{v}}_{k}. (16)

Subtracting 𝑪k𝒙^k1{\bm{C}}_{k}\hat{{\bm{x}}}_{k-1} from both sides of (16) yields

𝒚k𝑪k𝒙^k1\displaystyle{\bm{y}}_{k}-{\bm{C}}_{k}\hat{{\bm{x}}}_{k-1} =𝑪k𝑮k1𝒅k1\displaystyle={\bm{C}}_{k}{\bm{G}}_{k-1}{\bm{d}}_{k-1}
+𝑪k(𝑨k1𝒙~k1+𝒘k1)+𝒗kerror term.\displaystyle\underbrace{+{\bm{C}}_{k}\big{(}{\bm{A}}_{k-1}\tilde{{\bm{x}}}^{-}_{k-1}+{\bm{w}}_{k-1}\big{)}+{\bm{v}}_{k}}_{\textit{error term}}. (17)

Since the covariances of the process noise 𝒘k1{\bm{w}}_{k-1} and the measurement noise 𝒗k{\bm{v}}_{k} are known, with (11), the covariance of the error term in (17) can be expressed as 𝑪k𝑷kx,𝑪k+𝑹k{\bm{C}}_{k}{\bm{P}}_{k}^{x{},-}{\bm{C}}_{k}^{\top}+{\bm{R}}_{k}. Applying the Gauss-Markov theorem (see Appendix A), we can get the minimum-variance-unbiased linear estimator (BLUE) of 𝒅k1{\bm{d}}_{k-1} in (6) with 𝑴k=(𝑮k1𝑪k𝑹~k𝑪k𝑮k1)1𝑮k1𝑪k𝑹~k{\bm{M}}_{k}=\big{(}{\bm{G}}_{k-1}^{\top}{\bm{C}}_{k}^{\top}\tilde{{\bm{R}}}_{k}{\bm{C}}_{k}{\bm{G}}_{k-1}\big{)}^{-1}{\bm{G}}_{k-1}^{\top}{\bm{C}}_{k}^{\top}\tilde{{\bm{R}}}_{k}, where 𝑹~k(𝑪k𝑷kx,𝑪k+𝑹k)1\tilde{{\bm{R}}}_{k}\triangleq({\bm{C}}_{k}{\bm{P}}_{k}^{x{},-}{\bm{C}}_{k}^{\top}+{\bm{R}}_{k})^{-1}. \blacksquare
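The Gauss-Markov step can be checked numerically. For a residual model r = H d + e with cov(e) = S, where H and S play the roles of 𝑪k𝑮k1{\bm{C}}_{k}{\bm{G}}_{k-1} and 𝑪k𝑷kx,𝑪k+𝑹k{\bm{C}}_{k}{\bm{P}}_{k}^{x{},-}{\bm{C}}_{k}^{\top}+{\bm{R}}_{k} in (17), the gain of the form (15) is unbiased and its Monte-Carlo error variance matches the predicted covariance. The matrices below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Residual model r = H d + e, cov(e) = S; illustrative stand-ins for (17).
H = np.array([[1.0], [0.5]])
S = np.array([[0.2, 0.05], [0.05, 0.1]])
Si = np.linalg.inv(S)

M = np.linalg.inv(H.T @ Si @ H) @ H.T @ Si   # BLUE gain, cf. (15)
P_du = np.linalg.inv(H.T @ Si @ H)           # predicted error covariance P^{d,u}

d_true = np.array([0.7])
ests = np.array([M @ (H @ d_true + rng.multivariate_normal(np.zeros(2), S))
                 for _ in range(20000)])
# M H = I gives unbiasedness; the sample variance of ests approaches P_du.
```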

Remark 4

The rank condition rank(𝐂k𝐆k1)=nd\operatorname*{rank}({\bm{C}}_{k}{\bm{G}}_{k-1})=n_{d} is the sufficient condition of 𝐌k𝐂k𝐆k1=𝐈{\bm{M}}_{k}{\bm{C}}_{k}{\bm{G}}_{k-1}={\bm{I}} needed in Proposition 2 if 𝐌k{\bm{M}}_{k} is found by (15) in Proposition 3.

The error covariance can be found by 𝑷k1d,u=𝑴k𝑹~k1𝑴k=(𝑮k1𝑪k𝑹~k𝑪k𝑮k1)1.{\bm{P}}_{k-1}^{d,u}={\bm{M}}_{k}\tilde{{\bm{R}}}_{k}^{-1}{\bm{M}}_{k}^{\top}=\big{(}{\bm{G}}_{k-1}^{\top}{\bm{C}}_{k}^{\top}\tilde{{\bm{R}}}_{k}{\bm{C}}_{k}{\bm{G}}_{k-1}\big{)}^{-1}. The cross error covariance of the state estimate and the attack estimate is 𝑷k1xd=𝑷k1x𝑨k1𝑪k𝑴k.{\bm{P}}_{k-1}^{xd}=-{\bm{P}}^{x}_{k-1}{\bm{A}}_{k-1}^{\top}{\bm{C}}_{k}^{\top}{\bm{M}}_{k}^{\top}.

Time update

Given the unconstrained attack estimate 𝒅^k1u\hat{{\bm{d}}}^{u}_{k-1}, the state prediction 𝒙^k\hat{{\bm{x}}}^{-}_{k} can be updated as in (7). We derive the error covariance of 𝒙^k\hat{{\bm{x}}}^{\star}_{k} as

𝑷kx\displaystyle{\bm{P}}_{k}^{x\star} 𝔼[𝒙~k(𝒙~k)]\displaystyle\triangleq\mathbb{E}\big{[}\tilde{{\bm{x}}}^{\star}_{k}(\tilde{{\bm{x}}}^{\star}_{k})^{\top}\big{]}
=𝑨k1𝑷k1x𝑨k1+𝑨k1𝑷k1xd𝑮k1\displaystyle={\bm{A}}_{k-1}{\bm{P}}_{k-1}^{x}{\bm{A}}_{k-1}^{\top}+{\bm{A}}_{k-1}{\bm{P}}_{k-1}^{xd}{\bm{G}}_{k-1}^{\top}
+𝑮k1𝑷k1dx𝑨k1+𝑮k1𝑷k1d,u𝑮k1+𝑸k1\displaystyle+{\bm{G}}_{k-1}{\bm{P}}_{k-1}^{dx}{\bm{A}}_{k-1}^{\top}+{\bm{G}}_{k-1}{\bm{P}}_{k-1}^{d,u}{\bm{G}}_{k-1}^{\top}+{\bm{Q}}_{k-1}
𝑮k1𝑴k𝑪k𝑸k1𝑸k1𝑪k𝑴k𝑮k1,\displaystyle-{\bm{G}}_{k-1}{\bm{M}}_{k}{\bm{C}}_{k}{\bm{Q}}_{k-1}-{\bm{Q}}_{k-1}{\bm{C}}_{k}^{\top}{\bm{M}}_{k}^{\top}{\bm{G}}_{k-1}^{\top},

where 𝑷k1dx=(𝑷k1xd){\bm{P}}_{k-1}^{dx}=({\bm{P}}_{k-1}^{xd})^{\top}.

Measurement update

In this step, the measurement 𝒚k{\bm{y}}_{k} is used to update the propagated estimate 𝒙^k\hat{{\bm{x}}}^{\star}_{k} as shown in (8). The covariance of the state estimation error is

𝑷kx,u\displaystyle{\bm{P}}^{x,u}_{k}\triangleq 𝔼[(𝒙~ku)(𝒙~ku)]\displaystyle\mathbb{E}[(\tilde{{\bm{x}}}_{k}^{u})(\tilde{{\bm{x}}}_{k}^{u})^{\top}]
=\displaystyle= (𝑰𝑳k𝑪k)𝑮k1𝑴k𝑹k𝑳k+𝑳k𝑹k𝑳k\displaystyle({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k}){\bm{G}}_{k-1}{\bm{M}}_{k}{\bm{R}}_{k}{\bm{L}}_{k}^{\top}+{\bm{L}}_{k}{\bm{R}}_{k}{\bm{L}}_{k}^{\top}
+𝑳k𝑹k𝑴k𝑮k1(𝑰𝑳k𝑪k)\displaystyle+{\bm{L}}_{k}{\bm{R}}_{k}{\bm{M}}_{k}^{\top}{\bm{G}}_{k-1}^{\top}({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})^{\top}
+(𝑰𝑳k𝑪k)𝑷kx(𝑰𝑳k𝑪k).\displaystyle+({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k}){\bm{P}}^{x\star}_{k}({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})^{\top}.

The gain matrix 𝑳k{\bm{L}}_{k} is obtained by minimizing the trace of 𝑷kx,u{\bm{P}}^{x,u}_{k}, i.e. min𝑳ktr(𝑷kx,u).\min_{{\bm{L}}_{k}}\operatorname*{tr}({\bm{P}}^{x,u}_{k}). The solution is given by 𝑳k=(𝑷kx𝑪k𝑮k1𝑴k𝑹k)𝑹~k,{\bm{L}}_{k}=({\bm{P}}^{x\star}_{k}{\bm{C}}_{k}^{\top}-{\bm{G}}_{k-1}{\bm{M}}_{k}{\bm{R}}_{k})\tilde{{\bm{R}}}^{\star\dagger}_{k}, where 𝑹~k𝑪k𝑷kx𝑪k+𝑹k𝑪k𝑮k1𝑴k𝑹k𝑹k𝑴k𝑮k1𝑪k\tilde{{\bm{R}}}^{\star}_{k}\triangleq{\bm{C}}_{k}{\bm{P}}^{x\star}_{k}{\bm{C}}_{k}^{\top}+{\bm{R}}_{k}-{\bm{C}}_{k}{\bm{G}}_{k-1}{\bm{M}}_{k}{\bm{R}}_{k}-{\bm{R}}_{k}{\bm{M}}_{k}^{\top}{\bm{G}}_{k-1}^{\top}{\bm{C}}_{k}^{\top}.
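The gain computation and the four-term covariance above can be sketched as follows (illustrative only; `GM` stands for the product 𝑮_{k−1}𝑴_k, and a pseudoinverse realizes 𝑹~^{⋆†}_k as in the text):

```python
import numpy as np

def measurement_update(P_star, C, GM, R):
    """Gain L_k minimizing tr(P^{x,u}_k) and the resulting covariance."""
    n = P_star.shape[0]
    S = GM @ R                                       # G_{k-1} M_k R_k
    R_star = C @ P_star @ C.T + R - C @ S - S.T @ C.T
    L = (P_star @ C.T - S) @ np.linalg.pinv(R_star)  # gain L_k
    I_LC = np.eye(n) - L @ C
    P_xu = (I_LC @ S @ L.T + L @ R @ L.T             # four covariance terms
            + L @ S.T @ I_LC.T + I_LC @ P_star @ I_LC.T)
    return L, P_xu
```

Since 𝑳_k minimizes the trace of the covariance, perturbing the gain away from this solution can only increase tr(𝑷^{x,u}_k).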

Projection update

We are now in a position to project the estimates onto the constrained space. Applying the first constraint in (4) to the unconstrained attack estimate 𝒅^k1u\hat{{\bm{d}}}_{k-1}^{u}, the attack estimation problem can be formulated as the following constrained convex optimization problem:

𝒅^k1=argmin𝒅(𝒅𝒅^k1u)𝑾k1d(𝒅𝒅^k1u)subject to𝒜k1𝒅𝒃k1,\displaystyle\begin{split}\hat{{\bm{d}}}_{k-1}&=\operatorname*{arg\,min}\limits_{\bm{d}}({\bm{d}}-\hat{{\bm{d}}}_{k-1}^{u})^{\top}{\bm{W}}^{d}_{k-1}({\bm{d}}-\hat{{\bm{d}}}_{k-1}^{u})\\ &\text{subject to}\ \mathcal{A}_{k-1}{\bm{d}}\leq{\bm{b}}_{k-1},\end{split} (18)

where 𝑾k1d{\bm{W}}^{d}_{k-1} can be any symmetric positive definite weighting matrix. In the current paper, we select 𝑾k1d=(𝑷k1d,u)1{\bm{W}}_{k-1}^{d}=({\bm{P}}_{k-1}^{d,u})^{-1}, which results in the smallest error covariance, as shown in simon2002kalman . From the Karush-Kuhn-Tucker (KKT) conditions of optimality, we can find the corresponding active constraints. We denote by 𝒜¯k1\bar{\mathcal{A}}_{k-1} and 𝒃¯k1\bar{{\bm{b}}}_{k-1} the rows of 𝒜k1\mathcal{A}_{k-1} and the elements of 𝒃k1{\bm{b}}_{k-1} corresponding to the active constraints of 𝒜k1𝒅𝒃k1\mathcal{A}_{k-1}{\bm{d}}\leq{\bm{b}}_{k-1}. Then (18) becomes

𝒅^k1=argmin𝒅(𝒅𝒅^k1u)(𝑷k1d,u)1(𝒅𝒅^k1u)subject to𝒜¯k1𝒅=𝒃¯k1.\displaystyle\begin{split}\hat{{\bm{d}}}_{k-1}&=\operatorname*{arg\,min}\limits_{\bm{d}}({\bm{d}}-\hat{{\bm{d}}}_{k-1}^{u})^{\top}({\bm{P}}_{k-1}^{d,u})^{-1}({\bm{d}}-\hat{{\bm{d}}}_{k-1}^{u})\\ &\text{subject to}\ \bar{\mathcal{A}}_{k-1}{\bm{d}}=\bar{{\bm{b}}}_{k-1}.\end{split} (19)

The solution of (19) can be found by 𝒅^k1=𝒅^k1u𝜸k1d(𝒜¯k1𝒅^k1u𝒃¯k1),\hat{{\bm{d}}}_{k-1}=\hat{{\bm{d}}}_{k-1}^{u}-\bm{\gamma}_{k-1}^{d}(\bar{\mathcal{A}}_{k-1}\hat{{\bm{d}}}_{k-1}^{u}-\bar{{\bm{b}}}_{k-1}), where

𝜸k1d\displaystyle\bm{\gamma}_{k-1}^{d} 𝑷k1d,u𝒜¯k1(𝒜¯k1𝑷k1d,u𝒜¯k1)1.\displaystyle\triangleq{\bm{P}}_{k-1}^{d,u}\bar{\mathcal{A}}_{k-1}^{\top}(\bar{\mathcal{A}}_{k-1}{\bm{P}}_{k-1}^{d,u}\bar{\mathcal{A}}_{k-1}^{\top})^{-1}. (20)

The attack estimation error is

𝒅~k1\displaystyle\tilde{{\bm{d}}}_{k-1} =(𝑰𝜸k1d𝒜¯k1)𝒅~k1u+𝜸k1d(𝒜¯k1𝒅k1𝒃¯k1)\displaystyle=({\bm{I}}-\bm{\gamma}_{k-1}^{d}\bar{\mathcal{A}}_{k-1})\tilde{{\bm{d}}}_{k-1}^{u}+\bm{\gamma}_{k-1}^{d}(\bar{\mathcal{A}}_{k-1}{\bm{d}}_{k-1}-\bar{{\bm{b}}}_{k-1})
=𝒅~k1u+𝜸k1d(𝒜¯k1𝒅^k1u𝒃¯k1).\displaystyle=\tilde{{\bm{d}}}^{u}_{k-1}+\bm{\gamma}_{k-1}^{d}(\bar{\mathcal{A}}_{k-1}\hat{{\bm{d}}}^{u}_{k-1}-\bar{{\bm{b}}}_{k-1}). (21)

The error covariance can be found by

𝑷k1d\displaystyle{\bm{P}}^{d}_{k-1} 𝔼[𝒅~k1𝒅~k1]\displaystyle\triangleq{\mathbb{E}}[\tilde{{\bm{d}}}_{k-1}\tilde{{\bm{d}}}_{k-1}^{\top}]
=(𝑰𝜸k1d𝒜¯k1)𝑷k1d,u(𝑰𝜸k1d𝒜¯k1)\displaystyle=({\bm{I}}-\bm{\gamma}_{k-1}^{d}\bar{\mathcal{A}}_{k-1}){\bm{P}}^{d,u}_{k-1}({\bm{I}}-\bm{\gamma}_{k-1}^{d}\bar{\mathcal{A}}_{k-1})^{\top} (22)

under the assumption that 𝜸k1d(𝒜¯k1𝒅k1𝒃¯k1)=0\bm{\gamma}_{k-1}^{d}(\bar{\mathcal{A}}_{k-1}{\bm{d}}_{k-1}-\bar{{\bm{b}}}_{k-1})=0 holds. Notice that this assumption holds when the ground truth 𝒅k1{\bm{d}}_{k-1} satisfies the active constraint 𝒜¯k1𝒅k1=𝒃¯k1\bar{\mathcal{A}}_{k-1}{\bm{d}}_{k-1}=\bar{{\bm{b}}}_{k-1}. From (20), it can be verified that 𝜸k1d𝒜¯k1𝑷k1d,u=𝜸k1d𝒜¯k1𝑷k1d,u(𝜸k1d𝒜¯k1)\bm{\gamma}_{k-1}^{d}\bar{\mathcal{A}}_{k-1}{\bm{P}}^{d,u}_{k-1}=\bm{\gamma}_{k-1}^{d}\bar{\mathcal{A}}_{k-1}{\bm{P}}^{d,u}_{k-1}(\bm{\gamma}_{k-1}^{d}\bar{\mathcal{A}}_{k-1})^{\top}. Therefore, from (22) we have

𝑷k1d=\displaystyle{\bm{P}}^{d}_{k-1}= 𝑷k1d,u𝜸k1d𝒜¯k1𝑷k1d,u𝑷k1d,u(𝜸k1d𝒜¯k1)\displaystyle{\bm{P}}^{d,u}_{k-1}-\bm{\gamma}_{k-1}^{d}\bar{\mathcal{A}}_{k-1}{\bm{P}}^{d,u}_{k-1}-{\bm{P}}^{d,u}_{k-1}(\bm{\gamma}_{k-1}^{d}\bar{\mathcal{A}}_{k-1})^{\top}
+𝜸k1d𝒜¯k1𝑷k1d,u(𝜸k1d𝒜¯k1)\displaystyle+\bm{\gamma}_{k-1}^{d}\bar{\mathcal{A}}_{k-1}{\bm{P}}^{d,u}_{k-1}(\bm{\gamma}_{k-1}^{d}\bar{\mathcal{A}}_{k-1})^{\top}
=\displaystyle= (𝑰𝜸k1d𝒜¯k1)𝑷k1d,u.\displaystyle({\bm{I}}-\bm{\gamma}_{k-1}^{d}\bar{\mathcal{A}}_{k-1}){\bm{P}}^{d,u}_{k-1}. (23)
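The projection (19)-(23) is a closed-form update once the active set is known. A minimal sketch (assuming the active rows have already been identified via the KKT conditions; names are illustrative):

```python
import numpy as np

def project_onto_active(est_u, P_u, A_bar, b_bar):
    """Project an unconstrained estimate onto the active constraints
    A_bar z = b_bar, weighted by W = P_u^{-1} (cf. (19)-(23))."""
    gamma = P_u @ A_bar.T @ np.linalg.inv(A_bar @ P_u @ A_bar.T)  # (20)
    est = est_u - gamma @ (A_bar @ est_u - b_bar)   # projected estimate
    P = (np.eye(len(est_u)) - gamma @ A_bar) @ P_u  # projected covariance (23)
    return est, P
```

The projected estimate satisfies the active constraints exactly, and the covariance can only shrink. The state projection (25)-(27) takes the identical form, with (ℬ̄_k, c̄_k, 𝑷^{x,u}_k) in place of (𝒜̄_{k−1}, b̄_{k−1}, 𝑷^{d,u}_{k−1}).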

Similarly, applying the second constraint in (4) to the unconstrained state estimate 𝒙^ku\hat{{\bm{x}}}_{k}^{u}, we formalize the state estimation problem as follows:

𝒙^k=argmin𝒙(𝒙𝒙^ku)𝑾kx(𝒙𝒙^ku)subject tok𝒙𝒄k,\displaystyle\begin{split}\hat{{\bm{x}}}_{k}&=\operatorname*{arg\,min}_{{\bm{x}}}({\bm{x}}-\hat{{\bm{x}}}^{u}_{k})^{\top}{\bm{W}}^{x}_{k}({\bm{x}}-\hat{{\bm{x}}}^{u}_{k})\\ &\text{subject to}\ \mathcal{B}_{k}{\bm{x}}\leq{\bm{c}}_{k},\end{split} (24)

where we select 𝑾kx=(𝑷kx,u)1{\bm{W}}^{x}_{k}=({\bm{P}}^{x,u}_{k})^{-1} for the smallest error covariance. We denote by ¯k\bar{\mathcal{B}}_{k} and 𝒄¯k\bar{{\bm{c}}}_{k} the rows of k\mathcal{B}_{k} and the elements of 𝒄k{\bm{c}}_{k} corresponding to the active constraints of k𝒙𝒄k\mathcal{B}_{k}{\bm{x}}\leq{\bm{c}}_{k}. Using the active constraints, we reformulate (24) as follows:

𝒙^k=argmin𝒙(𝒙𝒙^ku)(𝑷kx,u)1(𝒙𝒙^ku)subject to¯k𝒙=𝒄¯k.\displaystyle\begin{split}\hat{{\bm{x}}}_{k}&=\operatorname*{arg\,min}\limits_{\bm{x}}({\bm{x}}-\hat{{\bm{x}}}^{u}_{k})^{\top}({\bm{P}}^{x,u}_{k})^{-1}({\bm{x}}-\hat{{\bm{x}}}^{u}_{k})\\ &\text{subject to}\ \bar{\mathcal{B}}_{k}{\bm{x}}=\bar{{\bm{c}}}_{k}.\end{split} (25)

The solution of (25) is given by 𝒙^k=𝒙^ku𝜸kx(¯k𝒙^ku𝒄¯k),\hat{{\bm{x}}}_{k}=\hat{{\bm{x}}}^{u}_{k}-\bm{\gamma}_{k}^{x}(\bar{\mathcal{B}}_{k}\hat{{\bm{x}}}^{u}_{k}-\bar{{\bm{c}}}_{k}), where

𝜸kx\displaystyle\bm{\gamma}_{k}^{x} 𝑷kx,u¯k(¯k𝑷kx,u¯k)1.\displaystyle\triangleq{\bm{P}}^{x,u}_{k}\bar{\mathcal{B}}_{k}^{\top}(\bar{\mathcal{B}}_{k}{\bm{P}}^{x,u}_{k}\bar{\mathcal{B}}_{k}^{\top})^{-1}. (26)

Under the assumption that 𝜸kx(¯k𝒙^ku𝒄¯k)=0\bm{\gamma}_{k}^{x}(\bar{\mathcal{B}}_{k}\hat{{\bm{x}}}^{u}_{k}-\bar{{\bm{c}}}_{k})=0 holds, the state estimation error covariance can be expressed as

𝑷kx\displaystyle{\bm{P}}^{x}_{k} =𝚪¯k𝑷kx,u𝚪¯k,\displaystyle=\bar{\bm{\Gamma}}_{k}{\bm{P}}^{x,u}_{k}\bar{\bm{\Gamma}}_{k}^{\top}, (27)

where 𝚪¯k𝑰𝜸kx¯k\bar{\bm{\Gamma}}_{k}\triangleq{\bm{I}}-\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k}. Notice that this assumption holds when the ground truth 𝒙k{\bm{x}}_{k} satisfies the active constraint ¯k𝒙k=𝒄¯k\bar{\mathcal{B}}_{k}{\bm{x}}_{k}=\bar{{\bm{c}}}_{k}.

4 Performance and Stability Analysis

In Section 4.1, we show that the projection induced by inequality constraints improves attack-resilient estimation accuracy and detection performance by decreasing estimation errors and the false negative rate in attack detection. Notice that the estimate 𝒅^k1\hat{{\bm{d}}}_{k-1} and the ground truth 𝒅k1{\bm{d}}_{k-1} satisfy the active constraint 𝒜¯k1𝒅^k1𝒃¯k1=0\bar{\mathcal{A}}_{k-1}\hat{{\bm{d}}}_{k-1}-\bar{{\bm{b}}}_{k-1}=0 in (19) and the inequality constraint 𝒜k1𝒅k1𝒃k1\mathcal{A}_{k-1}{\bm{d}}_{k-1}\leq{\bm{b}}_{k-1} in (4), respectively. However, the ground truth need not satisfy the active constraints. When it does not, from (21) we have

𝔼[𝒅~k1]\displaystyle\mathbb{E}[\tilde{{\bm{d}}}_{k-1}] =𝜸k1d(𝒜¯k1𝒅k1𝒃¯k1)0.\displaystyle=\bm{\gamma}_{k-1}^{d}(\bar{\mathcal{A}}_{k-1}{\bm{d}}_{k-1}-\bar{{\bm{b}}}_{k-1})\neq 0. (28)

A similar statement holds for the state estimation error:

𝔼[𝒙~k]\displaystyle\mathbb{E}[\tilde{{\bm{x}}}_{k}] =𝜸kx(¯k𝒙k𝒄¯k)0.\displaystyle=\bm{\gamma}_{k}^{x}(\bar{\mathcal{B}}_{k}{{\bm{x}}}_{k}-\bar{{\bm{c}}}_{k})\neq 0. (29)

These considerations indicate that the projection potentially induces biased estimates, rendering the traditional stability analysis for unbiased estimation invalid. In this context, we prove in Section 4.2 that the estimation errors of CARE are nevertheless practically exponentially stable in mean square.

4.1 Performance Analysis

To analyze the performance gain from the projection, we first decompose the state estimation error 𝒙~k\tilde{{\bm{x}}}_{k} into two orthogonal spaces as follows:

𝒙~k\displaystyle\tilde{{\bm{x}}}_{k} =(𝑰𝜸kx¯k)𝒙~k+𝜸kx¯k𝒙~k.\displaystyle=({\bm{I}}-\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k})\tilde{{\bm{x}}}_{k}+\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k}\tilde{{\bm{x}}}_{k}. (30)

We will show that the errors in the space 𝑰𝜸kx¯k{\bm{I}}-\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k} remain unchanged by the projection, while the errors in the space 𝜸kx¯k\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k} are reduced by the projection, as stated in Lemma 5.

Lemma 5

The decomposition of 𝐱~k\tilde{{\bm{x}}}_{k} in the space 𝐈𝛄kx¯k{\bm{I}}-\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k} is equal to that of 𝐱~ku\tilde{{\bm{x}}}_{k}^{u}, and the decomposition of 𝐱~k\tilde{{\bm{x}}}_{k} in the space 𝛄kx¯k\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k} is equal to that of 𝐱~ku\tilde{{\bm{x}}}_{k}^{u} scaled by 𝛂k\bm{\alpha}_{k}, i.e.

(𝑰𝜸kx¯k)𝒙~k\displaystyle({\bm{I}}-\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k})\tilde{{\bm{x}}}_{k} =(𝑰𝜸kx¯k)𝒙~ku\displaystyle=({\bm{I}}-\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k})\tilde{{\bm{x}}}_{k}^{u} (31)
𝜸kx¯k𝒙~k\displaystyle\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k}\tilde{{\bm{x}}}_{k} =𝜶k𝜸kx¯k𝒙~ku,\displaystyle=\bm{\alpha}_{k}\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k}\tilde{{\bm{x}}}_{k}^{u}, (32)

where 𝛂k=diag(𝛂k1,,𝛂kn)\bm{\alpha}_{k}=\operatorname*{\textit{diag}}{(\bm{\alpha}_{k}^{1},\cdots,\bm{\alpha}_{k}^{n})}, and

𝜶ki(𝜸kx¯k𝒙~k)(i)((𝜸kx¯k𝒙~ku)(i))[0,1)\bm{\alpha}_{k}^{i}\triangleq(\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k}\tilde{{\bm{x}}}_{k})(i)((\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k}\tilde{{\bm{x}}}_{k}^{u})(i))^{\dagger}\in[0,1)

for i=1,,ni=1,\cdots,n. Similarly, it holds that (𝐈𝛄kd𝒜¯k)𝐝~k=(𝐈𝛄kd𝒜¯k)𝐝~ku({\bm{I}}-\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k})\tilde{{\bm{d}}}_{k}=({\bm{I}}-\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k})\tilde{{\bm{d}}}_{k}^{u} and 𝛄kd𝒜¯k𝐝~k=𝛋k𝛄kd𝒜¯k𝐝~ku\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k}\tilde{{\bm{d}}}_{k}=\bm{\kappa}_{k}\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k}\tilde{{\bm{d}}}_{k}^{u}, where 𝛋k=diag(𝛋k1,,𝛋kn)\bm{\kappa}_{k}=\operatorname*{\textit{diag}}{(\bm{\kappa}_{k}^{1},\cdots,\bm{\kappa}_{k}^{n})}, and 𝛋ki(𝛄kd𝒜¯k𝐝~k)(i)((𝛄kd𝒜¯k𝐝~ku)(i))[0,1)\bm{\kappa}_{k}^{i}\triangleq(\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k}\tilde{{\bm{d}}}_{k})(i)((\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k}\tilde{{\bm{d}}}_{k}^{u})(i))^{\dagger}\in[0,1) for i=1,,ni=1,\cdots,n.

Proof: The relationship in (31) can be obtained by applying ¯k𝒙^k=𝒄¯k\bar{\mathcal{B}}_{k}\hat{{\bm{x}}}_{k}=\bar{{\bm{c}}}_{k} to

𝒙~k\displaystyle\tilde{{\bm{x}}}_{k} =𝒙k𝒙^k=𝒙k(𝒙^ku𝜸kx(¯k𝒙^ku𝒄¯k))\displaystyle={\bm{x}}_{k}-\hat{{\bm{x}}}_{k}={\bm{x}}_{k}-(\hat{{\bm{x}}}_{k}^{u}-\bm{\gamma}_{k}^{x}(\bar{\mathcal{B}}_{k}\hat{{\bm{x}}}_{k}^{u}-\bar{{\bm{c}}}_{k}))
=𝒙~ku+𝜸kx(¯k𝒙^ku𝒄¯k)\displaystyle=\tilde{{\bm{x}}}_{k}^{u}+\bm{\gamma}_{k}^{x}(\bar{\mathcal{B}}_{k}\hat{{\bm{x}}}_{k}^{u}-\bar{{\bm{c}}}_{k})
=𝒙~ku+𝜸kx(¯k𝒙^ku¯k𝒙^k)\displaystyle=\tilde{{\bm{x}}}_{k}^{u}+\bm{\gamma}_{k}^{x}(\bar{\mathcal{B}}_{k}\hat{{\bm{x}}}_{k}^{u}-\bar{\mathcal{B}}_{k}\hat{{\bm{x}}}_{k})
=𝒙~ku𝜸kx(¯k𝒙~ku¯k𝒙~k),\displaystyle=\tilde{{\bm{x}}}_{k}^{u}-\bm{\gamma}_{k}^{x}(\bar{\mathcal{B}}_{k}\tilde{{\bm{x}}}_{k}^{u}-\bar{\mathcal{B}}_{k}\tilde{{\bm{x}}}_{k}),

which implies (31). The solution of ¯k𝒙𝒄¯k\bar{\mathcal{B}}_{k}{\bm{x}}\leq\bar{{\bm{c}}}_{k} defines a closed convex set 𝒞k{\mathcal{C}}_{k}. The point 𝒙^ku\hat{{\bm{x}}}_{k}^{u} is not an element of the convex set. The point 𝒙^k\hat{{\bm{x}}}_{k} has the minimum distance from 𝒙^ku\hat{{\bm{x}}}_{k}^{u} with metric d(a,b)ab𝑾kxd(a,b)\triangleq\|a-b\|_{{\bm{W}}_{k}^{x}} in the convex set 𝒞k{\mathcal{C}}_{k} by (25). Since the solution 𝒙^k\hat{{\bm{x}}}_{k} is in the closed set 𝒞k{\mathcal{C}}_{k}, and 𝜸kx¯k\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k} is a weighted projection with weight 𝑾kx{\bm{W}}_{k}^{x}, the relationship (32) holds. The statements for attack estimation errors can be proven by a similar procedure, which is omitted here. \blacksquare

With the results from Lemma 5, we can show that the projection reduces the estimation errors and the error covariances, as formulated in Theorem 6.

Theorem 6

CARE reduces the state and attack estimation errors and their error covariances relative to the unconstrained algorithm, i.e., 𝐱~k𝐱~ku\|\tilde{{\bm{x}}}_{k}\|\leq\|\tilde{{\bm{x}}}_{k}^{u}\|, 𝐝~k𝐝~ku\|\tilde{{\bm{d}}}_{k}\|\leq\|\tilde{{\bm{d}}}_{k}^{u}\|, 𝐏kx𝐏kx,u{\bm{P}}_{k}^{x}\leq{\bm{P}}_{k}^{x,u}, and 𝐏kd𝐏kd,u{\bm{P}}_{k}^{d}\leq{\bm{P}}_{k}^{d,u}. Strict inequality holds if rank(¯k)0\operatorname*{rank}(\bar{\mathcal{B}}_{k})\neq 0 and rank(𝒜¯k)0\operatorname*{rank}(\bar{\mathcal{A}}_{k})\neq 0, respectively.

Proof: The statement for 𝒙~k𝒙~ku\|\tilde{{\bm{x}}}_{k}\|\leq\|\tilde{{\bm{x}}}_{k}^{u}\| is the direct result of Lemma 5, where strict inequality holds if 𝜶ki0\bm{\alpha}_{k}^{i}\neq 0 for some ii. The statement for 𝒅~k𝒅~ku\|\tilde{{\bm{d}}}_{k}\|\leq\|\tilde{{\bm{d}}}_{k}^{u}\| can be proved by a similar procedure. To show the rest of the properties, we first identify the equality

(𝑰𝜸kx¯k)𝜸kx¯k=0.\displaystyle({\bm{I}}-\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k})^{\top}\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k}=0. (33)

Since we have ¯k𝜸kx=𝑰\bar{\mathcal{B}}_{k}\bm{\gamma}_{k}^{x}={\bm{I}} by (26), it holds that 𝜸kx¯k𝜸kx=𝜸kx\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k}\bm{\gamma}_{k}^{x}=\bm{\gamma}_{k}^{x}, and ¯k𝜸kx¯k=¯k\bar{\mathcal{B}}_{k}\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k}=\bar{\mathcal{B}}_{k}, i.e. 𝜸kx=¯k\bm{\gamma}_{k}^{x}=\bar{\mathcal{B}}_{k}^{\dagger}. Then, we have ¯k(𝜸kx)𝜸kx=𝜸kx\bar{\mathcal{B}}_{k}^{\top}(\bm{\gamma}_{k}^{x})^{\top}\bm{\gamma}_{k}^{x}=\bm{\gamma}_{k}^{x}, which implies 𝒙~k(𝑰𝜸kx¯k)𝜸kx¯k𝒙~k=𝒙~k(𝜸kx¯k¯k(𝜸kx)𝜸kx¯k)𝒙~k=0\tilde{{\bm{x}}}_{k}^{\top}({\bm{I}}-\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k})^{\top}\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k}\tilde{{\bm{x}}}_{k}=\tilde{{\bm{x}}}_{k}^{\top}(\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k}-\bar{\mathcal{B}}_{k}^{\top}(\bm{\gamma}_{k}^{x})^{\top}\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k})\tilde{{\bm{x}}}_{k}=0. Notice that (33) holds for (𝒙~ku)(𝑰𝜸kx¯k)𝜸kx¯k𝒙~ku=0(\tilde{{\bm{x}}}_{k}^{u})^{\top}({\bm{I}}-\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k})^{\top}\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k}\tilde{{\bm{x}}}_{k}^{u}=0 as well. Similar to (23), we have 𝑷kx=(𝑰𝜸kx¯k)𝑷kx,u=𝑷kx,u𝜸kx¯k𝑷kx,u{\bm{P}}_{k}^{x}=({\bm{I}}-\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k}){\bm{P}}_{k}^{x,u}={\bm{P}}_{k}^{x,u}-\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k}{\bm{P}}_{k}^{x,u}. Given that 𝜸kx¯k𝑷kx,u=𝑷kx,u¯k(¯k𝑷kx,u¯k)1¯k𝑷kx,u>0\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k}{\bm{P}}_{k}^{x,u}={\bm{P}}_{k}^{x,u}\bar{\mathcal{B}}_{k}^{\top}(\bar{\mathcal{B}}_{k}{\bm{P}}_{k}^{x,u}\bar{\mathcal{B}}_{k}^{\top})^{-1}\bar{\mathcal{B}}_{k}{\bm{P}}_{k}^{x,u}>0 is positive definite, we have the desired result 𝑷kx<𝑷kx,u{\bm{P}}_{k}^{x}<{\bm{P}}_{k}^{x,u}. The relation for 𝑷kd{\bm{P}}_{k}^{d} can be obtained by a similar procedure. \blacksquare
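A small numerical check of the covariance ordering in Theorem 6 (illustrative values; the projection matrix 𝜸^x_k is formed as in (26) for a single active constraint):

```python
import numpy as np

# One active constraint; check P^x = (I - gamma B) P^{x,u} <= P^{x,u}.
P_u = np.diag([2.0, 1.0, 0.5])                  # unconstrained covariance
B_bar = np.array([[1.0, 1.0, 0.0]])             # active constraint row
gamma = P_u @ B_bar.T @ np.linalg.inv(B_bar @ P_u @ B_bar.T)
P = (np.eye(3) - gamma @ B_bar) @ P_u
assert np.allclose(P, P.T)                      # projected covariance is symmetric
assert np.min(np.linalg.eigvalsh(P_u - P)) >= -1e-12  # P <= P_u in the PSD order
```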

The properties in Theorem 6 are desirable for accurate estimation as well as attack detection. More specifically, since the false negative rate of a χ2\chi^{2} attack detector is a function of the estimate 𝝈^k\hat{\bm{\sigma}}_{k} and the covariance 𝚺k\bm{\Sigma}_{k} as in (1), more accurate estimates can reduce the false negative rate under the following assumption.
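For concreteness, a χ² detector of this kind can be sketched as follows (the function name is an assumption, and the hard-coded threshold 5.991 is the upper 5% critical value of the χ² law with 2 degrees of freedom, chosen purely for illustration):

```python
import numpy as np

CHI2_CRIT = 5.991  # upper 5% critical value of chi-square with 2 dof

def attack_flagged(d_hat, P_d, threshold=CHI2_CRIT):
    """Flag an attack when d_hat^T P_d^{-1} d_hat exceeds the threshold."""
    return float(d_hat @ np.linalg.inv(P_d) @ d_hat) > threshold
```

With the estimate held fixed, a smaller covariance inflates the test statistic, so a detection that the unconstrained algorithm misses (a false negative) can be recovered after the projection.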

Assumption 7

In the presence of the attack (𝐝k0{\bm{d}}_{k}\neq 0), the following two conditions hold: (i) 𝐝~ku<12𝐝k\|\tilde{{\bm{d}}}^{u}_{k}\|<\frac{1}{2}\|{\bm{d}}_{k}\|, and (ii) the ground truth 𝐝k{\bm{d}}_{k} satisfies the condition 𝐝k(𝐏kd,u)1𝐝k>χdf2(α){\bm{d}}_{k}^{\top}({\bm{P}}_{k}^{d,u})^{-1}{\bm{d}}_{k}>\chi^{2}_{df}(\alpha).

Remark 8

Assumption 7 implies that the unconstrained attack estimation error 𝐝~ku\tilde{{\bm{d}}}^{u}_{k} is small with respect to the ground truth 𝐝k{\bm{d}}_{k}, and that the normalized ground truth attack signal is larger than χdf2(α)\chi^{2}_{df}(\alpha); otherwise, it cannot be distinguished from the noise. Notice that Assumption 7 is needed only for the false negative rate analysis (Theorem 9), not for the estimation performance (Theorem 6) or the stability analysis in Section 4.2, where we show the stability of the attack estimation error 𝐝~k\tilde{{\bm{d}}}_{k} (Theorem 13), which in turn implies the stability of 𝐝~ku\tilde{{\bm{d}}}^{u}_{k}.

According to (1), we denote the false negative rates of the proposed CARE and the unconstrained algorithm as Fneg({𝒅^k},{𝑷kd})F_{neg}(\{\hat{{\bm{d}}}_{k}\},\{{\bm{P}}_{k}^{d}\}) and Fneg({𝒅^ku},{𝑷kd,u})F_{neg}(\{\hat{{\bm{d}}}_{k}^{u}\},\{{\bm{P}}_{k}^{d,u}\}), respectively. The following Theorem 9 demonstrates that the false negative rate of CARE is less than or equal to that of the unconstrained algorithm.

Theorem 9

Under Assumption 7, given a set of attack vectors {𝐝k}\{{\bm{d}}_{k}\}, the following inequality holds

Fneg({𝒅^k},{𝑷kd})\displaystyle F_{neg}(\{\hat{{\bm{d}}}_{k}\},\{{\bm{P}}_{k}^{d}\}) Fneg({𝒅^ku},{𝑷kd,u}).\displaystyle\leq F_{neg}(\{\hat{{\bm{d}}}_{k}^{u}\},\{{\bm{P}}_{k}^{d,u}\}). (34)

Proof: Proving (34) is equivalent to showing that the number of false negative test results of CARE is less than or equal to that of the unconstrained algorithm:

k(𝟏k)k(𝟏ku).\displaystyle\sum_{k}(\bm{1}_{k})\leq\sum_{k}(\bm{1}_{k}^{u}). (35)

If there is no projection (𝛄kd=0\bm{\gamma}_{k}^{d}=0), it holds that 𝐝^k=𝐝^ku\hat{{\bm{d}}}_{k}=\hat{{\bm{d}}}_{k}^{u} and 𝐏kd=𝐏kd,u{\bm{P}}_{k}^{d}={\bm{P}}_{k}^{d,u}. Similarly, if there is no attack (𝐝k=0{\bm{d}}_{k}=0), it holds that 𝟏k=𝟏ku=0\bm{1}_{k}=\bm{1}_{k}^{u}=0. Therefore, we have

k𝒦0(𝟏k)=k𝒦0(𝟏ku),\displaystyle\sum_{k\in\mathcal{K}_{0}}(\bm{1}_{k})=\sum_{k\in\mathcal{K}_{0}}(\bm{1}_{k}^{u}), (36)

where 𝒦0{k𝛄kd=0 or 𝐝k=0}\mathcal{K}_{0}\triangleq\{k\mid\bm{\gamma}_{k}^{d}=0\textit{ or }{\bm{d}}_{k}=0\}. In the rest of the proof, we consider the case for k𝒦{k𝛄kd0 and 𝐝k0}k\in\mathcal{K}\triangleq\{k\mid\bm{\gamma}_{k}^{d}\neq 0\textit{ and }{\bm{d}}_{k}\neq 0\}. Rewriting the normalized test value from CARE by substituting 𝐏kd{\bm{P}}_{k}^{d} with (𝐈𝛄kd𝒜¯k)𝐏kd,u(𝐈𝛄kd𝒜¯k)({\bm{I}}-\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k}){\bm{P}}_{k}^{d,u}({\bm{I}}-\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k})^{\top} according to (22), we have the following:

𝒅^k\displaystyle\hat{{\bm{d}}}_{k}^{\top} (𝑷kd)1𝒅^k=𝒅^k((𝑰𝜸kd𝒜¯k)𝑷kd,u(𝑰𝜸kd𝒜¯k))1𝒅^k\displaystyle({\bm{P}}_{k}^{d})^{-1}\hat{{\bm{d}}}_{k}=\hat{{\bm{d}}}_{k}^{\top}\big{(}({\bm{I}}-\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k}){\bm{P}}_{k}^{d,u}({\bm{I}}-\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k})^{\top}\big{)}^{-1}\hat{{\bm{d}}}_{k}
=\displaystyle= ((𝑰𝜸kd𝒜¯k)1𝒅^k)(𝑷kd,u)1((𝑰𝜸kd𝒜¯k)1𝒅^k)\displaystyle\big{(}({\bm{I}}-\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k})^{-1}\hat{{\bm{d}}}_{k}\big{)}^{\top}({\bm{P}}_{k}^{d,u})^{-1}\big{(}({\bm{I}}-\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k})^{-1}\hat{{\bm{d}}}_{k}\big{)}
=\displaystyle= (𝒅^ku+(𝑰𝜸kd𝒜¯k)1𝜸kd𝒃¯k)(𝑷kd,u)1\displaystyle\big{(}\hat{{\bm{d}}}_{k}^{u}+({\bm{I}}-\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k})^{-1}\bm{\gamma}_{k}^{d}\bar{{\bm{b}}}_{k}\big{)}^{\top}({\bm{P}}_{k}^{d,u})^{-1}
×(𝒅^ku+(𝑰𝜸kd𝒜¯k)1𝜸kd𝒃¯k),\displaystyle\times\big{(}\hat{{\bm{d}}}_{k}^{u}+({\bm{I}}-\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k})^{-1}\bm{\gamma}_{k}^{d}\bar{{\bm{b}}}_{k}\big{)}, (37)

where (𝐈𝛄kd𝒜¯k)1𝐝^k=𝐝^ku+(𝐈𝛄kd𝒜¯k)1𝛄kd𝐛¯k({\bm{I}}-\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k})^{-1}\hat{{\bm{d}}}_{k}=\hat{{\bm{d}}}_{k}^{u}+({\bm{I}}-\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k})^{-1}\bm{\gamma}_{k}^{d}\bar{{\bm{b}}}_{k} has been applied. Now we expand and rearrange (37), resulting in the following:

𝒅^k(𝑷kd)1𝒅^k=(𝒅^ku)(𝑷kd,u)1𝒅^ku\displaystyle\hat{{\bm{d}}}_{k}^{\top}({\bm{P}}_{k}^{d})^{-1}\hat{{\bm{d}}}_{k}=(\hat{{\bm{d}}}_{k}^{u})^{\top}({\bm{P}}_{k}^{d,u})^{-1}\hat{{\bm{d}}}_{k}^{u}
+\displaystyle+ ((𝑰𝜸kd𝒜¯k)1𝜸kd𝒃¯k)(𝑷kd,u)1((𝑰𝜸kd𝒜¯k)1𝜸kd𝒃¯k)\displaystyle\big{(}({\bm{I}}-\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k})^{-1}\bm{\gamma}_{k}^{d}\bar{{\bm{b}}}_{k}\big{)}^{\top}({\bm{P}}_{k}^{d,u})^{-1}\big{(}({\bm{I}}-\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k})^{-1}\bm{\gamma}_{k}^{d}\bar{{\bm{b}}}_{k}\big{)}
+\displaystyle+ 2(𝒅^ku)(𝑷kd,u)1((𝑰𝜸kd𝒜¯k)1𝜸kd𝒃¯k)\displaystyle 2(\hat{{\bm{d}}}_{k}^{u})^{\top}({\bm{P}}_{k}^{d,u})^{-1}(({\bm{I}}-\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k})^{-1}\bm{\gamma}_{k}^{d}\bar{{\bm{b}}}_{k})
=\displaystyle= (𝒅^ku)(𝑷kd,u)1𝒅^ku\displaystyle(\hat{{\bm{d}}}_{k}^{u})^{\top}({\bm{P}}_{k}^{d,u})^{-1}\hat{{\bm{d}}}_{k}^{u}
+\displaystyle+ (𝜸kd𝒃¯k)((𝑰𝜸kd𝒜¯k)𝑷kd,u(𝑰𝜸kd𝒜¯k))1𝜸kd𝒃¯k\displaystyle\big{(}\bm{\gamma}_{k}^{d}\bar{{\bm{b}}}_{k}\big{)}^{\top}\big{(}({\bm{I}}-\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k}){\bm{P}}_{k}^{d,u}({\bm{I}}-\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k})^{\top}\big{)}^{-1}\bm{\gamma}_{k}^{d}\bar{{\bm{b}}}_{k}
+\displaystyle+ 2(𝒅^ku)((𝑰𝜸kd𝒜¯k)𝑷kd,u)1𝜸kd𝒃¯k.\displaystyle 2(\hat{{\bm{d}}}_{k}^{u})^{\top}\big{(}({\bm{I}}-\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k}){\bm{P}}_{k}^{d,u}\big{)}^{-1}\bm{\gamma}_{k}^{d}\bar{{\bm{b}}}_{k}. (38)

Applying (22) and (23) to (38), we have

𝒅^k(𝑷kd)1𝒅^k=(𝒅^ku)(𝑷kd,u)1𝒅^ku\displaystyle\hat{{\bm{d}}}_{k}^{\top}({\bm{P}}_{k}^{d})^{-1}\hat{{\bm{d}}}_{k}=(\hat{{\bm{d}}}_{k}^{u})^{\top}({\bm{P}}_{k}^{d,u})^{-1}\hat{{\bm{d}}}_{k}^{u}
+\displaystyle+ (𝜸kd𝒃¯k)(𝑷kd)1𝜸kd𝒃¯k+2(𝒅^ku)(𝑷kd)1𝜸kd𝒃¯kresidue(res.).\displaystyle\underbrace{\big{(}\bm{\gamma}_{k}^{d}\bar{{\bm{b}}}_{k}\big{)}^{\top}({\bm{P}}_{k}^{d})^{-1}\bm{\gamma}_{k}^{d}\bar{{\bm{b}}}_{k}+2(\hat{{\bm{d}}}_{k}^{u})^{\top}({\bm{P}}_{k}^{d})^{-1}\bm{\gamma}_{k}^{d}\bar{{\bm{b}}}_{k}}_{\triangleq\ residue\ (res.)}. (39)

Since 𝐝^k\hat{{\bm{d}}}_{k} satisfies the input active constraint, we can substitute 𝐛¯k\bar{{\bm{b}}}_{k} with 𝒜¯k𝐝^k\bar{\mathcal{A}}_{k}\hat{{\bm{d}}}_{k}. Then the residue defined in (39) can be written as follows:

res.=(𝜸kd𝒜¯k𝒅^k)(𝑷kd)1𝜸kd𝒜¯k𝒅^k\displaystyle res.=\big{(}\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k}\hat{{\bm{d}}}_{k}\big{)}^{\top}({\bm{P}}_{k}^{d})^{-1}\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k}\hat{{\bm{d}}}_{k}
+2(𝒅^ku)(𝑷kd)1𝜸kd𝒜¯k𝒅^k.\displaystyle\ \ \ \ \ \ \ \ +2(\hat{{\bm{d}}}_{k}^{u})^{\top}({\bm{P}}_{k}^{d})^{-1}\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k}\hat{{\bm{d}}}_{k}. (40)

Expanding and rearranging (40), we have the following:

res.=2𝒅k𝑷k𝒅k2𝒅~k𝑷k𝒅k2(𝒅~ku)𝑷k𝒅k\displaystyle res.=2{\bm{d}}_{k}^{\top}{\bm{P}}^{\prime}_{k}{\bm{d}}_{k}-2\tilde{{\bm{d}}}_{k}^{\top}{\bm{P}}^{\prime}_{k}{{\bm{d}}}_{k}-2(\tilde{{\bm{d}}}_{k}^{u})^{\top}{\bm{P}}^{\prime}_{k}{{\bm{d}}}_{k} (41)
+2𝒅~k𝑷k𝒅~k+𝜸kd𝒜¯k2𝒅~k(𝑷kd)1𝒅~k\displaystyle+2\tilde{{\bm{d}}}_{k}^{\top}{\bm{P}}^{\prime}_{k}\tilde{{\bm{d}}}_{k}+\|\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k}\|^{2}\tilde{{\bm{d}}}_{k}^{\top}({\bm{P}}_{k}^{d})^{-1}\tilde{{\bm{d}}}_{k} (42)
+𝜸kd𝒜¯k2𝒅k(𝑷kd)1𝒅k2𝜸kd𝒜¯k2𝒅k(𝑷kd)1𝒅~k,\displaystyle+\|\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k}\|^{2}{\bm{d}}_{k}^{\top}({\bm{P}}_{k}^{d})^{-1}{\bm{d}}_{k}-2\|\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k}\|^{2}{{\bm{d}}}_{k}^{\top}({\bm{P}}_{k}^{d})^{-1}\tilde{{\bm{d}}}_{k}, (43)

where 𝐏k(𝛄kd𝒜¯k)(𝐏kd)1>0{\bm{P}}^{\prime}_{k}\triangleq(\bm{\gamma}_{k}^{d}\bar{\mathcal{A}}_{k})^{\top}({\bm{P}}_{k}^{d})^{-1}>0. Using the result 𝐝~k<𝐝~ku\|\tilde{{\bm{d}}}_{k}\|<\|\tilde{{\bm{d}}}^{u}_{k}\| from Theorem 6 and the first inequality in Assumption 7, we obtain 𝐝~k<𝐝~ku<12𝐝k\|\tilde{{\bm{d}}}_{k}\|<\|\tilde{{\bm{d}}}_{k}^{u}\|<\frac{1}{2}\|{\bm{d}}_{k}\|. Then we have res.>0res.>0, since each of the terms in (41)-(43) is positive. Therefore, from (39), we have

(𝒅^ku)(𝑷kd,u)1𝒅^ku<𝒅^k(𝑷kd)1𝒅^k.\displaystyle(\hat{{\bm{d}}}_{k}^{u})^{\top}({\bm{P}}_{k}^{d,u})^{-1}\hat{{\bm{d}}}_{k}^{u}<\hat{{\bm{d}}}_{k}^{\top}({\bm{P}}_{k}^{d})^{-1}\hat{{\bm{d}}}_{k}. (44)

Considering the condition in (44), we can partition the set 𝒦=i=13𝒦i\mathcal{K}=\cup_{i=1}^{3}\mathcal{K}_{i} into three subsets as follows:

𝒦1\displaystyle\mathcal{K}_{1} {k(𝒅^ku)(𝑷kd,u)1𝒅^ku<𝒅^k(𝑷kd)1𝒅^kχdf2(α)}\displaystyle\triangleq\big{\{}k\mid(\hat{{\bm{d}}}_{k}^{u})^{\top}({\bm{P}}_{k}^{d,u})^{-1}\hat{{\bm{d}}}_{k}^{u}<\hat{{\bm{d}}}_{k}^{\top}({\bm{P}}_{k}^{d})^{-1}\hat{{\bm{d}}}_{k}\leq\chi^{2}_{df}(\alpha)\big{\}}
𝒦2\displaystyle\mathcal{K}_{2} {kχdf2(α)<(𝒅^ku)(𝑷kd,u)1𝒅^ku<𝒅^k(𝑷kd)1𝒅^k}\displaystyle\triangleq\big{\{}k\mid\chi^{2}_{df}(\alpha)<(\hat{{\bm{d}}}_{k}^{u})^{\top}({\bm{P}}_{k}^{d,u})^{-1}\hat{{\bm{d}}}_{k}^{u}<\hat{{\bm{d}}}_{k}^{\top}({\bm{P}}_{k}^{d})^{-1}\hat{{\bm{d}}}_{k}\big{\}}
𝒦3\displaystyle\mathcal{K}_{3} {k(𝒅^ku)(𝑷kd,u)1𝒅^kuχdf2(α)<𝒅^k(𝑷kd)1𝒅^k}.\displaystyle\triangleq\big{\{}k\mid(\hat{{\bm{d}}}_{k}^{u})^{\top}({\bm{P}}_{k}^{d,u})^{-1}\hat{{\bm{d}}}_{k}^{u}\leq\chi^{2}_{df}(\alpha)<\hat{{\bm{d}}}_{k}^{\top}({\bm{P}}_{k}^{d})^{-1}\hat{{\bm{d}}}_{k}\big{\}}.

According to (2), we have

k𝒦i(𝟏k)\displaystyle\sum_{k\in\mathcal{K}_{i}}(\bm{1}_{k}) =k𝒦i(𝟏ku) for i=1,2 and\displaystyle=\sum_{k\in\mathcal{K}_{i}}(\bm{1}_{k}^{u})\textit{ for }i=1,2\textit{ and } (45)
k𝒦3(𝟏k)\displaystyle\sum_{k\in\mathcal{K}_{3}}(\bm{1}_{k}) <k𝒦3(𝟏ku).\displaystyle<\sum_{k\in\mathcal{K}_{3}}(\bm{1}_{k}^{u}). (46)

Therefore, from (36), (45) and (46) we conclude that (35) holds, which completes the proof. \blacksquare

4.2 Stability Analysis

Although the projection reduces the estimation errors and their error covariances as shown in Theorem 6, it sacrifices unbiasedness, as indicated by (28) and (29). In the absence of the projection, Algorithm 1 reduces to the algorithm in yong2016unified , which is unbiased; once the projection is applied, however, the traditional stability analysis for unbiased estimation becomes invalid.

To prove the recursive stability of the biased estimation, it is essential to construct a recursive relation between the current estimation error 𝒙~k\tilde{{\bm{x}}}_{k} and the previous estimation error 𝒙~k1\tilde{{\bm{x}}}_{k-1}. However, the construction is not straightforward compared to that in filtering with equality constraints simon2002kalman ; yong2015simultaneous or filtering without constraints anderson1981detectability ; yong2016unified . In particular, it is difficult to find the exact recursive relation between 𝒙~k\tilde{{\bm{x}}}_{k} and 𝒙~ku\tilde{{\bm{x}}}_{k}^{u}, since 𝒙~k\tilde{{\bm{x}}}_{k} is also a function of 𝒙^ku\hat{{\bm{x}}}^{u}_{k}, i.e. 𝒙~k=𝒙~ku𝜸kx(¯k𝒙^ku𝒄¯k)\tilde{{\bm{x}}}_{k}=\tilde{{\bm{x}}}^{u}_{k}-\bm{\gamma}_{k}^{x}(\bar{\mathcal{B}}_{k}\hat{{\bm{x}}}^{u}_{k}-\bar{{\bm{c}}}_{k}). Then, we have 𝒙~k(𝑰𝜸kx¯k)𝒙~ku\tilde{{\bm{x}}}_{k}\neq({\bm{I}}-\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k})\tilde{{\bm{x}}}^{u}_{k}, since the ground truth satisfies only the inequality ¯k𝒙k𝒄¯k\bar{\mathcal{B}}_{k}{\bm{x}}_{k}\leq\bar{{\bm{c}}}_{k}. To address this issue, we decompose the estimation error 𝒙~k\tilde{{\bm{x}}}_{k} into two orthogonal spaces as in (30). By Lemma 5, (30) becomes 𝒙~k=𝚪k𝒙~ku,\tilde{{\bm{x}}}_{k}=\bm{\Gamma}_{k}\tilde{{\bm{x}}}_{k}^{u}, where 𝚪k(𝑰𝜸kx¯k)+𝜶k𝜸kx¯k\bm{\Gamma}_{k}\triangleq({\bm{I}}-\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k})+\bm{\alpha}_{k}\bm{\gamma}_{k}^{x}\bar{\mathcal{B}}_{k}. Note that 𝜶k\bm{\alpha}_{k} is an unknown matrix and thus cannot be used in the algorithm; we use it only for analytical purposes. Now, under the following assumptions, we present the stability analysis of the proposed Algorithm 1.

Assumption 10

We have rank(k)<n\operatorname*{rank}(\mathcal{B}_{k})<n k\forall k. There exist a¯\bar{a}, c¯y\bar{c}_{y}, g¯\bar{g}, m¯\bar{m}, q¯\underaccent{\bar}{q}, β¯\underaccent{\bar}{\beta}, β¯\bar{\beta} >0>0, such that the following holds for all k0k\geq 0:

𝑨ka¯,\displaystyle\|{\bm{A}}_{k}\|\leq\bar{a}, 𝑪kc¯y,\displaystyle\|{\bm{C}}_{k}\|\leq\bar{c}_{y}, 𝑮kg¯,\displaystyle\|{\bm{G}}_{k}\|\leq\bar{g},
𝑴km¯,\displaystyle\|{\bm{M}}_{k}\|\leq\bar{m}, 𝑸kq¯𝑰.\displaystyle{\bm{Q}}_{k}\geq\underaccent{\bar}{q}{\bm{I}}.
Remark 11

In Assumption 10, it is assumed that rank(k)<n\operatorname*{rank}(\mathcal{B}_{k})<n k\forall k, i.e., the number of state constraints is less than the number of state variables. The rest of Assumption 10 is widely used in the literature on extended Kalman filtering kluge2010stochastic and nonlinear input and state estimation kim2017attack .

To show the boundedness of the unconstrained state error covariance 𝑷kx,u{\bm{P}}_{k}^{x,u}, we first define the matrices 𝑨¯k1(𝑰𝑮k1𝑴k𝑪k)𝑨k1\bar{{\bm{A}}}_{k-1}\triangleq({\bm{I}}-{\bm{G}}_{k-1}{\bm{M}}_{k}{\bm{C}}_{k}){\bm{A}}_{k-1} and 𝑨~k1(𝑰𝑮k1𝑴k(𝑪k𝑮k1𝑴k)1𝑪k)𝑨¯k1𝚪¯k1\tilde{{\bm{A}}}_{k-1}\triangleq({\bm{I}}-{\bm{G}}_{k-1}{\bm{M}}_{k}({\bm{C}}_{k}{\bm{G}}_{k-1}{\bm{M}}_{k})^{-1}{\bm{C}}_{k})\bar{{\bm{A}}}_{k-1}\bar{\bm{\Gamma}}_{k-1}.

Theorem 12

Let the pair (𝐂k,𝐀~k1)({\bm{C}}_{k},\tilde{{\bm{A}}}_{k-1}) be uniformly detectable (see anderson1981detectability for the definition of uniform detectability). Then the unconstrained state error covariance 𝐏kx,u{\bm{P}}_{k}^{x,u} is bounded, i.e., there exist non-negative constants p¯\underaccent{\bar}{p} and p¯\bar{p} such that p¯𝐈𝐏kx,up¯𝐈\underaccent{\bar}{p}{\bm{I}}\leq{\bm{P}}_{k}^{x,u}\leq\bar{p}{\bm{I}} for all kk.

Proof: The unconstrained state estimation error can be found by

𝒙~ku=\displaystyle\tilde{{\bm{x}}}^{u}_{k}= (𝑰𝑳k𝑪k)𝑨¯k1𝒙~k1\displaystyle({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})\bar{{\bm{A}}}_{k-1}\tilde{{\bm{x}}}_{k-1}
+(𝑰𝑳k𝑪k)𝒘¯k1+𝑳¯k𝒗k,\displaystyle+({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})\bar{{\bm{w}}}_{k-1}+\bar{{\bm{L}}}_{k}{\bm{v}}_{k}, (47)

where 𝒘¯k1(𝑰𝑮k1𝑴k𝑪k)𝒘k1\bar{{\bm{w}}}_{k-1}\triangleq({\bm{I}}-{\bm{G}}_{k-1}{\bm{M}}_{k}{\bm{C}}_{k}){\bm{w}}_{k-1}, and 𝑳¯k𝑳k𝑪k𝑮k1𝑴k𝑳k𝑮k1𝑴k\bar{{\bm{L}}}_{k}\triangleq{\bm{L}}_{k}{\bm{C}}_{k}{\bm{G}}_{k-1}{\bm{M}}_{k}-{\bm{L}}_{k}-{\bm{G}}_{k-1}{\bm{M}}_{k}. Therefore, the update law of unconstrained covariance is calculated from (47) and (27) as follows:

𝑷kx,u=\displaystyle{\bm{P}}_{k}^{x,u}= (𝑰𝑳k𝑪k)𝑨¯k1𝚪¯k1𝑷k1x,u𝚪¯k1\displaystyle({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})\bar{{\bm{A}}}_{k-1}\bar{\bm{\Gamma}}_{k-1}{\bm{P}}_{k-1}^{x,u}\bar{\bm{\Gamma}}_{k-1}^{\top}
×𝑨¯k1(𝑰𝑳k𝑪k)+𝑳¯k𝑹k𝑳¯k\displaystyle\times\bar{{\bm{A}}}_{k-1}^{\top}({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})^{\top}+\bar{{\bm{L}}}_{k}{\bm{R}}_{k}\bar{{\bm{L}}}_{k}^{\top}
+(𝑰𝑳k𝑪k)𝑸¯k1(𝑰𝑳k𝑪k),\displaystyle+({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})\bar{{\bm{Q}}}_{k-1}({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})^{\top}, (48)

where 𝑸¯k1𝔼[𝒘¯k1(𝒘¯k1)]\bar{{\bm{Q}}}_{k-1}\triangleq\mathbb{E}[\bar{{\bm{w}}}_{k-1}(\bar{{\bm{w}}}_{k-1})^{\top}]. The covariance update law (48) is identical to the covariance update law of the Kalman filtering solution of the transformed system

𝒙k=𝑨¯k1𝚪¯k1𝒙k1+𝒘^k1𝒚k=𝑪k𝒙k+𝒗k,\displaystyle\begin{split}{\bm{x}}_{k}&=\bar{{\bm{A}}}_{k-1}\bar{\bm{\Gamma}}_{k-1}{\bm{x}}_{k-1}+\hat{{\bm{w}}}_{k-1}\\ {\bm{y}}_{k}&={\bm{C}}_{k}{\bm{x}}_{k}+{\bm{v}}_{k},\end{split} (49)

where 𝒘^k1𝑮k1𝑴k𝑪k𝒘k1𝑮k1𝑴k𝒗k+𝒘k1\hat{{\bm{w}}}_{k-1}\triangleq-{\bm{G}}_{k-1}{\bm{M}}_{k}{\bm{C}}_{k}{\bm{w}}_{k-1}-{\bm{G}}_{k-1}{\bm{M}}_{k}{\bm{v}}_{k}+{\bm{w}}_{k-1}. However, in the transformed system, the process noise and measurement noise are correlated, i.e., 𝔼[𝒘^k1𝒗k]=𝑮k1𝑴k𝑹k0\mathbb{E}[\hat{{\bm{w}}}_{k-1}{\bm{v}}_{k}^{\top}]=-{\bm{G}}_{k-1}{\bm{M}}_{k}{\bm{R}}_{k}\neq 0. To decouple the noises, we add a zero term 𝒁k(𝒚k𝑪k(𝑨¯k1𝚪¯k1𝒙k+𝒘^k1)𝒗k){\bm{Z}}_{k}({\bm{y}}_{k}-{\bm{C}}_{k}(\bar{{\bm{A}}}_{k-1}\bar{\bm{\Gamma}}_{k-1}{\bm{x}}_{k}+\hat{{\bm{w}}}_{k-1})-{\bm{v}}_{k}) to the state equation in (49), and obtain the following:

𝒙k\displaystyle{\bm{x}}_{k} =𝑨~k1𝒙k1+𝒖~k1+𝒘~k1,\displaystyle=\tilde{{\bm{A}}}_{k-1}{\bm{x}}_{k-1}+\tilde{{\bm{u}}}_{k-1}+\tilde{{\bm{w}}}_{k-1},

where \tilde{{\bm{A}}}_{k-1}=({\bm{I}}-{\bm{Z}}_{k}{\bm{C}}_{k})\bar{{\bm{A}}}_{k-1}\bar{\bm{\Gamma}}_{k-1}, \tilde{{\bm{u}}}_{k-1}\triangleq{\bm{Z}}_{k}{\bm{y}}_{k} is the known input, and \tilde{{\bm{w}}}_{k-1}\triangleq({\bm{I}}-{\bm{Z}}_{k}{\bm{C}}_{k})\hat{{\bm{w}}}_{k-1}-{\bm{Z}}_{k}{\bm{v}}_{k} is the new process noise. The new process noise and the measurement noise can be decoupled by choosing the gain {\bm{Z}}_{k} such that \mathbb{E}[\tilde{{\bm{w}}}_{k-1}{\bm{v}}_{k}^{\top}]=0; one such gain is {\bm{Z}}_{k}={\bm{G}}_{k-1}{\bm{M}}_{k}({\bm{C}}_{k}{\bm{G}}_{k-1}{\bm{M}}_{k})^{-1}. Then, the system (49) becomes

𝒙k\displaystyle{\bm{x}}_{k} =𝑨~k1𝒙k1+𝒖~k1+𝒘~k1\displaystyle=\tilde{{\bm{A}}}_{k-1}{\bm{x}}_{k-1}+\tilde{{\bm{u}}}_{k-1}+\tilde{{\bm{w}}}_{k-1}
𝒚k\displaystyle{\bm{y}}_{k} =𝑪k𝒙k+𝒗k.\displaystyle={\bm{C}}_{k}{\bm{x}}_{k}+{\bm{v}}_{k}.

Since the pair (𝑪k,𝑨~k1)({\bm{C}}_{k},\tilde{{\bm{A}}}_{k-1}) is uniformly detectable, by Theorem 5.2 in anderson1981detectability , the statement holds. \blacksquare

Theorem 12 shows that uniform detectability of the transformed system is a sufficient condition for the boundedness of {\bm{P}}_{k}^{x,u}. Under this boundedness assumption from Theorem 12, we show in Theorem 13 that the constrained estimation errors \tilde{{\bm{x}}}_{k} and \tilde{{\bm{d}}}_{k} are practically exponentially stable in mean square.

Theorem 13

Consider Assumption 10 and assume that there exist non-negative constants p¯\underaccent{\bar}{p} and p¯\bar{p} such that p¯𝐈𝐏kx,up¯𝐈\underaccent{\bar}{p}{\bm{I}}\leq{\bm{P}}_{k}^{x,u}\leq\bar{p}{\bm{I}} holds for all kk. Then the estimation errors 𝐱~k\tilde{{\bm{x}}}_{k} and 𝐝~k\tilde{{\bm{d}}}_{k} are practically exponentially stable in mean square, i.e., there exist constants ax,ad,bx,bd,cx,cda_{x},a_{d},b_{x},b_{d},c_{x},c_{d} such that for all kk

𝔼[𝒙~k2]\displaystyle\mathbb{E}[\|\tilde{{\bm{x}}}_{k}\|^{2}] axebxk𝔼[𝒙~02]+cx\displaystyle\leq a_{x}e^{-b_{x}k}\mathbb{E}[\|\tilde{{\bm{x}}}_{0}\|^{2}]+c_{x}
𝔼[𝒅~k2]\displaystyle\mathbb{E}[\|\tilde{{\bm{d}}}_{k}\|^{2}] adebdk𝔼[𝒅~02]+cd.\displaystyle\leq a_{d}e^{-b_{d}k}\mathbb{E}[\|\tilde{{\bm{d}}}_{0}\|^{2}]+c_{d}.

Proof: Consider the Lyapunov function Vk=(𝒙~ku)(𝑷kx,u)1(𝒙~ku).V_{k}=(\tilde{{\bm{x}}}^{u}_{k})^{\top}({\bm{P}}_{k}^{x,u})^{-1}(\tilde{{\bm{x}}}^{u}_{k}). After substituting (47) into the Lyapunov function, we obtain

Vk=\displaystyle V_{k}= (𝒙~k1u)𝚪k1𝑨¯k1(𝑰𝑳k𝑪k)(𝑷kx,u)1\displaystyle(\tilde{{\bm{x}}}^{u}_{k-1})^{\top}\bm{\Gamma}_{k-1}^{\top}\bar{{\bm{A}}}_{k-1}^{\top}({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})^{\top}({\bm{P}}_{k}^{x,u})^{-1}
×(𝑰𝑳k𝑪k)𝑨¯k1𝚪k1𝒙~k1u\displaystyle\times({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})\bar{{\bm{A}}}_{k-1}\bm{\Gamma}_{k-1}\tilde{{\bm{x}}}^{u}_{k-1}
+2(𝒙~k1u)𝚪k1𝑨¯k1(𝑰𝑳k𝑪k)\displaystyle+2(\tilde{{\bm{x}}}^{u}_{k-1})^{\top}\bm{\Gamma}_{k-1}^{\top}\bar{{\bm{A}}}_{k-1}^{\top}({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})^{\top}
×(𝑷kx,u)1(𝑰𝑳k𝑪k)𝒘¯k1\displaystyle\times({\bm{P}}_{k}^{x,u})^{-1}({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})\bar{{\bm{w}}}_{k-1}
+2(𝒙~k1u)𝚪k1𝑨¯k1(𝑰𝑳k𝑪k)(𝑷kx,u)1𝑳¯k𝒗k\displaystyle+2(\tilde{{\bm{x}}}^{u}_{k-1})^{\top}\bm{\Gamma}_{k-1}^{\top}\bar{{\bm{A}}}_{k-1}^{\top}({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})^{\top}({\bm{P}}_{k}^{x,u})^{-1}\bar{{\bm{L}}}_{k}{\bm{v}}_{k}
+𝒘¯k1(𝑰𝑳k𝑪k)(𝑷kx,u)1(𝑰𝑳k𝑪k)𝒘¯k1\displaystyle+\bar{{\bm{w}}}_{k-1}^{\top}({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})^{\top}({\bm{P}}_{k}^{x,u})^{-1}({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})\bar{{\bm{w}}}_{k-1}
+2\bar{{\bm{w}}}_{k-1}^{\top}({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})^{\top}({\bm{P}}_{k}^{x,u})^{-1}\bar{{\bm{L}}}_{k}{\bm{v}}_{k}
+{\bm{v}}_{k}^{\top}\bar{{\bm{L}}}_{k}^{\top}({\bm{P}}_{k}^{x,u})^{-1}\bar{{\bm{L}}}_{k}{\bm{v}}_{k}. (50)

By the uncorrelatedness property papoulis2002probability of {\bm{w}}_{k-1}, {\bm{v}}_{k}, and \tilde{{\bm{x}}}^{u}_{k-1}, taking the expectation of (50) yields

\mathbb{E}[V_{k}]=\mathbb{E}[(\tilde{{\bm{x}}}_{k-1}^{u})^{\top}\bm{\Gamma}_{k-1}^{\top}\bar{{\bm{A}}}_{k-1}^{\top}({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})^{\top}({\bm{P}}_{k}^{x,u})^{-1}
\times({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})\bar{{\bm{A}}}_{k-1}\bm{\Gamma}_{k-1}\tilde{{\bm{x}}}_{k-1}^{u}]
+\mathbb{E}[\bar{{\bm{w}}}_{k-1}^{\top}({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})^{\top}({\bm{P}}_{k}^{x,u})^{-1}({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})\bar{{\bm{w}}}_{k-1}]
+\mathbb{E}[{\bm{v}}_{k}^{\top}\bar{{\bm{L}}}_{k}^{\top}({\bm{P}}_{k}^{x,u})^{-1}\bar{{\bm{L}}}_{k}{\bm{v}}_{k}]. (51)

The following statements are formulated to deal with each term in (51).

Claim 14

There exists a constant δ(q¯a¯2p¯+1)1(0,1)\delta\triangleq(\frac{\underaccent{\bar}{q}^{\prime}}{\bar{a}^{\prime 2}\bar{p}}+1)^{-1}\in(0,1), such that 𝚪k1𝐀¯k1(𝐈𝐋k𝐂k)(𝐏kx,u)1(𝐈𝐋k𝐂k)𝐀¯k1𝚪k1<δ(𝐏k1x,u)1\bm{\Gamma}_{k-1}^{\top}\bar{{\bm{A}}}_{k-1}^{\top}({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})^{\top}({\bm{P}}_{k}^{x,u})^{-1}({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})\bar{{\bm{A}}}_{k-1}\bm{\Gamma}_{k-1}<\delta({\bm{P}}_{k-1}^{x,u})^{-1}.

Proof: Since \operatorname{rank}(\mathcal{B}_{k})<n for all k, it holds that \operatorname{rank}(\bar{\mathcal{B}}_{k})<n for all k and thus \bar{\bm{\Gamma}}\neq 0. Therefore, \|\bar{\bm{\Gamma}}_{k-1}\|=1 because \bm{\gamma}_{k-1}^{x}\bar{\mathcal{B}}_{k-1} is a projection matrix. From Assumption 10 and Theorem 12, we have \bar{{\bm{Q}}}_{k-1}\geq\underaccent{\bar}{q}^{\prime}{\bm{I}} and {\bm{P}}_{k-1}^{x}\leq\bar{p}{\bm{I}}. Since \|\bar{{\bm{A}}}_{k-1}\| is upper bounded by \bar{a}^{\prime}\triangleq\bar{a}(1+\bar{g}\bar{m}\bar{c}_{y}), it follows that \bar{{\bm{A}}}_{k-1}\bar{{\bm{A}}}_{k-1}^{\top}\leq\bar{a}^{\prime 2}{\bm{I}}. Then we have

𝑸¯k1\displaystyle\bar{{\bm{Q}}}_{k-1} q¯𝑨¯k1𝑨¯k1a¯2q¯a¯2𝑨¯k1𝚪¯k1𝚪¯k1𝑨¯k1\displaystyle\geq\underaccent{\bar}{q}^{\prime}\frac{\bar{{\bm{A}}}_{k-1}\bar{{\bm{A}}}_{k-1}^{\top}}{\bar{a}^{\prime 2}}\geq\frac{\underaccent{\bar}{q}^{\prime}}{\bar{a}^{\prime 2}}\bar{{\bm{A}}}_{k-1}\bar{\bm{\Gamma}}_{k-1}\bar{\bm{\Gamma}}_{k-1}^{\top}\bar{{\bm{A}}}_{k-1}^{\top}
q¯a¯2p¯𝑨¯k1𝚪¯k1𝑷k1x,u𝚪¯k1𝑨¯k1.\displaystyle\geq\frac{\underaccent{\bar}{q}^{\prime}}{\bar{a}^{\prime 2}\bar{p}}\bar{{\bm{A}}}_{k-1}\bar{\bm{\Gamma}}_{k-1}{\bm{P}}_{k-1}^{x,u}\bar{\bm{\Gamma}}_{k-1}^{\top}\bar{{\bm{A}}}_{k-1}^{\top}. (52)

Substitution of (52) into (48) yields

𝑷kx,u(1+q¯a¯2p¯)(𝑰𝑳k𝑪k)𝑨¯k1𝚪¯k1𝑷k1x,u𝚪¯k1𝑨¯k1\displaystyle{\bm{P}}_{k}^{x,u}-(1+\frac{\underaccent{\bar}{q}^{\prime}}{\bar{a}^{\prime 2}\bar{p}})({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})\bar{{\bm{A}}}_{k-1}\bar{\bm{\Gamma}}_{k-1}{\bm{P}}_{k-1}^{x,u}\bar{\bm{\Gamma}}_{k-1}^{\top}\bar{{\bm{A}}}_{k-1}^{\top}
×(𝑰𝑳k𝑪k)>0,\displaystyle\times({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})^{\top}>0, (53)

where the inequality holds because {\bm{R}}_{k}>0. As (1+\frac{\underaccent{\bar}{q}^{\prime}}{\bar{a}^{\prime 2}\bar{p}}){\bm{P}}_{k-1}^{x,u}>0, the inverse of the left-hand side of (53) exists and is symmetric positive definite. By the matrix inversion lemma tylavsky1986generalization , it follows that

(1+q¯a¯2p¯)1(𝑷k1x,u)1𝚪¯k1𝑨¯k1(𝑰𝑳k𝑪k)\displaystyle(1+\frac{\underaccent{\bar}{q}^{\prime}}{\bar{a}^{\prime 2}\bar{p}})^{-1}({\bm{P}}_{k-1}^{x,u})^{-1}-\bar{\bm{\Gamma}}_{k-1}^{\top}\bar{{\bm{A}}}_{k-1}^{\top}({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})^{\top}
×(𝑷kx,u)1(𝑰𝑳k𝑪k)𝑨¯k1𝚪¯k1>0.\displaystyle\times({\bm{P}}_{k}^{x,u})^{-1}({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})\bar{{\bm{A}}}_{k-1}\bar{\bm{\Gamma}}_{k-1}>0. (54)

Since 𝜸k1x¯k1\bm{\gamma}_{k-1}^{x}\bar{\mathcal{B}}_{k-1} is a positive definite matrix, and 𝜶k11\|\bm{\alpha}_{k-1}\|\leq 1, we have

𝑰𝜸k1x¯k1\displaystyle{\bm{I}}-\bm{\gamma}_{k-1}^{x}\bar{\mathcal{B}}_{k-1} 𝚪k1\displaystyle\leq\bm{\Gamma}_{k-1}
=𝑰𝜸k1x¯k1+𝜶k1𝜸k1x¯k1𝑰,\displaystyle={\bm{I}}-\bm{\gamma}_{k-1}^{x}\bar{\mathcal{B}}_{k-1}+\bm{\alpha}_{k-1}\bm{\gamma}_{k-1}^{x}\bar{\mathcal{B}}_{k-1}\leq{\bm{I}},

which implies 𝚪k11\|\bm{\Gamma}_{k-1}\|\leq 1. Since 𝚪¯k1=1\|\bar{\bm{\Gamma}}_{k-1}\|=1 and 𝚪k11\|\bm{\Gamma}_{k-1}\|\leq 1, inequality (54) proves the claim. \square

Claim 15

There exists a positive constant c\triangleq\bar{p}(1+\bar{l}\bar{c}_{y})^{2}(1+\bar{g}\bar{m}\bar{c}_{2})^{2}\bar{q}\ \operatorname{rank}({\bm{Q}}_{k-1})+\bar{p}(\bar{l}\bar{c}_{y}\bar{g}\bar{m}+\bar{l}+\bar{g}\bar{m})^{2}\bar{r}_{2}\ \operatorname{rank}({\bm{R}}_{k}) such that

\mathbb{E}[\|({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})^{\top}({\bm{P}}_{k}^{x,u})^{-1}({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})\|\|\bar{{\bm{w}}}_{k-1}\|^{2}]
+\mathbb{E}[\|\bar{{\bm{L}}}_{k}^{\top}({\bm{P}}_{k}^{x,u})^{-1}\bar{{\bm{L}}}_{k}\|\|{\bm{v}}_{k}\|^{2}]\leq c.

Proof: The first term is bounded by

𝔼\displaystyle\mathbb{E} [(𝑰𝑳k𝑪k)(𝑷kx,u)1(𝑰𝑳k𝑪k)𝒘¯k12]\displaystyle[\|({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})^{\top}({\bm{P}}_{k}^{x,u})^{-1}({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})\|\|\bar{{\bm{w}}}_{k-1}\|^{2}]
=\displaystyle= 𝔼[(𝑰𝑳k𝑪k)(𝑷kx,u)1(𝑰𝑳k𝑪k)\displaystyle\mathbb{E}[\|({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})^{\top}({\bm{P}}_{k}^{x,u})^{-1}({\bm{I}}-{\bm{L}}_{k}{\bm{C}}_{k})\|
(𝑰𝑮k1𝑴k𝑪k)2𝒘k12]\displaystyle\|({\bm{I}}-{\bm{G}}_{k-1}{\bm{M}}_{k}{\bm{C}}_{k})\|^{2}\|{\bm{w}}_{k-1}\|^{2}]
p¯(1+l¯c¯y)2(1+g¯m¯c¯2)2q¯rank(𝑸k1),\displaystyle\leq\bar{p}(1+\bar{l}\bar{c}_{y})^{2}(1+\bar{g}\bar{m}\bar{c}_{2})^{2}\bar{q}\ \operatorname*{rank}({\bm{Q}}_{k-1}),

where we apply \mathbb{E}[\|{\bm{w}}_{k-1}\|^{2}]=\operatorname{tr}({\bm{Q}}_{k-1})\leq\bar{q}\ \operatorname{rank}({\bm{Q}}_{k-1}). Likewise, the second term is bounded by

\mathbb{E}[\|\bar{{\bm{L}}}_{k}^{\top}({\bm{P}}_{k}^{x,u})^{-1}\bar{{\bm{L}}}_{k}\|\|{\bm{v}}_{k}\|^{2}]
\leq\bar{p}(\bar{l}\bar{c}_{y}\bar{g}\bar{m}+\bar{l}+\bar{g}\bar{m})^{2}\bar{r}_{2}\ \operatorname{rank}({\bm{R}}_{k}).

This completes the proof. \square

Through Claims 14 and 15, (51) becomes

𝔼[Vk]\displaystyle\mathbb{E}[V_{k}] δ𝔼[Vk1]+c.\displaystyle\leq\delta\mathbb{E}[V_{k-1}]+c.

By recursively applying the above relation, we have

𝔼[Vk]\displaystyle\mathbb{E}[V_{k}] δk𝔼[V0]+i=0k1δicδk𝔼[V0]+i=0δic\displaystyle\leq\delta^{k}\mathbb{E}[V_{0}]+\sum_{i=0}^{k-1}\delta^{i}c\leq\delta^{k}\mathbb{E}[V_{0}]+\sum_{i=0}^{\infty}\delta^{i}c
=δk𝔼[V0]+c1δ,\displaystyle=\delta^{k}\mathbb{E}[V_{0}]+\frac{c}{1-\delta},

which implies practical exponential stability of the estimation error

𝔼[𝒙~ku2]\displaystyle\mathbb{E}[\|\tilde{{\bm{x}}}_{k}^{u}\|^{2}] p¯p¯δk𝔼[𝒙~0u2]+cp¯(1δ)\displaystyle\leq\frac{\bar{p}}{\underaccent{\bar}{p}}\delta^{k}\mathbb{E}[\|\tilde{{\bm{x}}}_{0}^{u}\|^{2}]+\frac{c\bar{p}}{(1-\delta)}
=axebxk𝔼[𝒙~0u2]+cx,\displaystyle=a_{x}^{\prime}e^{-b_{x}^{\prime}k}\mathbb{E}[\|\tilde{{\bm{x}}}_{0}^{u}\|^{2}]+c_{x}^{\prime},

where (𝐱~ku)(𝐏kx)1(𝐱~ku)λmin((𝐏kx)1)𝐱~ku21p¯𝐱~ku2(\tilde{{\bm{x}}}_{k}^{u})^{\top}({\bm{P}}_{k}^{x})^{-1}(\tilde{{\bm{x}}}_{k}^{u})\geq\lambda_{\min}{(({\bm{P}}_{k}^{x})^{-1})}\|\tilde{{\bm{x}}}_{k}^{u}\|^{2}\geq\frac{1}{\bar{p}}\|\tilde{{\bm{x}}}_{k}^{u}\|^{2} and (𝐱~0u)(𝐏0x)1𝐱~0uλmax((𝐏0x)1)𝐱~0u21p¯𝐱~0u2(\tilde{{\bm{x}}}_{0}^{u})^{\top}({\bm{P}}_{0}^{x})^{-1}\tilde{{\bm{x}}}_{0}^{u}\leq\lambda_{\max}{(({\bm{P}}_{0}^{x})^{-1})}\|\tilde{{\bm{x}}}_{0}^{u}\|^{2}\leq\frac{1}{\underaccent{\bar}{p}}\|\tilde{{\bm{x}}}_{0}^{u}\|^{2} have been applied. Constants are defined by

a_{x}^{\prime}\triangleq\frac{\bar{p}}{\underaccent{\bar}{p}},\quad b_{x}^{\prime}\triangleq\ln\Big{(}1+\frac{\underaccent{\bar}{q}^{\prime}}{\bar{a}^{\prime 2}\bar{p}}\Big{)},\quad c_{x}^{\prime}\triangleq\frac{c\bar{p}}{1-\delta}.

Since 𝐱~k\tilde{{\bm{x}}}_{k} is a linear transformation of 𝐱~ku\tilde{{\bm{x}}}_{k}^{u}, the same stability holds for 𝐱~k\tilde{{\bm{x}}}_{k}. Likewise, the same stability holds for 𝐝~k\tilde{{\bm{d}}}_{k} in (21) because it is a linear transformation of 𝐱~k\tilde{{\bm{x}}}_{k}. We omit its details. \blacksquare
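The contraction argument in the proof, iterating \mathbb{E}[V_{k}]\leq\delta\mathbb{E}[V_{k-1}]+c and applying the geometric-series bound \delta^{k}\mathbb{E}[V_{0}]+c/(1-\delta), can be sanity-checked numerically. A minimal sketch with illustrative values of \delta and c (not derived from the system constants):

```python
def lyapunov_bound(v0, delta, c, steps):
    """Iterate the contraction v_k <= delta * v_{k-1} + c and return the
    trajectory together with the closed-form bound delta^k v0 + c/(1-delta)."""
    assert 0 < delta < 1 and c >= 0
    traj, bounds = [v0], [v0 + c / (1 - delta)]
    v = v0
    for k in range(1, steps + 1):
        v = delta * v + c  # worst-case recursion from Claims 14 and 15
        traj.append(v)
        bounds.append(delta**k * v0 + c / (1 - delta))
    return traj, bounds

traj, bounds = lyapunov_bound(v0=100.0, delta=0.8, c=0.5, steps=50)
# every iterate stays below the closed-form bound, and the tail settles
# near the ultimate bound c / (1 - delta) = 2.5
assert all(v <= b + 1e-12 for v, b in zip(traj, bounds))
assert abs(traj[-1] - 2.5) < 0.01
```

The same geometric sum is what produces the offset c_{x} in the practical exponential stability bound: the error decays at rate \delta but never below the noise floor c/(1-\delta).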

5 Illustrative Example

In this example, we test Algorithm 1 on a vehicle model with input and state constraints and compare its estimation accuracy and detection performance with those of an unconstrained algorithm.

Table 1: Performance comparison.
\sum_{k}\|\tilde{{\bm{x}}}_{k}\| \quad \sum_{k}\|\tilde{{\bm{d}}}_{k}\| \quad \sum_{k}\operatorname{tr}({\bm{P}}^{x}_{k}) \quad \sum_{k}\operatorname{tr}({\bm{P}}^{d}_{k})
CARE 88.928 672.914 0.455 27.351
ISE 123.623 1041.837 0.613 40.577

The summations range from k=100 to k=1000 to exclude the large initialization transients (on the order of 10^{4}), as shown in Fig. 4.

5.1 Experimental Setup

Figure 2: Kinematic Bicycle Model.

We consider a kinematic bicycle model (Fig. 2) in rajamani2011vehicle . The nonlinear continuous-time model is given as

x˙\displaystyle\dot{x} =vcos(ψ+β)\displaystyle=v\cos(\psi+\beta)
y˙\displaystyle\dot{y} =vsin(ψ+β)\displaystyle=v\sin(\psi+\beta)
ψ˙\displaystyle\dot{\psi} =vlrsin(β)\displaystyle=\frac{v}{l_{r}}\sin(\beta)
v˙\displaystyle\dot{v} =a\displaystyle=a
β\displaystyle\beta =arctan(lrlf+lrtan(δ)),\displaystyle=\arctan\Big{(}\frac{l_{r}}{l_{f}+l_{r}}\tan(\delta)\Big{)},

where xx and yy are the coordinates of the center of mass, vv is the velocity of the center of mass, β\beta is the angle of the velocity vv with respect to the longitudinal axis of the vehicle, aa is the acceleration, ψ\psi is the heading angle of the vehicle, δ\delta is the steering angle of the front wheel, and lfl_{f} and lrl_{r} represent the distance from the center of mass of the vehicle to the front and rear axles, respectively.

Since the proposed algorithm is designed for linear discrete-time systems, we linearize and discretize the model as in law2018robust with sampling time T_{s}=0.01\,s. We rewrite the system in the form of (3), where {\bm{x}}_{k}=[x_{k},y_{k},\psi_{k},v_{k}]^{\top} is the state vector, {\bm{u}}_{k}=[\beta_{k}^{u},a_{k}^{u}]^{\top}=\Big{[}\arctan\Big{(}\frac{l_{r}}{l_{f}+l_{r}}\tan(\delta_{k}^{u})\Big{)},a_{k}^{u}\Big{]}^{\top} is the input vector, and {\bm{d}}_{k}=[\beta_{k}^{d},a_{k}^{d}]^{\top}=\Big{[}\arctan\Big{(}\frac{l_{r}}{l_{f}+l_{r}}\tan(\delta_{k}^{d})\Big{)},a_{k}^{d}\Big{]}^{\top} is the attack input vector. We consider the scenario in which the attack signal is injected through the input channel, i.e., {\bm{G}}_{k}={\bm{B}}_{k}. The system matrices are given as follows:

𝑨k=[100Ts01vkTs000100001],𝑩k=𝑮k=[00vkTs0vkTslr00Ts],\displaystyle{\bm{A}}_{k}=\begin{bmatrix}1&0&0&T_{s}\\ 0&1&v_{k}T_{s}&0\\ 0&0&1&0\\ 0&0&0&1\end{bmatrix},\ {\bm{B}}_{k}={\bm{G}}_{k}=\begin{bmatrix}0&0\\ v_{k}T_{s}&0\\ \frac{v_{k}T_{s}}{l_{r}}&0\\ 0&T_{s}\end{bmatrix},

and 𝑪k=𝑰{\bm{C}}_{k}={\bm{I}}. The noise covariances 𝑸k{\bm{Q}}_{k} and 𝑹k{\bm{R}}_{k} are considered as diagonal matrices with diag(𝑸k)=[0.1,0.1,0.001,0.0001]\operatorname*{\textit{diag}}({\bm{Q}}_{k})=[0.1,0.1,0.001,0.0001] and diag(𝑹k)=[0.01,0.01,0.001,0.00001]\operatorname*{\textit{diag}}({\bm{R}}_{k})=[0.01,0.01,0.001,0.00001].
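The discrete-time model above is simple enough to implement directly. The sketch below (pure Python; the rear-axle distance l_r = 1.5 m is an assumed illustrative parameter, not taken from the paper) builds {\bm{A}}_{k} and {\bm{B}}_{k} at the current speed and propagates one step of the noise-free dynamics with {\bm{G}}_{k}={\bm{B}}_{k}:

```python
def system_matrices(v, Ts=0.01, lr=1.5):
    """Linearized discrete-time kinematic bicycle matrices (lr is illustrative)."""
    A = [[1, 0, 0, Ts],
         [0, 1, v * Ts, 0],
         [0, 0, 1, 0],
         [0, 0, 0, 1]]
    B = [[0, 0],
         [v * Ts, 0],
         [v * Ts / lr, 0],
         [0, Ts]]
    return A, B

def step(x, u, d=(0.0, 0.0)):
    """One noise-free step x_{k+1} = A_k x_k + B_k u_k + G_k d_k, with G_k = B_k."""
    A, B = system_matrices(v=x[3])
    return [sum(A[i][j] * x[j] for j in range(4))
            + sum(B[i][j] * (u[j] + d[j]) for j in range(2))
            for i in range(4)]

# straight driving at 10 m/s with a = 1 m/s^2: x advances by Ts*v = 0.1 m
# and v increases by Ts*a = 0.01 m/s, so x1 is approximately [0.1, 0, 0, 10.01]
x1 = step(x=[0.0, 0.0, 0.0, 10.0], u=[0.0, 1.0])
```

Note that {\bm{A}}_{k} and {\bm{B}}_{k} depend on the current speed v_{k}, which is why the matrices are time-varying in the example.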

The vehicle is assumed to have state constraints on the location 0xk200\leq x_{k}\leq 20, 0yk50\leq y_{k}\leq 5 and the velocity 0vk220\leq v_{k}\leq 22, and input constraints on the steering angle |δ|1.0472|\delta|\leq 1.0472 and the acceleration |a|3.5|a|\leq 3.5.

The unknown attack signals are

\delta_{k}^{d} =\begin{cases}0,&0\leq k<100\\ 1.1\sin(0.05k),&k\geq 100\end{cases}
a_{k}^{d} =\begin{cases}0,&0\leq k<100\\ 3.5,&100n\leq k<100(n+1)\\ -3.5,&100(n+1)\leq k<100(n+2),\end{cases}

where n=1,3,\cdots,9, so that the acceleration attack alternates sign every 100 steps.
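The attack profile can be generated programmatically; the sketch below assumes the acceleration attack alternates sign every 100 steps starting at k = 100, which is the reading consistent with the interval pattern above:

```python
import math

def attack_signal(k):
    """Attack inputs used in the example: no attack for k < 100, then a
    sinusoidal steering attack and a square-wave acceleration attack."""
    if k < 100:
        return 0.0, 0.0
    delta_d = 1.1 * math.sin(0.05 * k)
    # acceleration flips sign every 100 steps: +3.5 on [100,200), -3.5 on [200,300), ...
    a_d = 3.5 if (k // 100) % 2 == 1 else -3.5
    return delta_d, a_d

assert attack_signal(50) == (0.0, 0.0)   # pre-attack interval
assert attack_signal(150)[1] == 3.5      # first positive pulse
assert attack_signal(250)[1] == -3.5     # first negative pulse
```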

The constraints on the vehicle can be formulated by inequality constraints as in (4):

[10100101]𝒜k1[δk1dak1d]\displaystyle\underbrace{\begin{bmatrix}1&0\\ -1&0\\ 0&1\\ 0&-1\end{bmatrix}}_{\mathcal{A}_{k-1}}\begin{bmatrix}\delta_{k-1}^{d}\\ a_{k-1}^{d}\end{bmatrix} [1.0472δk1u1.0472+δk1u3.5ak1u3.5+ak1u]𝒃k1\displaystyle\leq\underbrace{\begin{bmatrix}1.0472-\delta_{k-1}^{u}\\ 1.0472+\delta_{k-1}^{u}\\ 3.5-a_{k-1}^{u}\\ 3.5+a_{k-1}^{u}\end{bmatrix}}_{{\bm{b}}_{k-1}}
[100010000100010000010001]k[xkykψkvk]\displaystyle\underbrace{\begin{bmatrix}1&0&0&0\\ -1&0&0&0\\ 0&1&0&0\\ 0&-1&0&0\\ 0&0&0&1\\ 0&0&0&-1\end{bmatrix}}_{\mathcal{B}_{k}}\begin{bmatrix}x_{k}\\ y_{k}\\ \psi_{k}\\ v_{k}\end{bmatrix} [20050220]𝒄k.\displaystyle\leq\underbrace{\begin{bmatrix}20\\ 0\\ 5\\ 0\\ 22\\ 0\end{bmatrix}}_{{\bm{c}}_{k}}.
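These inequality constraints reduce to componentwise matrix-vector checks. A minimal sketch (pure Python) encoding the state constraint \mathcal{B}_{k}{\bm{x}}_{k}\leq{\bm{c}}_{k} and testing sample states:

```python
def satisfies(A, x, b):
    """Check the componentwise inequality constraint A x <= b."""
    return all(sum(row[j] * x[j] for j in range(len(x))) <= bi
               for row, bi in zip(A, b))

# state constraint B_k x_k <= c_k: 0 <= x <= 20, 0 <= y <= 5, 0 <= v <= 22
B = [[1, 0, 0, 0], [-1, 0, 0, 0],
     [0, 1, 0, 0], [0, -1, 0, 0],
     [0, 0, 0, 1], [0, 0, 0, -1]]
c = [20, 0, 5, 0, 22, 0]

assert satisfies(B, [10.0, 2.0, 0.3, 15.0], c)       # inside the feasible box
assert not satisfies(B, [10.0, 2.0, 0.3, 25.0], c)   # velocity limit violated
```

In CARE these constraints define the set onto which the unconstrained estimates are projected; the check above is only the feasibility test, not the projection itself.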

To reduce the effect of instantaneous noises, the cumulative sum algorithm (CUSUM) is adopted lai1995sequential . The χ2\chi^{2} test is utilized in a cumulative form. The χ2\chi^{2} CUSUM detector is characterized by the detector state Sk+S_{k}\in\mathbb{R}_{+}:

Sk=ϕSk1+𝒅^k𝑷k1𝒅^k,S0=0,\displaystyle S_{k}=\phi S_{k-1}+\hat{\bm{d}}_{k}^{\top}\bm{P}_{k}^{-1}\hat{\bm{d}}_{k},\quad S_{0}=0, (55)

where 0<\phi<1 is a pre-determined forgetting rate. At each time k, the CUSUM detector (55) updates the detector state S_{k} and tests for the attack. In particular, we conclude that an attack is present if

Sk>i=0ϕiχdf2(α)=χdf2(α)1ϕ.\displaystyle S_{k}>\sum_{i=0}^{\infty}\phi^{i}\chi^{2}_{df}(\alpha)=\frac{\chi^{2}_{df}(\alpha)}{1-\phi}. (56)

All values are in standard SI units: m (meter) for l_{f}, l_{r}, x_{k}, and y_{k}; rad for \delta^{u}_{k}, \delta^{d}_{k}, \beta^{u}_{k}, \beta^{d}_{k}, and \psi_{k}; m/s for v_{k}; m/s^{2} for a^{u}_{k} and a^{d}_{k}.
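The CUSUM recursion (55) and the threshold test (56) take only a few lines. The sketch below hard-codes the upper 1% critical value for df = 2 (the attack vector has two components), \chi^{2}_{2}(0.01)=-2\ln 0.01\approx 9.21, to avoid a statistics-library dependency, and feeds in synthetic per-step statistics in place of \hat{\bm{d}}_{k}^{\top}{\bm{P}}_{k}^{-1}\hat{\bm{d}}_{k}:

```python
def cusum(stats, phi=0.15, chi2_crit=9.2103):
    """Run the chi-square CUSUM detector (55)-(56) over a sequence of
    per-step statistics; return a list of alarm flags."""
    threshold = chi2_crit / (1 - phi)  # geometric-series sum in (56)
    S, alarms = 0.0, []
    for z in stats:
        S = phi * S + z                # forgetting-rate update (55)
        alarms.append(S > threshold)
    return alarms

# nominal statistics near E[chi^2_2] = 2, then a sustained attack raises them
alarms = cusum([2.0] * 50 + [30.0] * 50)
assert not any(alarms[:50])            # no false alarms before the attack
assert all(alarms[60:])                # sustained detection after the onset
```

The forgetting rate \phi trades off responsiveness against robustness to instantaneous noise: a larger \phi accumulates more history, which suppresses isolated outliers but delays detection.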

Figure 3: Estimation errors of constrained states and traces of the state error covariance.

5.2 Results

We compare the proposed algorithm (CARE) with the unified linear input and state estimator (ISE) introduced in yong2016unified . Figure 3 shows the estimation errors of the constrained states (x_{k} and y_{k}) and the traces of the state error covariances, and Fig. 4 shows the unknown attack signals, their estimates, and the traces of the attack estimation error covariances. As expected, CARE produces smaller state estimation errors and lower covariances. After the attack begins at k=100, the estimates obtained by CARE are closer to the true values and have lower error covariances (cf. Table 1).

Figure 4: Attack signal estimation and traces of error covariance of the attack signals.

The estimates are used to compute the detector state S_{k} in (55). The statistical significance of the attack is tested using the CUSUM detector. The threshold is calculated by \chi^{2}_{df}(\alpha)/(1-\phi) in (56) with the significance level \alpha=0.01 and the forgetting rate \phi=0.15. The detector states and the threshold are plotted on a logarithmic scale (Fig. 5). When the attack is present, CARE detects it by producing detector state values well above the threshold, while the detector state values from ISE oscillate around the threshold, resulting in a high false negative rate of 66.44\%.

Figure 5: Attack detection.

6 Conclusion

In this paper, we presented a constrained attack-resilient estimation algorithm (CARE) for linear stochastic cyber-physical systems. The proposed algorithm produces minimum-variance unbiased estimates and utilizes physical constraints and operational limitations to improve estimation accuracy and detection performance via projection. In particular, CARE first provides minimum-variance unbiased estimates, and these estimates are then projected onto the constrained space. We formally proved that the estimation errors and their covariances from CARE are less than those from unconstrained algorithms and showed that CARE achieves a reduced false negative rate in attack detection. Moreover, we proved that the estimation errors of the proposed algorithm are practically exponentially stable in mean square. A simulation of attacks on a vehicle demonstrates the effectiveness of the proposed algorithm and reveals improved attack resilience compared to an existing unconstrained algorithm.

Appendix A Gauss-Markov Theorem

Theorem 16 (Gauss-Markov Theorem sayed2003fundamentals )

Given the linear model {\bm{y}}={\bm{H}}{\bm{x}}+{\bm{v}}, where {\bm{v}} is a zero-mean random variable with positive-definite covariance matrix {\bm{R}}_{v} and {\bm{H}} is a full-rank m\times n matrix with m\geq n, the minimum-variance unbiased linear estimator of {\bm{x}} given {\bm{y}} is \hat{{\bm{x}}}=({\bm{H}}^{*}{\bm{R}}_{v}^{-1}{\bm{H}})^{-1}{\bm{H}}^{*}{\bm{R}}_{v}^{-1}{\bm{y}}.
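For a scalar parameter with diagonal {\bm{R}}_{v}, the Gauss-Markov estimator reduces to weighted least squares. The sketch below (with illustrative data) computes \hat{x}=({\bm{H}}^{*}{\bm{R}}_{v}^{-1}{\bm{H}})^{-1}{\bm{H}}^{*}{\bm{R}}_{v}^{-1}{\bm{y}} for n=1:

```python
def gauss_markov_scalar(h, y, r):
    """Minimum-variance unbiased estimate of a scalar x from y_i = h_i x + v_i,
    where v_i has variance r_i (diagonal R_v): inverse-variance weighting."""
    num = sum(hi * yi / ri for hi, yi, ri in zip(h, y, r))
    den = sum(hi * hi / ri for hi, ri in zip(h, r))
    return num / den

# noiseless check: y = 3 h recovers x = 3 exactly, for any positive weights
h = [1.0, 2.0, 0.5]
x_hat = gauss_markov_scalar(h, [3.0 * hi for hi in h], r=[0.1, 0.4, 0.2])
assert abs(x_hat - 3.0) < 1e-12
```

Measurements with smaller noise variance r_i receive proportionally larger weight, which is exactly what makes the estimator minimum-variance among unbiased linear estimators.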

Acknowledgements

This work has been supported by the National Science Foundation (ECCS-1739732 and CMMI-1663460).

References

  • (1) R. Rajkumar, I. Lee, L. Sha, J. Stankovic, Cyber-physical systems: the next computing revolution, in: IEEE Design Automation Conference, 2010, pp. 731–736.
  • (2) A. A. Cárdenas, S. Amin, S. Sastry, Research challenges for the security of control systems., HotSec 5 (2008) 15.
  • (3) R. M. Lee, M. J. Assante, T. Conway, German steel mill cyber attack, Industrial Control Systems 30 (2014) 22.
  • (4) J. Slay, M. Miller, Lessons learned from the Maroochy water breach, in: Springer International Conference on Critical Infrastructure Protection, 2007, pp. 73–82.
  • (5) S. Peterson, P. Faramarzi, Iran hijacked US drone, says Iranian engineer, Christian Science Monitor 15 (2011).
  • (6) K. Koscher, A. Czeskis, F. Roesner, S. Patel, T. Kohno, S. Checkoway, D. McCoy, B. Kantor, D. Anderson, H. Shacham, et al., Experimental security analysis of a modern automobile, in: IEEE Symposium on Security and Privacy, 2010, pp. 447–462.
  • (7) S. Checkoway, D. McCoy, B. Kantor, D. Anderson, H. Shacham, et al., Comprehensive experimental analyses of automotive attack surfaces, in: USENIX Security Symposium, Vol. 4, 2011, pp. 447–462.
  • (8) J. Raiyn, A survey of cyber attack detection strategies, International Journal of Security and Its Applications 8 (1) (2014) 247–256.
  • (9) A. A. Cardenas, S. Amin, S. Sastry, Secure control: Towards survivable cyber-physical systems, in: 28th International Conference on Distributed Computing Systems Workshops, 2008, pp. 495–500.
  • (10) F. Pasqualetti, F. Dörfler, F. Bullo, Attack detection and identification in cyber-physical systems, IEEE Transactions on Automatic Control 58 (11) (2013) 2715–2729.
  • (11) A. Teixeira, D. Pérez, H. Sandberg, K. H. Johansson, Attack models and scenarios for networked control systems, in: Proceedings of the 1st international conference on High Confidence Networked Systems, 2012, pp. 55–64.
  • (12) A. Teixeira, I. Shames, H. Sandberg, K. H. Johansson, A secure control framework for resource-limited adversaries, Automatica 51 (2015) 135–148.
  • (13) M. Pajic, J. Weimer, N. Bezzo, P. Tabuada, O. Sokolsky, I. Lee, G. J. Pappas, Robustness of attack-resilient state estimators, in: ACM/IEEE International Conference on Cyber-Physical Systems, 2014, pp. 163–174.
  • (14) H. Fawzi, P. Tabuada, S. Diggavi, Secure estimation and control for cyber-physical systems under adversarial attacks, IEEE Transactions on Automatic Control 59 (6) (2014) 1454–1467.
  • (15) M. Pajic, I. Lee, G. J. Pappas, Attack-resilient state estimation for noisy dynamical systems, IEEE Transactions on Control of Network Systems 4 (1) (2017) 82–92.
  • (16) M. Naghnaeian, N. H. Hirzallah, P. G. Voulgaris, Security via multirate control in cyber–physical systems, Systems & Control Letters 124 (2019) 12–18.
  • (17) N. H. Hirzallah, P. G. Voulgaris, N. Hovakimyan, On the estimation of signal attacks: A dual rate SD control framework, in: IEEE European Control Conference (ECC), 2019, pp. 4380–4385.
  • (18) Y. Liu, P. Ning, M. K. Reiter, False data injection attacks against state estimation in electric power grids, ACM Transactions on Information and System Security 14 (1) (2011) 21–32.
  • (19) Y. Mo, B. Sinopoli, Secure control against replay attacks, in: 47th Annual Allerton Conference on Communication, Control, and Computing, 2009, pp. 911–918.
  • (20) Y. Mo, R. Chabukswar, B. Sinopoli, Detecting integrity attacks on SCADA systems, IEEE Transactions on Control Systems Technology 22 (4) (2014) 1396–1407.
  • (21) S. Z. Yong, M. Zhu, E. Frazzoli, Resilient state estimation against switching attacks on stochastic cyber-physical systems, in: IEEE Conference on Decision and Control (CDC), 2015, pp. 5162–5169.
  • (22) N. Forti, G. Battistelli, L. Chisci, B. Sinopoli, A Bayesian approach to joint attack detection and resilient state estimation, in: IEEE Conference on Decision and Control (CDC), 2016, pp. 1192–1198.
  • (23) H. Kim, P. Guo, M. Zhu, P. Liu, Simultaneous input and state estimation for stochastic nonlinear systems with additive unknown inputs, Automatica 111 (2020) 108588.
  • (24) A. Teixeira, S. Amin, H. Sandberg, K. H. Johansson, S. S. Sastry, Cyber security analysis of state estimators in electric power systems, in: IEEE Conference on Decision and Control (CDC), 2010, pp. 5991–5998.
  • (25) D. Simon, T. L. Chia, Kalman filtering with state equality constraints, IEEE Transactions on Aerospace and Electronic Systems 38 (1) (2002) 128–136.
  • (26) S. Ko, R. R. Bitmead, State estimation for linear systems with state equality constraints, Automatica 43 (8) (2007) 1363–1368.
  • (27) D. Simon, Kalman filtering with state constraints: A survey of linear and nonlinear algorithms, IET Control Theory & Applications 4 (8) (2010) 1303–1318.
  • (28) H. Kong, M. Shan, S. Sukkarieh, T. Chen, W. X. Zheng, Kalman filtering under unknown inputs and norm constraints, Automatica 133 (2021) 109871.
  • (29) A. I. Mourikis, S. I. Roumeliotis, A multi-state constraint Kalman filter for vision-aided inertial navigation, in: IEEE International Conference on Robotics and Automation (ICRA), 2007, pp. 3565–3572.
  • (30) L. Wang, Y. Chiang, F. Chang, Filtering method for nonlinear systems with constraints, IEEE Proceedings-Control Theory and Applications 149 (6) (2002) 525–531.
  • (31) S. Z. Yong, M. Zhu, E. Frazzoli, Simultaneous input and state estimation of linear discrete-time stochastic systems with input aggregate information, in: IEEE Conference on Decision and Control (CDC), 2015, pp. 461–467.
  • (32) S. H. Kafash, J. Giraldo, C. Murguia, A. A. Cardenas, J. Ruths, Constraining attacker capabilities through actuator saturation, in: IEEE American Control Conference (ACC), 2018, pp. 986–991.
  • (33) S. M. Djouadi, A. M. Melin, E. M. Ferragut, J. A. Laska, J. Dong, A. Drira, Finite energy and bounded actuator attacks on cyber-physical systems, in: IEEE European Control Conference (ECC), 2015, pp. 3659–3664.
  • (34) S. Z. Yong, M. Zhu, E. Frazzoli, A unified filter for simultaneous input and state estimation of linear discrete-time stochastic systems, Automatica 63 (2016) 321–329.
  • (35) H. Kim, P. Guo, M. Zhu, P. Liu, Attack-resilient estimation of switched nonlinear cyber-physical systems, in: IEEE American Control Conference (ACC), 2017, pp. 4328–4333.
  • (36) P. Guo, H. Kim, N. Virani, J. Xu, M. Zhu, P. Liu, Roboads: Anomaly detection against sensor and actuator misbehaviors in mobile robots, in: 48th Annual IEEE/IFIP international conference on dependable systems and networks (DSN), 2018, pp. 574–585.
  • (37) B. O. Teixeira, J. Chandrasekar, L. A. Tôrres, L. A. Aguirre, D. S. Bernstein, State estimation for linear and non-linear equality-constrained systems, International Journal of Control 82 (5) (2009) 918–936.
  • (38) B. D. O. Anderson, J. B. Moore, Detectability and stabilizability of time-varying discrete-time linear-systems, SIAM Journal on Control and Optimization 19 (1) (1981) 20–32.
  • (39) S. Kluge, K. Reif, M. Brokate, Stochastic stability of the extended Kalman filter with intermittent observations, IEEE Transactions on Automatic Control 55 (2) (2010) 514–518.
  • (40) A. Papoulis, S. U. Pillai, Probability, random variables, and stochastic processes, Tata McGraw-Hill Education, 2002.
  • (41) D. J. Tylavsky, G. R. Sohie, Generalization of the matrix inversion lemma, Proceedings of the IEEE 74 (7) (1986) 1050–1052.
  • (42) R. Rajamani, Vehicle dynamics and control, Springer Science & Business Media, 2011.
  • (43) C. K. Law, D. Dalal, S. Shearrow, Robust model predictive control for autonomous vehicles/self driving cars, arXiv preprint arXiv:1805.08551 (2018).
  • (44) T. L. Lai, Sequential changepoint detection in quality control and dynamical systems, Journal of the Royal Statistical Society. Series B (Methodological) (1995) 613–658.
  • (45) A. H. Sayed, Fundamentals of adaptive filtering, John Wiley & Sons, 2003.