
Distributed Estimation for Interconnected Systems with Arbitrary Coupling Structures

Yuchen Zhang, Bo Chen, Li Yu, and Daniel W.C. Ho Y. Zhang, B. Chen and L. Yu are with the Department of Automation, Zhejiang University of Technology, Hangzhou 310023, China.(email: YuchenZhang95@163.com, bchen@aliyun.com, lyu@zjut.edu.cn). D. W. C. Ho is with the Department of Mathematics, City University of Hong Kong, Hong Kong, 999077. (email: madaniel@cityu.edu.hk).
Abstract

This paper is concerned with the problem of distributed estimation for time-varying interconnected dynamic systems with arbitrary coupling structures. To guarantee the robustness of the designed estimators, novel distributed stability conditions are proposed that use only local information and the information from neighbors. Then, simplified stability conditions, which do not require timely exchange of neighbors' estimator gain information, are further developed for systems with delayed communication. By merging these subsystem-level stability conditions and the optimization-based estimator gain design, distributed, stable and optimal estimators are proposed. Quite notably, the optimization solutions can be easily obtained by standard software packages, and it is also shown that the designed estimators are scalable in the sense of adding or subtracting subsystems. Finally, an illustrative example is employed to show the effectiveness of the proposed methods.

Index Terms:
Time-varying interconnected systems; Distributed stability conditions; Distributed estimation; Optimal estimators.

I Introduction

With the development of communication and sensor technology, the scale of systems is steadily increasing as they become more and more connected. As early as the 1960s, the concept of interconnected systems was proposed [1], and interconnected systems have received increasing attention in recent decades due to their wide applications in power systems [2], multi-robot systems [3], complex networks [4, 5], and biological networks [6]. Generally, interconnected systems are high-dimensional complex systems composed of numerous dispersed subsystems, whose states can be coupled with those of their neighboring subsystems. The increased complexity of interconnected systems, in terms of both system topologies and dynamics, has prevented traditional estimation approaches from achieving satisfactory performance [7]. This can be mainly attributed to the poor scalability of the centralized structure in traditional approaches. Firstly, the spatial distribution of subsystems leads to a high communication burden and field deployment cost for centralized methods. Meanwhile, centralized methods also suffer from a heavy computational burden as the dimensions of interconnected systems increase. In addition, the intricate coupling structures of interconnected systems are not exploited in centralized methods, making it necessary to re-ensure stability when adding or subtracting subsystems. Therefore, it is imperative to consider advanced estimation approaches for interconnected systems to guarantee the accuracy and the stability of estimators.

Over the past several decades, different decentralized/distributed estimation approaches have been developed in the fields of multi-agent systems [8, 9], multi-sensor systems [10, 11, 12], and interconnected systems [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23] to decrease communication overhead and computational complexity. In these approaches, local estimators are designed based on their own information and the information from their neighboring subsystems. However, the arbitrary couplings among subsystems pose significant challenges for distributed analysis of interconnected systems, especially in terms of stability. For this reason, most existing distributed estimation approaches for interconnected systems are based on special coupling structures or communication structures. With structural assumptions, the designed distributed estimators can provide better estimation performance, and their stability can be ensured by local analysis. For example, the optimal locally unbiased filter was proposed in [13] with a specific structure for information exchange, while centralized and distributed moving horizon estimators were developed in [14] for sparse banded interconnected systems. The sparsity structure was also exploited to decompose interconnected systems into interconnected overlapping subsystems with coupled states that can be locally observed; the distributed Kalman filter [15] and the consensus-based decentralized estimator [16] were then designed. Meanwhile, a sub-optimal distributed Kalman filtering problem was addressed in [17] for a class of sequentially interconnected systems. Note that it is difficult to transform most interconnected systems into these structures. A promising idea for addressing the distributed estimation problem without any structural constraints is to combine stability conditions and distributed estimator design methods.
For instance, by adding constraints on stability conditions for general interconnected systems, distributed estimators with a decoupling strategy were designed in [18, 19, 20], and a moving horizon estimator was proposed in [21] under the assumption of uncorrelated local estimation errors. Besides, distributed estimators with a plug-and-play fashion were developed in [22, 23] by exploiting the properties of the infinity norm for small-gain-based stability conditions. However, how to design stable and distributed estimation methods based on local and neighboring information for general interconnected systems is still an open question.

Since the 1960s, the stability problem for general interconnected systems has received a great deal of attention [24, 25, 26]. To the best of our knowledge, besides the centralized analysis of stability for overall systems, the stability analysis methods for general interconnected systems can be divided into three categories: 1) methods based on scalar or vector Lyapunov functions [1, 27, 28]; 2) methods based on the small gain theorem [29, 30]; 3) methods based on dissipativity theory [31, 32]. For the first category, the stability conditions involving $M$-matrices are derived by investigating the internal stability of both the subsystems and the overall interconnected system. Unfortunately, tests for $M$-matrices succeed only when the couplings among subsystems are weak. In contrast, the stability conditions of the second category are obtained by analyzing the input-output stability of subsystems, where the couplings are treated as input terms from neighboring subsystems. Small-gain-based methods also require weak coupling conditions, but the results are less conservative and lead to relatively simple design guidelines [24]. Another kind of input-output stability result, in the third category, is based on the concept of dissipativity; these conditions are not necessarily weak coupling conditions because of their centralized analysis. However, the above stability conditions are not fully distributed, in the sense that knowledge of the dynamics and couplings of neighboring subsystems alone is not sufficient for the analysis. How to develop these stability conditions into scalable distributed conditions remains challenging. One way to address this problem is to derive distributed stability conditions by totally local analysis [18, 19, 20, 21, 22, 23], where the stability of subsystems is locally and sequentially analyzed.
Nevertheless, this approach is much more conservative than the centralized results, i.e., weak coupling conditions or structural assumptions are still required. Another promising idea is subsystem-level analysis of centralized stability conditions by decomposing them into distributed ones. For example, the work in [33] focused on decomposing a centralized dissipativity condition into distributed dissipativity conditions for individual subsystems. Note that these conditions require a heavy communication burden to exchange message matrices among subsystems and cannot be generalized to time-varying interconnected systems.

It should be pointed out that the distributed estimator designed in [19] only provides stability conditions for specific subsystem coupling structures, which are further interpreted as directed acyclic graphs of couplings in [20]. As for general subsystem connection structures, the design of distributed estimators with subsystem-level stability conditions is still challenging and has not yet been fully solved. Motivated by the above analysis, we shall investigate the distributed estimation problem for general time-varying interconnected systems. The main contributions of this paper can be summarized as follows:

  • Distributed stability analysis. The distributed stability conditions, which only require subsystem-level knowledge of dynamics and couplings, are proposed for local estimators. Then, the effect of couplings on distributed conditions is discussed.

  • Distributed stability under delayed communication. The simplified distributed stability conditions are proposed for time-varying interconnected systems with one-step communication delay. It is shown that the simplified conditions do not need real-time exchange of subsystems’ gain information and can ease communication burden.

  • Distributed estimator design. By combining the distributed stability conditions and the optimization-based estimator gain design, recursive, stable and optimal estimators for time-varying noisy interconnected systems are proposed, where an upper bound of the local estimation error covariance is minimized. The proposed estimators are fully distributed, that is, based only on local and neighboring information.

Notations: Define $\mathbb{N}_{l}:=\{1,2,\ldots,l\}$, where $l$ is a nonzero natural number, and denote the set of $n$-dimensional real vectors by $\mathbb{R}^{n}$. Given sets $A$ and $B$, $A\setminus B$ represents the set of all elements of $A$ that are not in $B$, and $A\cap B$ is the intersection of $A$ and $B$. The superscript '$\mathrm{T}$' represents the transpose, while the symmetric terms in a symmetric matrix are denoted by '$*$'. The inverse of the matrix $A$ is denoted by $A^{-1}$, and $\mathrm{Tr}(A)$ represents the trace of the matrix $A$. The identity matrix with appropriate dimensions is represented as '$I$', and the matrix with all zero elements is denoted by '$\mathbf{0}$'. The notation $X>(<)\,0$ denotes a positive (negative) definite matrix, and $X\geq(\leq)\,0$ a positive (negative) semi-definite matrix. The notation $\mathrm{col}\{a_{1},\ldots,a_{n}\}$ means a column vector whose elements are $a_{1},\ldots,a_{n}$, while $\mathrm{diag}\{\cdot\}$ stands for a block diagonal matrix. The mathematical expectation is denoted by $\mathrm{E}\{\cdot\}$, and $\|A\|_{2}$ is the 2-norm of the matrix $A$. Given a block matrix $A=[A_{i,j}]_{i\in\mathbb{N}_{n},j\in\mathbb{N}_{m}}$, $A_{i,j}$ represents the $(i,j)$th block. The maximum eigenvalue of the matrix $A$ is represented as $\lambda_{\mathrm{max}}(A)$.

II Problem Formulation

II-A Time-varying Interconnected System Model

Consider a time-varying interconnected system $\mathbf{S}$ constructed by $l$ subsystems, where the state and measurement dynamics of the $i$th subsystem $\mathbf{S}_{i},\ i\in\mathbb{N}_{l}$, are described as follows:

$$
\mathbf{S}_{i}:\begin{cases}x_{i}(k+1)=A_{i}(k)x_{i}(k)+\Gamma_{i}(k)w_{i}(k)+\sum_{i_{\kappa}^{\rho}\in\Omega_{i}}A_{i,i_{\kappa}^{\rho}}(k)x_{i_{\kappa}^{\rho}}(k)\\ y_{i}(k)=C_{i}(k)x_{i}(k)+D_{i}(k)v_{i}(k)\end{cases}\quad i\in\mathbb{N}_{l}
\tag{1}
$$

The vectors $x_{i}(k)\in\mathbb{R}^{n_{i}}$ and $y_{i}(k)\in\mathbb{R}^{m_{i}}$ denote the state and the measurement of the subsystem $\mathbf{S}_{i}$, respectively. Moreover, $A_{i}(k)$, $\Gamma_{i}(k)$, $A_{i,i_{\kappa}^{\rho}}(k)$, $C_{i}(k)$ and $D_{i}(k)$ are bounded matrices with appropriate dimensions, while the system noise $w_{i}(k)$ and the measurement noise $v_{i}(k)$ are uncorrelated Gaussian white noises satisfying

$$
\begin{cases}\mathrm{E}\left[w_{i}(k)w_{j}(k_{1})\right]=\delta_{i,j}\delta_{k,k_{1}}Q_{w_{i}}\\ \mathrm{E}\left[v_{i}(k)v_{j}(k_{1})\right]=\delta_{i,j}\delta_{k,k_{1}}Q_{v_{i}}\\ \mathrm{E}\left[w_{i}(k)v_{j}(k_{1})\right]=0\quad(\forall i,j,k,k_{1})\end{cases}
\tag{2}
$$

where $Q_{w_{i}}$ and $Q_{v_{i}}$ are the known covariances of $w_{i}(k)$ and $v_{i}(k)$, respectively, and $\delta_{k,k_{1}}=0$ if $k\neq k_{1}$, $\delta_{k,k_{1}}=1$ otherwise. The set of neighbors of subsystem $\mathbf{S}_{i}$ is denoted by $\Omega_{i}$, and its number of elements is $\theta_{i}\ (\theta_{i}<l)$. Therefore, the set $\Omega_{i}$ can be described as

$$
\Omega_{i}=\{i_{1}^{\rho},\ldots,i_{\kappa}^{\rho},\ldots,i_{\theta_{i}}^{\rho}\}
\tag{3}
$$

The coupling structure of the system is determined by whether the matrix $A_{i,i_{\kappa}^{\rho}}(k)$ is a null matrix. Since there are no constraints on the spatial distribution of subsystems, the coupling structure can be arbitrary. Then, the following subset $\Sigma_{i}$ is defined:

$$
\Sigma_{i}:=\{i_{\kappa}^{\sigma}\mid i_{\kappa}^{\sigma}\in\Omega_{i}\setminus\mathbb{N}_{i}\}
\tag{4}
$$

where the number of elements of $\Sigma_{i}$ is $\xi_{i}\ (\xi_{i}\leq\theta_{i})$.
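To make the coupled recursion in (1) concrete, the following minimal sketch simulates the state and measurement equations for two scalar subsystems; the two-subsystem topology and all numeric values are hypothetical and chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-subsystem instance of model (1): scalar states, mutual coupling.
A = {1: 0.8, 2: 0.7}               # local dynamics A_i(k), held constant here
A_c = {(1, 2): 0.1, (2, 1): 0.1}   # coupling blocks A_{i,j}(k)
Gamma = {1: 1.0, 2: 1.0}
C = {1: 1.0, 2: 1.0}
D = {1: 1.0, 2: 1.0}
Qw = {1: 0.01, 2: 0.01}            # process noise covariances Q_{w_i}
Qv = {1: 0.04, 2: 0.04}            # measurement noise covariances Q_{v_i}
Omega = {1: [2], 2: [1]}           # neighbor sets Omega_i

x = {1: 1.0, 2: -1.0}
for k in range(50):
    # measurement equation of (1)
    y = {i: C[i] * x[i] + D[i] * np.sqrt(Qv[i]) * rng.standard_normal()
         for i in (1, 2)}
    # state recursion of (1), including the coupling sum over Omega_i
    x = {i: A[i] * x[i]
            + sum(A_c[(i, j)] * x[j] for j in Omega[i])
            + Gamma[i] * np.sqrt(Qw[i]) * rng.standard_normal()
         for i in (1, 2)}
```

Note that each subsystem only touches its neighbors' states through the coupling sum, mirroring the subsystem-level structure of (1).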

Remark 1. Compared with the work in [13, 14, 15, 16, 17], the interconnected system model addressed in this paper does not require any structural assumptions (e.g., the sparsity assumption on couplings). In this case, the model in (1) is more general and covers a wide range of practical situations. For example, heavy-duty vehicle systems [34] with aerodynamic interconnections can be modeled as interconnected systems with strongly connected topologies in the form of (1). On the other hand, there is no constraint on the coupling strength for the interconnected system model in this paper, which is different from the work in [22, 23]. In other words, the upper bound of $\|A_{i,j}(k)\|_{2}$ can be arbitrarily large. However, the analysis of distributed stability and the distributed estimation problem for general interconnected systems without any weak coupling assumptions is more challenging.

To collaboratively achieve system tasks, subsystems need to exchange their information via communication networks. Therefore, the distributed communication structure in the following assumption is required.

Assumption 1 (Communication). Each subsystem can communicate with its neighbors.

Remark 2. Notice that the communication structure in Assumption 1 is distributed and has a limited range of information broadcast due to the limited network bandwidth and energy constraints of subsystems. Unlike a centralized communication structure in which one subsystem communicates with all other subsystems, the considered distributed communication structure is more practical. On the other hand, we restrict ourselves to time-varying interconnected systems with constantly varying dynamics and couplings due to their wider applications. Taking blocked power systems as an example, the couplings among different blocks change with the real-time power dispatching [35]. For time-varying interconnected systems, the distributed stability conditions in [33] are no longer suitable, and thus novel distributed stability analysis approaches are required.

II-B Problem of Interest

Figure 1: An example of the structure for interconnected systems and distributed estimators.

The structure of distributed estimators for interconnected systems with local information flows is depicted in Fig. 1. It is assumed that each subsystem knows only its own dynamics, and thus the local measurements and the estimates from neighbors are used for state reconstruction. The estimator $\mathbf{E}_{i}$ for the $i$th subsystem is proposed as

$$
\mathbf{E}_{i}:\begin{cases}\hat{x}^{p}_{i}(k)=A_{i}(k-1)\hat{x}_{i}(k-1)+\sum_{i_{\kappa}^{\rho}\in\Omega_{i}}A_{i,i_{\kappa}^{\rho}}(k-1)\hat{x}_{i_{\kappa}^{\rho}}(k-1)\\ \hat{x}_{i}(k)=\hat{x}^{p}_{i}(k)+K_{i}(k)\left[y_{i}(k)-C_{i}(k)\hat{x}^{p}_{i}(k)\right]\end{cases}
\tag{5}
$$

where $\hat{x}_{i}^{p}(k)$ and $\hat{x}_{i}(k)$ are the one-step prediction and the estimate of the subsystem state $x_{i}(k)$, respectively. Then, the estimation error iteration for the $i$th subsystem is calculated from (1) and (5) as

$$
\begin{cases}\tilde{x}_{i}^{p}(k)=A_{i}(k-1)\tilde{x}_{i}(k-1)+\Gamma_{i}(k-1)w_{i}(k-1)+\sum_{i_{\kappa}^{\rho}\in\Omega_{i}}A_{i,i_{\kappa}^{\rho}}(k-1)\tilde{x}_{i_{\kappa}^{\rho}}(k-1)\\ \tilde{x}_{i}(k)=K_{C_{i}}(k)\tilde{x}_{i}^{p}(k)-K_{i}(k)D_{i}(k)v_{i}(k)\end{cases}
\tag{6}
$$

where $K_{C_{i}}(k):=I-K_{i}(k)C_{i}(k)$, while $\tilde{x}_{i}^{p}(k)$ and $\tilde{x}_{i}(k)$ are the one-step prediction error and the estimation error, respectively. The one-step prediction error covariance $P_{i}^{p}(k):=\mathrm{E}\{\tilde{x}_{i}^{p}(k)[\tilde{x}_{i}^{p}(k)]^{\mathrm{T}}\}$ and the estimation error covariance $P_{i}(k):=\mathrm{E}\{\tilde{x}_{i}(k)\tilde{x}_{i}^{\mathrm{T}}(k)\}$ can be calculated as

$$
\begin{cases}\begin{aligned}P_{i}^{p}(k)=&\,A_{i}(k-1)P_{i}(k-1)A_{i}^{\mathrm{T}}(k-1)+\Gamma_{i}(k-1)Q_{w_{i}}\Gamma_{i}^{\mathrm{T}}(k-1)\\ &+\sum_{i_{\kappa}^{\rho}\in\Omega_{i}}A_{i}(k-1)P_{i,i_{\kappa}^{\rho}}(k-1)A_{i,i_{\kappa}^{\rho}}^{\mathrm{T}}(k-1)\\ &+\sum_{i_{\kappa}^{\rho}\in\Omega_{i}}A_{i,i_{\kappa}^{\rho}}(k-1)P_{i_{\kappa}^{\rho},i}(k-1)A_{i}^{\mathrm{T}}(k-1)\\ &+\sum_{i_{\kappa_{1}}^{\rho}\in\Omega_{i}}\sum_{i_{\kappa_{2}}^{\rho}\in\Omega_{i}}A_{i,i_{\kappa_{1}}^{\rho}}(k-1)P_{i_{\kappa_{1}}^{\rho},i_{\kappa_{2}}^{\rho}}(k-1)A_{i,i_{\kappa_{2}}^{\rho}}^{\mathrm{T}}(k-1)\end{aligned}\\ \begin{aligned}P_{i}(k)=&\,K_{C_{i}}(k)P_{i}^{p}(k)\left[K_{C_{i}}(k)\right]^{\mathrm{T}}+K_{i}(k)D_{i}(k)Q_{v_{i}}D_{i}^{\mathrm{T}}(k)K_{i}^{\mathrm{T}}(k)\end{aligned}\end{cases}
\tag{7}
$$

The major concern of the distributed estimation problem is to design suitable gain matrices $K_{i}(k)\ (i\in\mathbb{N}_{l})$ such that the estimation error is stable and the estimation performance index $J_{i}(k)$ is minimized. Specifically, the following definition is introduced to describe the stability of local estimators.

Definition 1 (Mean-square uniformly bounded). For the interconnected system in (1), the proposed estimator (5) is mean-square uniformly bounded if, for arbitrarily large $\delta_{p_{i}^{0}}$, there is $\delta_{p_{i}}(\delta_{p_{i}^{0}})>0$ (independent of $k_{0}$) such that

$$
\|P_{i}(k_{0})\|_{2}\leq\delta_{p_{i}^{0}}\ \Rightarrow\ \|P_{i}(k)\|_{2}\leq\delta_{p_{i}}
\tag{8}
$$

However, it is usually difficult for subsystems to obtain the cross-covariances $P_{i,j}(k):=\mathrm{E}\{\tilde{x}_{i}(k)\tilde{x}^{\mathrm{T}}_{j}(k)\}$ in a timely manner with only local communication. Therefore, an upper bound of the estimation error covariance $\hat{P}_{i}(k)\geq P_{i}(k)$ is used instead, and the performance index for local estimation is designed as $J_{i}(k)=\mathrm{Tr}\{\hat{P}_{i}(k)\}$. Here, the optimal estimator gain design for subsystems can be formulated as an optimization problem:

$$
\begin{aligned}&\min_{K_{i}(k)}\ \mathrm{Tr}\{\hat{P}_{i}(k)\}\\ &\ \,\mathrm{s.t.}\ \ \hat{P}_{i}(k)\geq P_{i}(k)\ \ \text{and}\ \ K_{i}(k)\in\mathcal{K}_{i}(k)\end{aligned}
\tag{9}
$$

where $\mathcal{K}_{i}(k)$ is a subspace of stable estimator gains for subsystem $\mathbf{S}_{i}$ at the instant $k$.
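If the stability constraint $K_{i}(k)\in\mathcal{K}_{i}(k)$ is dropped, the update step of (7) makes $\mathrm{Tr}\{P_{i}(k)\}$ quadratic in $K_{i}(k)$, whose unconstrained minimizer is the classical Kalman-type gain. The sketch below, with hypothetical numeric values (it is not the constrained design of (9)), computes this gain and checks that perturbing it increases the trace:

```python
import numpy as np

# Measurement-update step of (7): P_i(k) = (I - K C) P^p (I - K C)^T + K D Qv D^T K^T.
def updated_cov(K, Pp, C, D, Qv):
    n = Pp.shape[0]
    KC = np.eye(n) - K @ C
    return KC @ Pp @ KC.T + K @ D @ Qv @ D.T @ K.T

# Unconstrained minimizer of Tr{P_i(k)} over K (classical Kalman gain formula).
def optimal_gain(Pp, C, D, Qv):
    S = C @ Pp @ C.T + D @ Qv @ D.T   # innovation covariance
    return Pp @ C.T @ np.linalg.inv(S)

# Hypothetical local data for one subsystem (2-dimensional state, scalar measurement).
Pp = np.array([[2.0, 0.3], [0.3, 1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[1.0]])
Qv = np.array([[0.5]])

K_star = optimal_gain(Pp, C, D, Qv)
J_star = np.trace(updated_cov(K_star, Pp, C, D, Qv))
J_pert = np.trace(updated_cov(K_star + 0.1, Pp, C, D, Qv))
```

Here `J_star <= J_pert`, i.e., the Kalman-type gain attains the smaller trace; the role of the extra constraint set $\mathcal{K}_{i}(k)$ in (9) is to keep such a gain inside the stability region.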

In what follows, the augmented system dynamics and the augmented estimator iteration will be presented. By defining $x(k):=\mathrm{col}\{x_{1}(k),\ldots,x_{l}(k)\}\in\mathbb{R}^{n}$, we can obtain the overall system dynamics as

$$
\mathbf{S}:\begin{cases}x(k+1)=A(k)x(k)+\Gamma(k)w(k)\\ y(k)=C(k)x(k)+D(k)v(k)\end{cases}
\tag{10}
$$

where $A(k):=[A_{i,j}(k)]_{i,j\in\mathbb{N}_{l}}$ with $A_{i,i}(k)=A_{i}(k)$ and

$$
\begin{cases}y(k):=\mathrm{col}\{y_{1}(k),\ldots,y_{l}(k)\}\\ \Gamma(k):=\mathrm{diag}\{\Gamma_{1}(k),\ldots,\Gamma_{l}(k)\}\\ C(k):=\mathrm{diag}\{C_{1}(k),\ldots,C_{l}(k)\}\\ D(k):=\mathrm{diag}\{D_{1}(k),\ldots,D_{l}(k)\}\\ v(k):=\mathrm{col}\{v_{1}(k),\ldots,v_{l}(k)\}\\ w(k):=\mathrm{col}\{w_{1}(k),\ldots,w_{l}(k)\}\end{cases}
\tag{11}
$$

The upper bounds of the bounded matrices are $\|A(k)\|_{2}\leq\delta_{a}$, $\|\Gamma(k)\|_{2}\leq\delta_{\gamma}$, $\|C(k)\|_{2}\leq\delta_{c}$, $\|D(k)\|_{2}\leq\delta_{d}$, $\|A_{i,j}(k)\|_{2}\leq\alpha_{i,j}$ and $\|A_{i}(k)\|_{2}\leq\alpha_{i}$, respectively. Then, let us denote $\hat{x}(k):=\mathrm{col}\{\hat{x}_{1}(k),\ldots,\hat{x}_{l}(k)\}$ and $\tilde{x}(k):=\mathrm{col}\{\tilde{x}_{1}(k),\ldots,\tilde{x}_{l}(k)\}$. The following augmented estimator and augmented estimation error iteration are obtained:

$$
\begin{cases}\hat{x}(k)=A(k-1)\hat{x}(k-1)+K(k)\left[y(k)-C(k)A(k-1)\hat{x}(k-1)\right]\\ \tilde{x}(k)=K_{C}(k)A(k-1)\tilde{x}(k-1)-K(k)D(k)v(k)+K_{C}(k)\Gamma(k-1)w(k-1)\end{cases}
\tag{12}
$$

where $K(k):=\mathrm{diag}\{K_{1}(k),\ldots,K_{l}(k)\}$ and $K_{C}(k):=I-K(k)C(k)$.

Note that the stability of each local estimator depends on the stability of its neighboring estimators due to the interconnected estimation error $\tilde{x}_{i_{\kappa}^{\rho}}(k-1)$. Hence, it is difficult to determine $\mathcal{K}_{i}(k)$ and design a stable estimator by a totally local analysis. On the other hand, a totally centralized analysis of the augmented estimation error system needs knowledge of the dynamics and couplings of the overall system, which cannot be applied to large-scale interconnected systems.

Consequently, the aim of this paper is to address the following problems:

  • 1)

    Distributed stability conditions analysis: Analyze the distributed stability conditions such that the proposed estimator is mean-square uniformly bounded, where only subsystem-level knowledge of dynamics and couplings is required for each estimator.

  • 2)

    Distributed estimator design: Design distributed, stable and optimal estimators for time-varying interconnected systems with arbitrary coupling structures, where an upper bound of local estimation error covariance is minimized.

Remark 3. To design a fully distributed estimator, both the iteration form and the stability conditions for the estimator need to rely only on local communication, computation and storage. Though the estimator in (5) only uses the information of local measurements and neighboring estimates, the local estimation errors are still interconnected. Therefore, the major difficulty in distributed estimator design for general interconnected systems is to calculate the optimal estimator gain and maintain stability without any global interconnection information of the estimation errors.

III Main Results

In this section, we first present distributed conditions to guarantee the stability of local estimators. Then, a fully distributed estimation approach is proposed by merging the optimal and stable estimator gain designs.

III-A Distributed Stability Conditions

Let us denote the augmented estimation error covariance as $P(k):=\mathrm{E}\{\tilde{x}(k)\tilde{x}^{\mathrm{T}}(k)\}$, which is calculated by

$$
\begin{aligned}P(k)=&\,K_{C}(k)A(k-1)P(k-1)A^{\mathrm{T}}(k-1)K_{C}^{\mathrm{T}}(k)\\ &+K_{C}(k)\Gamma(k-1)Q_{w}\Gamma^{\mathrm{T}}(k-1)K_{C}^{\mathrm{T}}(k)\\ &+K(k)D(k)Q_{v}D^{\mathrm{T}}(k)K^{\mathrm{T}}(k)\end{aligned}
\tag{13}
$$

where the matrices $Q_{w}:=\mathrm{diag}\{Q_{w_{1}},\ldots,Q_{w_{l}}\}$ and $Q_{v}:=\mathrm{diag}\{Q_{v_{1}},\ldots,Q_{v_{l}}\}$ are the augmented noise covariances. Then, the centralized stability conditions are derived in the following proposition, whose proof appears in the Appendix.

Proposition 1. If the following centralized stability condition is satisfied

$$
\begin{cases}\|K_{C}(k)A(k-1)\|_{2}\leq\lambda<1\\ \|K(k)\|_{2}\leq\eta\end{cases}\quad(\forall k\geq k_{0})
\tag{14}
$$

where $\eta$ is a finite positive number, then the proposed distributed estimator (5) is stable in the sense of mean-square uniform boundedness (8).
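As a numerical illustration of Proposition 1, the covariance recursion (13) can be iterated for a small augmented system whose gain satisfies (14); the resulting $P(k)$ then stays bounded. All matrices below are hypothetical and chosen only for illustration:

```python
import numpy as np

def cov_step(P, K, A, C, Gamma, Qw, D, Qv):
    """One step of the augmented covariance recursion (13)."""
    n = P.shape[0]
    KC = np.eye(n) - K @ C               # K_C(k) = I - K(k)C(k)
    return (KC @ A @ P @ A.T @ KC.T
            + KC @ Gamma @ Qw @ Gamma.T @ KC.T
            + K @ D @ Qv @ D.T @ K.T)

# Hypothetical augmented matrices for two scalar subsystems.
A = np.array([[0.8, 0.1], [0.1, 0.7]])   # A(k): diagonal = A_i, off-diagonal = couplings
C = np.eye(2); D = np.eye(2); Gamma = np.eye(2)
Qw = 0.01 * np.eye(2); Qv = 0.04 * np.eye(2)
K = 0.5 * np.eye(2)                      # block-diagonal gain; here ||K_C(k)A(k-1)||_2 < 1

P = np.eye(2)
for _ in range(200):
    P = cov_step(P, K, A, C, Gamma, Qw, D, Qv)
# With (14) satisfied, P settles to a small bounded steady value.
```

Here $\|K_C A\|_2\approx 0.43<1$, so the recursion contracts and the noise terms determine the ultimate bound, consistent with (8).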

Under the distributed communication structure, the knowledge of dynamics and couplings from the overall system is hard to obtain for local estimators. Therefore, the following theorem provides distributed conditions to ensure the stability for all local estimators.

Theorem 1 (Distributed stability conditions). The following distributed conditions are sufficient to ensure the stability for the proposed distributed estimator (5):

  • C1)

    For each subsystem

    $$
    \begin{cases}\|K_{C_{i}}(k)A_{i}(k-1)\|_{2}\leq\lambda<1\\ \|K_{i}(k)\|_{2}\leq\eta\end{cases}\quad(i\in\mathbb{N}_{l})
    \tag{15}
    $$
  • C2)

    For each pair of neighbors $(i,i_{\kappa}^{\rho})$

    $$
    \epsilon_{i,i_{\kappa}^{\rho}}(k)\epsilon_{i_{\kappa}^{\rho},i}(k)N_{i}(k)-N_{i,i_{\kappa}^{\rho}}(k)N_{i_{\kappa}^{\rho}}^{-1}(k)N_{i,i_{\kappa}^{\rho}}^{\mathrm{T}}(k)\leq 0
    \tag{16}
    $$

where $\eta$ is a finite positive number, while the parameters $\epsilon_{i,j}(k)$ satisfy $\sum_{i_{\kappa}^{\rho}\in\Omega_{i}}\epsilon_{i,i_{\kappa}^{\rho}}(k)=1$ and

$$
\begin{cases}N_{i}(k):=\begin{bmatrix}-\lambda I&K_{C_{i}}(k)A_{i}(k-1)\\ *&-\lambda I\end{bmatrix}\\ N_{i,i_{\kappa}^{\rho}}(k):=\begin{bmatrix}0&K_{C_{i}}(k)A_{i,i_{\kappa}^{\rho}}(k-1)\\ A^{\mathrm{T}}_{i_{\kappa}^{\rho},i}(k-1)K^{\mathrm{T}}_{C_{i_{\kappa}^{\rho}}}(k)&0\end{bmatrix}\end{cases}
\tag{17}
$$

These conditions are equivalent to the following inequalities:

$$
\begin{cases}M_{i,i_{\kappa}^{\sigma}}(k)\leq 0\quad(i\in\mathbb{N}_{l},\ i_{\kappa}^{\sigma}\in\Sigma_{i})\\ \|K_{i}(k)\|_{2}\leq\eta\quad(i\in\mathbb{N}_{l})\end{cases}
\tag{18}
$$

where

$$
M_{i,i_{\kappa}^{\sigma}}(k)\buildrel\Delta\over{=}\begin{bmatrix}\epsilon_{i,i_{\kappa}^{\sigma}}(k)N_{i}(k)&N_{i,i_{\kappa}^{\sigma}}(k)\\ *&\epsilon_{i_{\kappa}^{\sigma},i}(k)N_{i_{\kappa}^{\sigma}}(k)\end{bmatrix}
\tag{19}
$$
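Condition (18) can be checked by each subsystem through a simple eigenvalue test on $M_{i,i_{\kappa}^{\sigma}}(k)$. The sketch below (scalar subsystems; all numeric values hypothetical) assembles $N_{i}(k)$ and $N_{i,i_{\kappa}^{\sigma}}(k)$ from (17) and tests negative semi-definiteness of the block matrix in (19):

```python
import numpy as np

lam = 0.9
# Hypothetical scalar data for a neighbor pair (i, j).
KC_i, A_i = 0.5, 0.8      # K_{C_i}(k) and A_i(k-1)
KC_j, A_j = 0.5, 0.7
A_ij, A_ji = 0.1, 0.2     # coupling blocks A_{i,j}(k-1), A_{j,i}(k-1)
eps_ij = eps_ji = 1.0     # single neighbor, so each eps sum equals 1

def N(KC, A):
    """N_i(k) in (17) for scalar blocks (symmetric by construction)."""
    return np.array([[-lam, KC * A], [KC * A, -lam]])

# N_{i,j}(k) in (17)
N_ij = np.array([[0.0, KC_i * A_ij], [A_ji * KC_j, 0.0]])

# M_{i,j}(k) in (19); the lower-left '*' block is N_{i,j}^T
M = np.block([[eps_ij * N(KC_i, A_i), N_ij],
              [N_ij.T, eps_ji * N(KC_j, A_j)]])

is_nsd = np.max(np.linalg.eigvalsh(M)) <= 1e-12   # True for these values
```

Only the subsystem's own gain and its neighbor's blocks enter $M_{i,i_{\kappa}^{\sigma}}(k)$, which is what makes the test distributed.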

Proof. According to the Schur complement lemma [36], the condition (C2) and the first inequality in the condition (C1) are equivalent to $M_{i,i_{\kappa}^{\rho}}(k)\leq 0$ or $M_{i_{\kappa}^{\rho},i}(k)\leq 0$ for each pair of neighbors $(i,i_{\kappa}^{\rho})$. By the definition of $\Sigma_{i}$ in (4), one can conclude that $M_{i,i_{\kappa}^{\sigma}}(k)\leq 0\ (i\in\mathbb{N}_{l},\ i_{\kappa}^{\sigma}\in\Sigma_{i})$. Then, define the permutation matrix

$$
Q:=\begin{bmatrix}I_{n_{i}}&0&0&0\\ 0&0&I_{n_{i}}&0\\ 0&I_{n_{i_{\kappa}^{\sigma}}}&0&0\\ 0&0&0&I_{n_{i_{\kappa}^{\sigma}}}\end{bmatrix}
\tag{20}
$$

By left and right multiplication of $M_{i,i_{\kappa}^{\sigma}}(k)$ with $Q$ and $Q^{\mathrm{T}}$, the following equivalent inequality is derived:

$$
\hat{M}_{i,i_{\kappa}^{\sigma}}(k)=\begin{bmatrix}U_{i,i_{\kappa}^{\sigma}}(k)&V_{i,i_{\kappa}^{\sigma}}(k)\\ *&U_{i,i_{\kappa}^{\sigma}}(k)\end{bmatrix}\leq 0\quad(i\in\mathbb{N}_{l},\ i_{\kappa}^{\sigma}\in\Sigma_{i})
\tag{21}
$$

where

$$
\begin{cases}U_{i,i_{\kappa}^{\sigma}}(k):=\begin{bmatrix}-\epsilon_{i,i_{\kappa}^{\sigma}}(k)\lambda I&0\\ *&-\epsilon_{i_{\kappa}^{\sigma},i}(k)\lambda I\end{bmatrix}\\ V_{i,i_{\kappa}^{\sigma}}(k):=\begin{bmatrix}\epsilon_{i,i_{\kappa}^{\sigma}}(k)K_{C_{i}}(k)A_{i}(k-1)&K_{C_{i}}(k)A_{i,i_{\kappa}^{\sigma}}(k-1)\\ K_{C_{i_{\kappa}^{\sigma}}}(k)A_{i_{\kappa}^{\sigma},i}(k-1)&\epsilon_{i_{\kappa}^{\sigma},i}(k)K_{C_{i_{\kappa}^{\sigma}}}(k)A_{i_{\kappa}^{\sigma}}(k-1)\end{bmatrix}\end{cases}
\tag{22}
$$

By augmenting all the matrices in (21), one has that

$$
\hat{M}(k)=\mathrm{diag}\{\hat{M}_{1,1_{1}^{\sigma}}(k),\ldots,\hat{M}_{1,1_{\xi_{1}}^{\sigma}}(k),\hat{M}_{2,2_{1}^{\sigma}}(k),\ldots,\hat{M}_{2,2_{\xi_{2}}^{\sigma}}(k),\ldots\}\leq 0
\tag{23}
$$

Then, define the permutation matrix

$$
R:=\mathrm{row}\{e_{1,1_{1}^{\sigma}},\ldots,e_{1,1_{\xi_{1}}^{\sigma}},e_{2,2_{1}^{\sigma}},\ldots,e_{2,2_{\xi_{2}}^{\sigma}},\ldots\}
\tag{24}
$$

where $e_{i,j}:=\begin{bmatrix}e_{i}&e_{j}&0&0\\ 0&0&e_{i}&e_{j}\end{bmatrix}$ and $e_{i}$ is an $n\times n_{i}$ matrix containing all zero elements except an identity matrix of dimension $n_{i}$ at rows $(\sum_{j=1}^{i-1}n_{j}+1):(\sum_{j=1}^{i}n_{j})$. By the property of negative semi-definite matrices, if $\hat{M}(k)\leq 0$, then the matrix obtained by left and right multiplication of $\hat{M}(k)$ with $R$ and $R^{\mathrm{T}}$ is negative semi-definite, i.e.,

$$
\begin{bmatrix}\begin{array}{c|c}-\lambda I&\begin{matrix}K_{C_{1}}(k)A_{1}(k-1)&\cdots&K_{C_{1}}(k)A_{1,l}(k-1)\\ K_{C_{2}}(k)A_{2,1}(k-1)&\cdots&K_{C_{2}}(k)A_{2,l}(k-1)\\ \vdots&\ddots&\vdots\\ K_{C_{l}}(k)A_{l,1}(k-1)&\cdots&K_{C_{l}}(k)A_{l}(k-1)\end{matrix}\\ \hline *&-\lambda I\end{array}\end{bmatrix}\leq 0
\tag{25}
$$

Inequality (25) is equivalent to

$$
\begin{bmatrix}-\lambda I&K_{C}(k)A(k-1)\\ *&-\lambda I\end{bmatrix}\leq 0
\tag{26}
$$

By Schur complement lemma, one has that

$$
\|K_{C}(k)A(k-1)\|_{2}\leq\lambda
\tag{27}
$$
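The step from (26) to (27) is the standard Schur-complement fact that, for $\lambda>0$, the block matrix $\begin{bmatrix}-\lambda I&X\\ *&-\lambda I\end{bmatrix}\leq 0$ holds if and only if $\|X\|_{2}\leq\lambda$. A quick numeric check with a randomly generated matrix (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 0.9

def block(X, lam):
    """Assemble [[-lam*I, X], [X^T, -lam*I]] as in (26)."""
    n = X.shape[0]
    return np.block([[-lam * np.eye(n), X], [X.T, -lam * np.eye(n)]])

def is_nsd(M):
    """Negative semi-definiteness via the largest eigenvalue (M symmetric)."""
    return np.max(np.linalg.eigvalsh(M)) <= 1e-10

X = rng.standard_normal((3, 3))
X_in = X * (0.5 * lam / np.linalg.norm(X, 2))   # ||X_in||_2 = 0.45 < lam
X_out = X * (2.0 * lam / np.linalg.norm(X, 2))  # ||X_out||_2 = 1.80 > lam

ok_in = is_nsd(block(X_in, lam))     # expected: NSD holds
ok_out = is_nsd(block(X_out, lam))   # expected: NSD fails
```

This works because the eigenvalues of the block matrix are $-\lambda\pm\sigma_i(X)$, where $\sigma_i(X)$ are the singular values of $X$.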

According to Proposition 1, inequality (27) and the second inequality in the condition (C1) are sufficient to ensure the mean-square boundedness (8). This completes the proof.

Remark 4. Intuitively, the stability of an isolated subsystem without any couplings is not influenced by its neighboring subsystems, so the local mean-square uniform boundedness condition is enough to ensure its stability. For a subsystem coupled with its neighbors, however, the stability condition is tighter than the local mean-square uniform boundedness condition. Therefore, the distributed stability conditions in Theorem 1 consist of two parts: the condition (C1) ensures that the local estimation error system without interconnection terms is stable, while the condition (C2) is an additional requirement for the stability of systems with coupling relationships.

Remark 5. For the distributed control problem of interconnected systems with known system states, distributed stabilization conditions can be obtained by a derivation similar to that of Theorem 1. When the process dynamics (1) is driven by an additional input term “$B_{i}(k)u_{i}(k)$”, the distributed state feedback controllers “$u_{i}(k)=-\sum_{i_{\kappa}^{\rho}\in\Omega_{i}}K^{u}_{i,i_{\kappa}^{\rho}}(k)x_{i}(k)$” and “$u_{i}(k)=-K^{u}_{i}(k)x_{i}(k)$” can be designed; the distributed stabilization conditions then follow from decomposing the matrix inequality $\|A(k)-B(k)K^{u}(k)\|_{2}\leq\lambda$ with the property of positive definite matrices, where $B(k)$ and $K^{u}(k)$ are the corresponding augmented matrices.

The determination of the parameter $\lambda$ involves a trade-off between the stability and the performance of the estimators: a smaller $\lambda$ provides a larger distributed stability margin but potentially worse estimation performance. On the other hand, the parameter $\eta$ only influences the ultimate bound of the estimators and can be chosen as a large number to avoid degrading estimation performance.

Notice that the above distributed stability conditions require subsystem $\mathbf{S}_{i}$ to know the gain $K_{i_{\kappa}^{\rho}}(k)$ from subsystem $\mathbf{S}_{i_{\kappa}^{\rho}}$ in a timely manner. However, one-step communication delay, which arises naturally in networked environments, is inevitable and needs to be taken into account when the estimator gain information is transmitted over the communication network from neighboring subsystems. To extend the result of Theorem 1 to more general communication environments with one-step transmission delay, the following conditions, which do not require synchronous knowledge of neighboring estimator gains, are proposed.

Corollary 1. For each subsystem, if the following inequalities are satisfied:

{KCi(k)2βiKi(k)2η(il)\begin{cases}&\!\!\!\!\!\!\|K_{C_{i}}(k)\|_{2}\leq\beta_{i}\\ &\!\!\!\!\!\!\|K_{i}(k)\|_{2}\leq\eta\end{cases}\ \ (i\in\mathbb{N}_{l}) (28)

where η\eta is a finite positive number, and βiλαi\beta_{i}\leq\frac{\lambda}{\alpha_{i}} is constrained by

(αiβiλ)(αiκρβiκρλ)βi2αi,iκρ2ϵ¯i,iκρϵ¯iκρ,iiκρΩi\left(\alpha_{i}\beta_{i}-\lambda\right)\left(\alpha_{i_{\kappa}^{\rho}}\beta_{i_{\kappa}^{\rho}}-\lambda\right)\geq\frac{\beta_{i}^{2}\alpha^{2}_{i,i_{\kappa}^{\rho}}}{\bar{\epsilon}_{i,i_{\kappa}^{\rho}}\bar{\epsilon}_{i_{\kappa}^{\rho},i}}\ \ i_{\kappa}^{\rho}\in\Omega_{i} (29)

where the parameters $\bar{\epsilon}_{i,j}$ satisfy $\sum_{i_{\kappa}^{\rho}\in\Omega_{i}}\bar{\epsilon}_{i,i_{\kappa}^{\rho}}=1$, then the proposed distributed estimator is stable in the sense of the mean-square uniform boundedness (8).

Proof. The following upper bounds can be derived from the inequalities in (28):

\begin{cases}&\!\!\!\!\!\!\|K_{C_{i}}(k)A_{i}(k-1)\|_{2}\leq\beta_{i}\alpha_{i}\leq\lambda\\ &\!\!\!\!\!\!\|K_{C_{i}}(k)A_{i,i_{\kappa}^{\rho}}(k-1)\|_{2}\leq\beta_{i}\alpha_{i,i_{\kappa}^{\rho}}\\ &\!\!\!\!\!\!\|K_{C_{i_{\kappa}^{\rho}}}(k)A_{i_{\kappa}^{\rho},i}(k-1)\|_{2}\leq\beta_{i_{\kappa}^{\rho}}\alpha_{i_{\kappa}^{\rho},i}\end{cases} (30)

By the Schur complement lemma, the first inequality in (30) can be converted to

[βiαiIKCi(k)Ai(k1)βiαiI]0\begin{bmatrix}-\beta_{i}\alpha_{i}I&K_{C_{i}}(k)A_{i}(k-1)\\ *&-\beta_{i}\alpha_{i}I\end{bmatrix}\leq 0 (31)

Hence, the following bounds on $N_{i}(k)$ and $N_{i}^{-1}(k)$ can be obtained:

{Ni(k)(βiαiλ)INi1(k)1βiαiλI\begin{cases}&\!\!\!\!\!\!N_{i}(k)\leq\left(\beta_{i}\alpha_{i}-\lambda\right)I\\ &\!\!\!\!\!\!N^{-1}_{i}(k)\geq\frac{1}{\beta_{i}\alpha_{i}-\lambda}I\end{cases} (32)

By the second and the third inequalities of (30), one has the following inequality:

Ni,iκρ(k)Ni,iκρT(k)[βi2αi,iκρ2I00βiκρ2αiκρ,i2I]N_{i,i_{\kappa}^{\rho}}(k)N_{i,i_{\kappa}^{\rho}}^{\mathrm{T}}(k)\leq\begin{bmatrix}\beta_{i}^{2}\alpha^{2}_{i,i_{\kappa}^{\rho}}I&0\\ 0&\beta_{i_{\kappa}^{\rho}}^{2}\alpha^{2}_{i_{\kappa}^{\rho},i}I\end{bmatrix} (33)

Therefore, it can be concluded that

ϵ¯i,iκρϵ¯iκρ,iNi(k)Ni,iκρ(k)Niκρ1(k)Ni,iκρT(k)\displaystyle\bar{\epsilon}_{i,i_{\kappa}^{\rho}}\bar{\epsilon}_{i_{\kappa}^{\rho},i}N_{i}(k)-N_{i,i_{\kappa}^{\rho}}(k)N_{i_{\kappa}^{\rho}}^{-1}(k)N_{i,i_{\kappa}^{\rho}}^{\mathrm{T}}(k) (34)
ϵ¯i,iκρϵ¯iκρ,i(βiαiλ)I\displaystyle\leq\bar{\epsilon}_{i,i_{\kappa}^{\rho}}\bar{\epsilon}_{i_{\kappa}^{\rho},i}\left(\beta_{i}\alpha_{i}-\lambda\right)I
1αiκρβiκρλ[βi2αi,iκρ2I00βiκρ2αiκρ,i2I]\displaystyle\ \ \ \ -\frac{1}{\alpha_{i_{\kappa}^{\rho}}\beta_{i_{\kappa}^{\rho}}-\lambda}\begin{bmatrix}\beta_{i}^{2}\alpha^{2}_{i,i_{\kappa}^{\rho}}I&0\\ 0&\beta_{i_{\kappa}^{\rho}}^{2}\alpha^{2}_{i_{\kappa}^{\rho},i}I\end{bmatrix}

Under the constraints in (29) and taking ϵi,iκρ(k)=ϵ¯i,iκρ\epsilon_{i,i_{\kappa}^{\rho}}(k)=\bar{\epsilon}_{i,i_{\kappa}^{\rho}}, the condition (C2) in Theorem 1 is derived. This completes the proof.

To balance the stability margins of the subsystems, the parameters $\epsilon_{i,j}(k)$ and $\bar{\epsilon}_{i,j}$ should be proportional to the size of the couplings; a feasible parameter selection is given by

{ϵi,j(k)=Ai,j(k1)2+Aj,i(k1)2iκρΩi(Ai,iκρ(k1)2+Aiκρ,i(k1)2)ϵ¯i,j=αi,j+αj,iiκρΩi(αi,iκρ+αiκρ,i)\begin{cases}&\!\!\!\!\!\!\epsilon_{i,j}(k)=\frac{\|A_{i,j}(k-1)\|_{2}+\|A_{j,i}(k-1)\|_{2}}{\sum_{i_{\kappa}^{\rho}\in\Omega_{i}}\left(\|A_{i,i_{\kappa}^{\rho}}(k-1)\|_{2}+\|A_{i_{\kappa}^{\rho},i}(k-1)\|_{2}\right)}\\ &\!\!\!\!\!\!\bar{\epsilon}_{i,j}=\frac{\alpha_{i,j}+\alpha_{j,i}}{\sum_{i_{\kappa}^{\rho}\in\Omega_{i}}\left(\alpha_{i,i_{\kappa}^{\rho}}+\alpha_{i_{\kappa}^{\rho},i}\right)}\end{cases} (35)
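As a quick numerical illustration of the selection rule (35), the following sketch (hypothetical helper and argument names, numpy assumed) computes the weights $\epsilon_{i,j}(k)$ for one subsystem from the coupling matrices of its neighborhood:

```python
import numpy as np

def eps_weights(i, Omega_i, A_coupling):
    # eps_{i,j}(k) of (35): proportional to ||A_{i,j}||_2 + ||A_{j,i}||_2,
    # normalized so the weights over Omega_i sum to one
    def size(p, q):
        return (np.linalg.norm(A_coupling[(p, q)], 2)
                + np.linalg.norm(A_coupling[(q, p)], 2))
    total = sum(size(i, q) for q in Omega_i)
    return {j: size(i, j) / total for j in Omega_i}

# Hypothetical couplings for subsystem 1 with neighbor set {2, 3}
A = {(1, 2): 0.1 * np.eye(2), (2, 1): 0.3 * np.eye(2),
     (1, 3): 0.2 * np.eye(2), (3, 1): 0.2 * np.eye(2)}
eps = eps_weights(1, [2, 3], A)
```

By construction the returned weights satisfy the normalization $\sum_{i_{\kappa}^{\rho}\in\Omega_{i}}\epsilon_{i,i_{\kappa}^{\rho}}(k)=1$ required by the theorem.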

Remark 6. By Corollary 1, the distributed stability conditions are simplified to finding appropriate time-invariant parameters $\beta_{i}$. Unlike the conditions in [33], which incur a heavy communication burden to exchange message matrices among subsystems, the calculation of $\beta_{i}$ in this paper only requires subsystems to exchange the knowledge of $\beta_{i_{\kappa}^{\rho}}$ and $\alpha_{i_{\kappa}^{\rho}}$ with their neighbors. Therefore, this procedure can be carried out offline with little communication overhead. Such a stability result with low communication and computational burden is particularly suitable for time-varying interconnected systems whose couplings and dynamics differ at each instant.

The distributed calculation of $\beta_{i}$ can be implemented as in the following Algorithm 1.

Algorithm 1 Distributed calculation for βi\beta_{i}
1:  for $i:=1$ to $l$ do
2:     if $i\neq 1$ then
3:        Subsystem $\mathbf{S}_{i}$ receives $\beta_{i_{\kappa}^{\rho}}$ and $\alpha_{i_{\kappa}^{\rho}}$ from subsystems $\mathbf{S}_{i_{\kappa}^{\rho}}\ (i_{\kappa}^{\rho}\in\Omega_{i}\cap\mathbb{N}_{i-1})$;
4:     end if
5:     Subsystem $\mathbf{S}_{i}$ calculates $\beta_{i}<\lambda$ that satisfies (29) for each coupling pair $(i,i_{\kappa}^{\rho}),\ i_{\kappa}^{\rho}\in\Omega_{i}\cap\mathbb{N}_{i-1}$;
6:     Subsystem $\mathbf{S}_{i}$ sends the calculated $\beta_{i}$ and $\alpha_{i}$ to subsystems $\mathbf{S}_{i_{\kappa}^{\sigma}}\ (i_{\kappa}^{\sigma}\in\Sigma_{i})$;
7:  end for
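A minimal sketch of Step 5 of Algorithm 1 is given below (hypothetical function and argument names). Given the $(\alpha_{i_{\kappa}^{\rho}},\beta_{i_{\kappa}^{\rho}})$ values already fixed by processed neighbors, each subsystem can find the largest $\beta_{i}\in[0,\lambda/\alpha_{i}]$ satisfying (29) by bisection, since the feasible set of (29) in $\beta_{i}$ is an interval containing zero:

```python
def beta_i(lam, alpha_i, neighbors):
    """Sketch of Step 5 in Algorithm 1 (hypothetical interface).
    neighbors: list of tuples (alpha_j, beta_j, alpha_ij, eps_ij, eps_ji)
    for already-processed neighbors; returns the largest beta_i in
    [0, lam/alpha_i] satisfying (29) for every such pair."""
    def feasible(b):
        return all((alpha_i * b - lam) * (a_j * b_j - lam)
                   >= b ** 2 * a_ij ** 2 / (e_ij * e_ji)
                   for a_j, b_j, a_ij, e_ij, e_ji in neighbors)
    lo, hi = 0.0, lam / alpha_i   # beta_i = 0 is always feasible
    for _ in range(60):           # bisection: the feasible set is [0, b*]
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
    return lo

# Example: one neighbor with alpha_j = 0.9, beta_j = 0.7, coupling norm 0.3,
# and equal weights eps = 0.5 (illustrative numbers)
b1 = beta_i(lam=1.0, alpha_i=0.8, neighbors=[(0.9, 0.7, 0.3, 0.5, 0.5)])
```

The bisection keeps the lower endpoint feasible at every step, so the returned $\beta_{i}$ satisfies (29) for all listed neighbor pairs.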

Remark 7. The small gain theorem for interconnected systems can be stated as follows. Suppose that each local estimation error system (6) satisfies the local mean-square uniform boundedness condition $\|K_{C_{i}}(k)A_{i}(k-1)\|_{2}<1$; then the augmented estimation error system in (12) is stable if the set of small gain conditions $\|A_{i_{1},i_{2}}A_{i_{2},i_{3}}...A_{i_{r},i_{1}}\|<1$ ($1\leq i_{s}\leq l$, $i_{s}\neq i_{s^{\prime}}$ if $s\neq s^{\prime}$) holds for each $r=2,...,l$. The small gain conditions mean that the composition of the coupling matrices along every closed cycle is stable. However, it is hard to apply the small gain theorem to design distributed estimators or controllers for interconnected systems with arbitrary coupling structures. In the distributed estimation problem, feedback is introduced to adjust the size of $K_{C_{i}}(k)A_{i}(k-1)\ (i\in\mathbb{N}_{l})$ in a distributed manner such that the augmented estimation error system is stable, while the coupling matrices $A_{i,i_{\kappa}^{\rho}}(k)$ cannot be adjusted. The small gain theorem requires that $\|A_{i_{1},i_{2}}A_{i_{2},i_{3}}...A_{i_{r},i_{1}}\|<1$, which is not always satisfied and is independent of the estimator design. A natural question is then what distributed conditions a subsystem must satisfy with its neighbors so that the overall system is stable. To address this question, the distributed stability conditions are derived in this paper by decomposing the centralized stability condition $\|K_{C}(k)A(k-1)\|_{2}<\lambda$. The result in Theorem 1 turns out to be a set of matrix inequalities for each pair of neighbors; therefore, each subsystem only needs to satisfy these matrix inequalities with its neighbors, and the stability of the overall system is then ensured.

Remark 8. Compared with stability analysis via Lyapunov functions [1, 27, 28], the proposed distributed stability conditions in Corollary 1 are less conservative in that they do not rely on weak-coupling assumptions. According to the inequality (29), the coupling strength $\alpha_{i,i_{\kappa}^{\rho}}$ can be arbitrarily large as long as the stability parameter $\beta_{i}$ is designed small enough. Moreover, the conditions in Corollary 1 can be directly applied to the distributed estimation problem once the parameters are determined offline by Algorithm 1.

III-B Optimization-based Distributed Estimator

In what follows, we design optimal estimators for time-varying interconnected systems in a distributed way, based on the stability conditions above. First of all, let us define the following matrices:

{𝒟Pi(k):=col{[Pi(k)]1,,[Pi(k)]ni}𝒟P^i(k):=col{[P^i(k)]1,,[P^i(k)]ni}\displaystyle\begin{cases}&\!\!\!\!\!\!\!\mathcal{D}_{P_{i}}(k):=\mathrm{col}\left\{\sqrt{\left[P_{i}(k)\right]_{1}},...,\sqrt{\left[P_{i}(k)\right]_{n_{i}}}\right\}\\ &\!\!\!\!\!\!\!\mathcal{D}_{\hat{P}_{i}}(k):=\mathrm{col}\left\{\sqrt{\left[\hat{P}_{i}(k)\right]_{1}},...,\sqrt{\left[\hat{P}_{i}(k)\right]_{n_{i}}}\right\}\end{cases} (36)

where P^i(k)\hat{P}_{i}(k) is an upper bound of Pi(k)P_{i}(k) and [Pi(k)]τ\left[P_{i}(k)\right]_{\tau} is the τ\tauth diagonal element of Pi(k)P_{i}(k). Then, the gain design for the proposed distributed estimator (5) is provided in the following Theorem.

Theorem 2. For the time-varying interconnected system (1), the gain matrix $K_{i}^{\mathrm{opt}}(k)$ of the proposed distributed estimator (5) is obtained by minimizing an upper bound of the estimation error covariance while keeping the designed estimator mean-square uniformly bounded, i.e., by solving the following optimization problem:

minKi(k)Tr{G^i(k)}\displaystyle\min_{K_{i}(k)}\mathrm{Tr}\{\hat{G}_{i}(k)\} (37)
s.t.{[G^i(k)KCi(k)P^ip(k)Ki(k)Di(k)QviP^ip(k)0Qvi]<0(18)or(28)\displaystyle\mathrm{s.t.}\ \begin{cases}&\!\!\!\!\!\!\begin{bmatrix}&\!\!\!\!\!\!-\hat{G}_{i}(k)&K_{C_{i}}(k)\hat{P}^{p}_{i}(k)&K_{i}(k)D_{i}(k)Q_{v_{i}}\\ &*&-\hat{P}^{p}_{i}(k)&0\\ &*&*&-Q_{v_{i}}\\ \end{bmatrix}\!\!<\!0\\ &\!\!\!\!\!\!(\ref{E28})\ \ \text{or}\ \ (\ref{E39})\end{cases}

where P^ip(k)\hat{P}^{p}_{i}(k) is an upper bound of one-step prediction error covariance and is calculated as

P^ip(k)=Ai(k1)P^i(k1)AiT(k1)+iκρΩiAi(k1)𝒟P^i(k1)𝒟P^iκρT(k1)Ai,iκρT(k1)+iκρΩiAi,iκρ(k1)𝒟P^iκρ(k1)𝒟P^iT(k1)AiT(k1)+iκ1ρΩiiκ2ρΩi{Ai,iκ1ρ(k1)𝒟P^iκ1ρ(k1)×𝒟P^iκ2ρT(k1)Ai,iκ2ρT(k1)}+Γi(k1)QwiΓiT(k1)\displaystyle\begin{aligned} &\hat{P}^{p}_{i}(k)=A_{i}(k-1)\hat{P}_{i}(k-1)A_{i}^{\mathrm{T}}(k-1)\\ &+\!\sum_{i_{\kappa}^{\rho}\in\Omega_{i}}\!A_{i}(k\!-\!1)\mathcal{D}_{\hat{P}_{i}}(k-1)\mathcal{D}^{\mathrm{T}}_{\hat{P}_{i_{\kappa}^{\rho}}}(k-1)A_{i,i_{\kappa}^{\rho}}^{\mathrm{T}}(k-1)\\ &+\!\sum_{i_{\kappa}^{\rho}\in\Omega_{i}}\!A_{i,i_{\kappa}^{\rho}}(k\!-\!1)\mathcal{D}_{\hat{P}_{i_{\kappa}^{\rho}}}(k-1)\mathcal{D}^{\mathrm{T}}_{\hat{P}_{i}}(k-1)A_{i}^{\mathrm{T}}(k\!-\!1)\\ &+\!\!\sum_{i_{\kappa_{1}}^{\rho}\in\Omega_{i}}\sum_{i_{\kappa_{2}}^{\rho}\in\Omega_{i}}\left\{A_{i,i_{\kappa_{1}}^{\rho}}(k-1)\mathcal{D}_{\hat{P}_{i_{\kappa_{1}}^{\rho}}}(k-1)\right.\\ &\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times\mathcal{D}^{\mathrm{T}}_{\hat{P}_{i_{\kappa_{2}}^{\rho}}}(k-1)A_{i,i_{\kappa_{2}}^{\rho}}^{\mathrm{T}}(k-1)\right\}\\ &+\Gamma_{i}(k-1)Q_{w_{i}}\Gamma_{i}^{\mathrm{T}}(k-1)\end{aligned} (38)

with the upper bound of estimation error covariance P^i(k1)\hat{P}_{i}(k-1) calculated as

P^i(k1)=[IKiopt(k1)Ci(k1)]P^ip(k2)×[IKiopt(k1)Ci(k1)]T+Kiopt(k1)Di(k1)QviDiT(k1)[Kiopt(k1)]T\displaystyle\begin{aligned} &\hat{P}_{i}(k-1)=\left[I-K_{i}^{\mathrm{opt}}(k-1)C_{i}(k-1)\right]\hat{P}_{i}^{p}(k-2)\\ &\times\left[I-K_{i}^{\mathrm{opt}}(k-1)C_{i}(k-1)\right]^{\mathrm{T}}\\ &+K_{i}^{\mathrm{opt}}(k\!-\!1)D_{i}(k\!-\!1)Q_{v_{i}}D_{i}^{\mathrm{T}}(k\!-\!1)\!\!\left[K_{i}^{\mathrm{opt}}(k\!-\!1)\right]^{\mathrm{T}}\end{aligned} (39)
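The covariance propagation in (38) is a straightforward recursion. A compact numpy sketch (hypothetical function names, with the neighborhood passed as pairs $(A_{i,i_{\kappa}^{\rho}},\hat{P}_{i_{\kappa}^{\rho}})$) reads:

```python
import numpy as np

def sqrt_diag(P):
    # D_P of (36): column vector of square roots of the diagonal of P
    return np.sqrt(np.diag(P)).reshape(-1, 1)

def pred_bound(A_i, P_i, couplings, Gamma_i, Q_wi):
    """Upper bound P^p_i(k) of (38); couplings lists (A_ij, P_j) over Omega_i."""
    D_i = sqrt_diag(P_i)
    P = A_i @ P_i @ A_i.T + Gamma_i @ Q_wi @ Gamma_i.T
    for A_ij, P_j in couplings:
        cross = A_i @ D_i @ sqrt_diag(P_j).T @ A_ij.T
        P += cross + cross.T            # the two single sums in (38)
    for A_1, P_1 in couplings:          # the double sum in (38)
        for A_2, P_2 in couplings:
            P += A_1 @ sqrt_diag(P_1) @ sqrt_diag(P_2).T @ A_2.T
    return P

# With an empty neighbor set, (38) reduces to the usual prediction step
A = np.array([[0.2, 0.4], [0.3, 0.2]])
P = 0.1 * np.eye(2)
Pp = pred_bound(A, P, [], np.eye(2), 0.1 * np.eye(2))
```

The illustrative matrices at the end are arbitrary; the check below only confirms the uncoupled special case $A_{i}\hat{P}_{i}A_{i}^{\mathrm{T}}+\Gamma_{i}Q_{w_{i}}\Gamma_{i}^{\mathrm{T}}$.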

Proof. The cross-covariances $P_{i,j}(k)$ among subsystems are difficult to calculate online via local communication, which means that direct calculation of the one-step prediction error covariance $P^{p}_{i}(k)$ in (7) is not feasible. Therefore, an upper bound of the estimation error covariance $\hat{P}_{i}(k)\geq P_{i}(k)$ is constructed and used for the gain design problem. Let $\left[\tilde{x}_{i}(k)\right]_{\tau_{1}}\in\mathbb{R}$ be the $\tau_{1}$th component of $\tilde{x}_{i}(k)$, and let $\left[\tilde{x}_{j}(k)\right]_{\tau_{2}}$ be the $\tau_{2}$th component of $\tilde{x}_{j}(k)$. By resorting to the well-known Hölder inequality, one has that

\mathrm{E}\left\{\left[\tilde{x}_{i}(k)\right]_{\tau_{1}}\left[\tilde{x}_{j}(k)\right]_{\tau_{2}}\right\}\leq\mathrm{E}\left\{\left\lvert\left[\tilde{x}_{i}(k)\right]_{\tau_{1}}\left[\tilde{x}_{j}(k)\right]_{\tau_{2}}\right\rvert\right\}\leq\sqrt{\mathrm{E}\left\{\left[\tilde{x}_{i}(k)\right]_{\tau_{1}}^{2}\right\}}\sqrt{\mathrm{E}\left\{\left[\tilde{x}_{j}(k)\right]_{\tau_{2}}^{2}\right\}} (40)

Thus, the following upper bound of Pi,j(k)P_{i,j}(k) is derived:

Pi,j(k)𝒟Pi(k)𝒟PjT(k)P_{i,j}(k)\leq\mathcal{D}_{P_{i}}(k)\mathcal{D}^{\mathrm{T}}_{P_{j}}(k) (41)
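The bound (41) is an elementwise Cauchy–Schwarz (Hölder with $p=q=2$) estimate. A quick empirical check with sampled correlated vectors (numpy assumed, illustrative dimensions) is:

```python
import numpy as np

rng = np.random.default_rng(1)
# Sample two correlated zero-mean error vectors and compare the empirical
# cross-covariance with the elementwise bound (41)
L = rng.standard_normal((4, 4))
X = rng.standard_normal((10000, 4)) @ L.T
xi, xj = X[:, :2], X[:, 2:]          # roles of x_tilde_i and x_tilde_j
P_ij = xi.T @ xj / len(X)
D_i = np.sqrt(np.diag(xi.T @ xi / len(X))).reshape(-1, 1)
D_j = np.sqrt(np.diag(xj.T @ xj / len(X))).reshape(-1, 1)
# Cauchy-Schwarz: |E[x_i x_j]| <= sqrt(E x_i^2) sqrt(E x_j^2), elementwise
assert np.all(np.abs(P_ij) <= D_i @ D_j.T + 1e-9)
```

The inequality holds exactly for the empirical moments as well, which is why the bound is valid without any knowledge of the cross-covariance itself.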

Then, applying the inequality (41) to (7), it turns out that

Pip(k)Ai(k1)Pi(k1)AiT(k1)+iκρΩiAi(k1)𝒟Pi(k1)𝒟PiκρT(k1)Ai,iκρT(k1)+iκρΩiAi,iκρ(k1)𝒟Piκρ(k1)𝒟PiT(k1)AiT(k1)+iκ1ρΩiiκ2ρΩi{Ai,iκ1ρ(k1)𝒟Piκ1ρ(k1)×𝒟Piκ2ρT(k1)Ai,iκ2ρT(k1)}+Γi(k1)QwiΓiT(k1)\displaystyle\begin{aligned} &P_{i}^{p}(k)\leq A_{i}(k-1)P_{i}(k-1)A_{i}^{\mathrm{T}}(k-1)\\ &+\!\sum_{i_{\kappa}^{\rho}\in\Omega_{i}}\!A_{i}(k\!-\!1)\mathcal{D}_{P_{i}}(k-1)\mathcal{D}^{\mathrm{T}}_{P_{i_{\kappa}^{\rho}}}(k-1)A_{i,i_{\kappa}^{\rho}}^{\mathrm{T}}(k-1)\\ &+\!\sum_{i_{\kappa}^{\rho}\in\Omega_{i}}\!\!A_{i,i_{\kappa}^{\rho}}(k\!-\!1)\mathcal{D}_{P_{i_{\kappa}^{\rho}}}(k-1)\mathcal{D}^{\mathrm{T}}_{P_{i}}(k-1)A_{i}^{\mathrm{T}}(k\!-\!1)\\ &+\!\sum_{i_{\kappa_{1}}^{\rho}\in\Omega_{i}}\sum_{i_{\kappa_{2}}^{\rho}\in\Omega_{i}}\left\{A_{i,i_{\kappa_{1}}^{\rho}}(k-1)\mathcal{D}_{P_{i_{\kappa_{1}}^{\rho}}}(k-1)\right.\\ &\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times\mathcal{D}^{\mathrm{T}}_{P_{i_{\kappa_{2}}^{\rho}}}(k-1)A_{i,i_{\kappa_{2}}^{\rho}}^{\mathrm{T}}(k-1)\right\}\\ &+\Gamma_{i}(k-1)Q_{w_{i}}\Gamma_{i}^{\mathrm{T}}(k-1)\\ \end{aligned} (42)

Therefore, an upper bound of Pip(k)P_{i}^{p}(k) is constructed by Pip(k)P^ip(k)P_{i}^{p}(k)\leq\hat{P}_{i}^{p}(k). In this case, an upper bound of local estimation error covariance is derived as

P^i(k)=\displaystyle\hat{P}_{i}(k)= KCi(k)P^ip(k)[KCi(k)]T\displaystyle K_{C_{i}}(k)\hat{P}_{i}^{p}(k)\left[K_{C_{i}}(k)\right]^{\mathrm{T}} (43)
+Ki(k)Di(k)QviDiT(k)KiT(k)\displaystyle+K_{i}(k)D_{i}(k)Q_{v_{i}}D_{i}^{\mathrm{T}}(k)K_{i}^{\mathrm{T}}(k)

Then, it is proposed to construct an upper bound of P^i(k)\hat{P}_{i}(k) as G^i(k)\hat{G}_{i}(k) satisfying

P^i(k)G^i(k)<0\hat{P}_{i}(k)-\hat{G}_{i}(k)<0 (44)

The optimal estimator gain is obtained by minimizing this upper bound $\hat{G}_{i}(k)$, which turns out to be the following optimization problem:

Kiopt(k)=argminKi(k)Tr{G^i(k)}\displaystyle K_{i}^{\mathrm{opt}}(k)=\arg\min_{K_{i}(k)}\mathrm{Tr}\{\hat{G}_{i}(k)\} (45)
s.t.P^i(k)G^i(k)<0\displaystyle\mathrm{s.t.}\ \hat{P}_{i}(k)-\hat{G}_{i}(k)<0

By Schur complement lemma, the inequality constraint in (45) is converted into

\begin{bmatrix}K_{i}(k)D_{i}(k)Q_{v_{i}}D_{i}^{\mathrm{T}}(k)K_{i}^{\mathrm{T}}(k)-\hat{G}_{i}(k)&K_{C_{i}}(k)\\ *&-\left[\hat{P}^{p}_{i}(k)\right]^{-1}\end{bmatrix}<0 (46)

Then, the first inequality constraint in (37) is further derived by using Schur complement lemma again. Adding the distributed stability constraints in Theorem 1 or Corollary 1, the optimization problem in Theorem 2 is formulated. This completes the proof.
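For intuition on the optimization (45): if the stability constraints are dropped, minimizing the trace of (43) over $K_{i}(k)$ has the familiar Kalman-like closed form. The sketch below (numpy, illustrative matrices standing in for $\hat{P}^{p}_{i}(k)$, $C_{i}(k)$, $D_{i}(k)$, $Q_{v_{i}}$) verifies that this gain is not beaten by random perturbations; the LMI formulation in (37) is still needed once the distributed stability constraints are added:

```python
import numpy as np

rng = np.random.default_rng(2)
Pp = np.array([[0.5, 0.1], [0.1, 0.4]])     # stand-in for P^p_i(k)
C = np.array([[1.0, 0.0]])                  # stand-in for C_i(k)
D = np.array([[1.0]]); Qv = np.array([[0.1]])

def cost(K):
    # Trace of the covariance bound (43), with K_C = I - K C
    KC = np.eye(2) - K @ C
    return np.trace(KC @ Pp @ KC.T + K @ D @ Qv @ D.T @ K.T)

# Unconstrained minimizer of the trace of (43): the Kalman-like gain
S = C @ Pp @ C.T + D @ Qv @ D.T
K_star = Pp @ C.T @ np.linalg.inv(S)
```

Since the trace cost is a convex quadratic in $K_{i}(k)$, `K_star` attains the global minimum; the stability constraints (18) or (28) generally shift the optimum away from this unconstrained gain.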

Remark 9. The inequality constraints (18) and (28) can be rewritten in linear matrix inequality form, so the optimization problem in Theorem 2 can be directly solved by the function “mincx” of the MATLAB LMI toolbox [36]. In addition, the information used in the optimization problem (37) comes only from subsystem $\mathbf{S}_{i}$ and its neighbors, and the computational complexity is mainly determined by the dimensions of these subsystems. Therefore, the proposed estimators are recursive and fully distributed, and can be deployed in large-scale interconnected systems with only local communication and computation requirements.

Remark 10. Notice that the works in [19, 20] only provide stability conditions for specific subsystem coupling structures, while the distributed estimators designed in Theorem 2 directly use the newly proposed subsystem-level stability conditions for general interconnected systems. This design methodology, which combines optimality and stability, overcomes the disadvantages of purely local estimator analysis with respect to the stability problem. Moreover, the developed stability conditions enable plug-and-play operations: a newly added subsystem does not influence the stability of the existing subsystems, and its own stability can be ensured by collecting its neighbors’ information. Thus, there is no need to redesign the stability parameters, and this property facilitates the deployment of distributed estimators.

Algorithm 2 Distributed Estimation for Time-varying interconnected systems
1:  if Communication is one-step delayed then
2:     Offline calculation of βi\beta_{i} by Algorithm 1;
3:  end if
4:  for $i:=1$ to $l$ do
5:     Subsystem 𝐒i\mathbf{S}_{i} collects local measurement yi(k)y_{i}(k), neighbors’ estimated states x^iκρ(k1)\hat{x}_{i_{\kappa}^{\rho}}(k-1) and error covariance bounds P^iκρ(k1)\hat{P}_{i_{\kappa}^{\rho}}(k\!-\!1), and Kiκσ(k)(iκσΣi(k1))K_{i_{\kappa}^{\sigma}}(k)\ \ (i_{\kappa}^{\sigma}\in\Sigma_{i}(k-1));
6:     Calculate P^ip(k)\hat{P}^{p}_{i}(k) by (38);
7:     if Communication is one-step delayed then
8:        Determine the estimator gain Ki(k)K_{i}(k) by solving the optimization problem (37) with constraints in (28);
9:     else
10:        Determine the estimator gain Ki(k)K_{i}(k) by solving the optimization problem (37) with constraints in (18);
11:     end if
12:     Calculate P^i(k)\hat{P}_{i}(k) by (39);
13:     Calculate the distributed estimate x^i(k)\hat{x}_{i}(k) by (5);
14:     Subsystem 𝐒i\mathbf{S}_{i} sends the calculated x^i(k)\hat{x}_{i}(k), P^i(k)\hat{P}_{i}(k) (only for Gaussian noise situation) and Ki(k)K_{i}(k) to its neighbors.
15:  end for
16:  Return to Step 4 and implement Steps 4-15 for calculating x^i(k+1)(i=1,2,,l)\hat{x}_{i}(k+1)(i=1,2,...,l).

From Theorem 2, the computational procedure of distributed estimation for general interconnected systems with and without one-step communication delay is summarized in Algorithm 2. For systems with ideal communication, real-time transmission of the estimator gains is feasible and the information required for the inequality constraints in (18) can be obtained in time. However, the stability conditions in (18) no longer work when one-step communication delay is taken into consideration. Instead, offline calculation of $\beta_{i}$ for the inequality constraints in (28) resolves the problem caused by communication delay.

Refer to caption
Figure 2: The couplings among subsystems for an interconnected system.

IV Simulation Examples

Refer to caption
Figure 3: The trajectories of the states and the corresponding estimated values by Theorem 2 for the interconnected system.
Refer to caption
Figure 4: MSE comparison of the distributed estimators in Algorithm 2 with and without communication delay.
Refer to caption
Figure 5: AMSE performance under different coupling strengths.

To illustrate the effectiveness of the proposed distributed estimators, numerical results are reported in this section. Consider the following interconnected system with three subsystems:

𝐒:{x1(k+1)=A1(k)x1(k)+gA1,3x3(k)+Γ1w1(k)x2(k+1)=A2(k)x2(k)+gA2,1x1(k)+Γ2w2(k)x3(k+1)=A3(k)x3(k)+gA3,2x2(k)+Γ3w3(k)y1(k)=C1(k)x1(k)+D1v1(k)y2(k)=C2(k)x2(k)+D2v2(k)y3(k)=C3(k)x3(k)+D3v3(k)\mathbf{S}\!:\!\begin{cases}&\!\!\!\!\!\!x_{1}(k+1)\!=\!A_{1}(k)x_{1}(k)+gA_{1,3}x_{3}(k)+\Gamma_{1}w_{1}(k)\\ &\!\!\!\!\!\!x_{2}(k+1)\!=\!A_{2}(k)x_{2}(k)+gA_{2,1}x_{1}(k)+\Gamma_{2}w_{2}(k)\\ &\!\!\!\!\!\!x_{3}(k+1)\!=\!A_{3}(k)x_{3}(k)+gA_{3,2}x_{2}(k)+\Gamma_{3}w_{3}(k)\\ &\!\!\!\!\!\!y_{1}(k)=C_{1}(k)x_{1}(k)+D_{1}v_{1}(k)\\ &\!\!\!\!\!\!y_{2}(k)=C_{2}(k)x_{2}(k)+D_{2}v_{2}(k)\\ &\!\!\!\!\!\!y_{3}(k)=C_{3}(k)x_{3}(k)+D_{3}v_{3}(k)\end{cases} (47)

where

{A1(k)=[0.20.2+0.2cos(k)0.2+0.1sin(k)0.2]A2(k)=[0.30.1+0.3cos(k)0.2+0.2sin(k)0.2]A3(k)=[0.30.1+0.2sin(k)0.1+0.1cos(k)0.2]C1(k)=[0.3+0.3cos(k)0.4]C2(k)=[0.6+0.2cos(k)0.30.20.7+0.1sin(k)]C3(k)=[0.5+0.1sin(k)0.30.10.7+0.1cos(k)]A1,3=A2,1=A3,2=[0.1000.1]\begin{cases}&\!\!\!\!\!\!A_{1}(k)=\begin{bmatrix}0.2&0.2+0.2\cos(k)\\ 0.2+0.1\sin(k)&0.2\end{bmatrix}\\ &\!\!\!\!\!\!A_{2}(k)=\begin{bmatrix}0.3&0.1+0.3\cos(k)\\ 0.2+0.2\sin(k)&0.2\end{bmatrix}\\ &\!\!\!\!\!\!A_{3}(k)=\begin{bmatrix}0.3&0.1+0.2\sin(k)\\ 0.1+0.1\cos(k)&0.2\end{bmatrix}\\ &\!\!\!\!\!\!C_{1}(k)=\begin{bmatrix}0.3+0.3\cos(k)&0.4\end{bmatrix}\\ &\!\!\!\!\!\!C_{2}(k)=\begin{bmatrix}0.6+0.2\cos(k)&0.3\\ 0.2&0.7+0.1\sin(k)\end{bmatrix}\\ &\!\!\!\!\!\!C_{3}(k)=\begin{bmatrix}0.5+0.1\sin(k)&0.3\\ 0.1&0.7+0.1\cos(k)\end{bmatrix}\\ &\!\!\!\!\!\!A_{1,3}=A_{2,1}=A_{3,2}=\begin{bmatrix}0.1&0\\ 0&0.1\end{bmatrix}\end{cases} (48)

and $\Gamma_{i}$ and $D_{i}$ are identity matrices. The parameter $g$ is used to adjust the strength of the couplings, and the coupling structure is described in Fig. 2. The process noises $w_{i}(k)\ (i\in\{1,2,3\})$ are Gaussian with covariance $\mathrm{diag}\{0.1,0.1\}$, and the measurement noises $v_{i}(k)\ (i=1,2,3)$ are Gaussian with covariances $0.1$, $\mathrm{diag}\{0.1,0.1\}$ and $\mathrm{diag}\{0.1,0.1\}$, respectively. The values of $\beta_{i}$ are calculated for different coupling strengths by tuning the parameter $g$, and the results are shown in Table I.
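For reproducibility, the open-loop dynamics (47)–(48) can be simulated directly. The sketch below (numpy, with an assumed zero initial state and a fixed seed) propagates the three coupled subsystems for $g=4$ and confirms that the states remain bounded:

```python
import numpy as np

def A1(k): return np.array([[0.2, 0.2 + 0.2 * np.cos(k)],
                            [0.2 + 0.1 * np.sin(k), 0.2]])
def A2(k): return np.array([[0.3, 0.1 + 0.3 * np.cos(k)],
                            [0.2 + 0.2 * np.sin(k), 0.2]])
def A3(k): return np.array([[0.3, 0.1 + 0.2 * np.sin(k)],
                            [0.1 + 0.1 * np.cos(k), 0.2]])

g = 4.0
A_c = 0.1 * np.eye(2)                 # A_{1,3} = A_{2,1} = A_{3,2} from (48)
Qw = 0.1 * np.eye(2)
rng = np.random.default_rng(3)
x = [np.zeros(2) for _ in range(3)]   # assumed zero initial states
for k in range(100):
    w = [rng.multivariate_normal(np.zeros(2), Qw) for _ in range(3)]
    # ring coupling of (47): 1 <- 3, 2 <- 1, 3 <- 2
    x = [A1(k) @ x[0] + g * A_c @ x[2] + w[0],
         A2(k) @ x[1] + g * A_c @ x[0] + w[1],
         A3(k) @ x[2] + g * A_c @ x[1] + w[2]]
```

The noise realizations here are of course only one possible sample path; the figures in this section average over such runs.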

TABLE I: Values of βi\beta_{i} under Different Coupling Strength
Coupling Strength gg 0.5 1 1.5 2 2.5 3 3.5 4
β1\beta_{1} 1.08 1.08 1.08 1.08 1.08 1.08 1.08 1.08
β2\beta_{2} 1.21 1.01 0.90 0.81 0.75 0.70 0.66 0.63
β3\beta_{3} 1.84 1.57 1.36 1.19 1.05 0.94 0.86 0.78

As the connections among subsystems become stronger, the calculated values of $\beta_{i}$ decrease and the stability conditions become stricter. This observation is consistent with the fact that the convergence rate and stability margin of the overall system are related to the size of its transition matrix.

Then, the proposed optimization-based distributed estimators are deployed to estimate the states of this interconnected system. Under one-step communication delay, the trajectories of the states and the corresponding estimates obtained by Theorem 2 are plotted in Fig. 3 for $g=4$. As shown in Fig. 3, the proposed distributed estimator tracks the real states well under a large coupling strength with communication delay. To compare the performance of the distributed estimators with and without one-step communication delay, Monte Carlo simulations with 100 runs were performed by randomly varying the realizations of the process and measurement noises. The mean square error (MSE) is introduced to evaluate the performance, where

MSE(k)=s=1Ses(k)2S\mathrm{MSE}(k)=\sum_{s=1}^{S}\frac{\|e_{s}(k)\|^{2}}{S} (49)

with $e_{s}(k)$ being the state estimation error at instant $k$ in the $s$th simulation run. Fig. 4 depicts the MSE comparison for the two cases in Algorithm 2. The result shows that the estimation accuracy remains at a satisfactory level when one-step communication delay is taken into consideration. Moreover, to evaluate the dependence of the performance on the coupling strength, the AMSE (i.e., the asymptotic MSE, defined as the average of the MSE over the whole time interval) is reported in Fig. 5. As the stability constraints become stricter under stronger couplings, the estimation accuracy degrades. This performance degradation is caused by the conservatism of the time-invariant stability parameters.

V Conclusion

In this paper, we presented new results for subsystem-level stability analysis and distributed estimator design for time-varying interconnected systems with arbitrary coupling structures. The proposed distributed stability conditions can ensure mean-square uniform boundedness without the requirement for the knowledge of dynamics and couplings from the overall interconnected systems. Then, the simplified conditions that do not need real-time exchange of subsystems’ gain information were developed for systems with one-step communication delay. Particularly, we showed that the distributed stability conditions do not need any coupling structure assumption and can be easily extended when a new subsystem is added to the original interconnected system. These conditions are applied to distributed estimator design problem for time-varying interconnected systems, and novel optimization-based estimator design approaches were proposed. Notice that the designed estimators are fully distributed, where only local information and the information from neighbors are required for the estimator iteration form and the stability conditions. Finally, an illustrative example was employed to show the effectiveness of the proposed methods.

Several topics for future research are left open. Extending the presented distributed stability conditions to the co-design of distributed estimators and controllers will be important. Another interesting extension is the development of secure estimators for interconnected systems with stability constraints. Due to the frequent information exchange among subsystems and the broadcast nature of the communication medium, practical interconnected systems are vulnerable to various attacks. To prevent system information from being collected by eavesdroppers to generate sophisticated attacks, the design of defense mechanisms is required and will be part of our future work. Meanwhile, the influence of cyber attacks can propagate among subsystems, and how to detect cyber attacks through subsystem cooperation is an important and interesting problem.

VI Appendix

Proof of Proposition 1. From the conditions in (14), an upper bound of KC(k)K_{C}(k) can be derived as

KC(k)2<1+ηδc,\|K_{C}(k)\|_{2}<1+\eta\delta_{c}, (50)

By the augmented estimation error covariance in (13), if P(k)2P(k+1)2\|P(k)\|_{2}\leq\|P(k+1)\|_{2}, then

0\displaystyle 0 (λ21)P(k)2\displaystyle\leq\left(\lambda^{2}-1\right)\|P(k)\|_{2} (51)
+η2δd2Qv2+(1+ηδc)2δγ2Qw2\displaystyle\ \ \ \ +\eta^{2}\delta_{d}^{2}\|Q_{v}\|_{2}+\left(1+\eta\delta_{c}\right)^{2}\delta_{\gamma}^{2}\|Q_{w}\|_{2}

It turns out that

P(k)2η2δd2Qv2+(1+ηδc)2δγ2Qw21λ2:=δp1\|P(k)\|_{2}\leq\frac{\eta^{2}\delta_{d}^{2}\|Q_{v}\|_{2}+\left(1+\eta\delta_{c}\right)^{2}\delta_{\gamma}^{2}\|Q_{w}\|_{2}}{1-\lambda^{2}}:=\delta_{p^{1}} (52)

If P(k1)2P(k)2\|P(k-1)\|_{2}\leq\|P(k)\|_{2}, then P(k1)2δp1\|P(k-1)\|_{2}\leq\delta_{p^{1}} and

P(k)2\displaystyle\|P(k)\|_{2} λ2δp1+η2δd2Qv2+(1+ηδc)2δγ2Qw2\displaystyle\leq\lambda^{2}\delta_{p^{1}}+\eta^{2}\delta_{d}^{2}\|Q_{v}\|_{2}\!+\!\left(1+\eta\delta_{c}\right)^{2}\delta_{\gamma}^{2}\|Q_{w}\|_{2} (53)
:=fp(δp1)\displaystyle:=f_{p}(\delta_{p^{1}})

If P(k1)2P(k)2P(k+1)2\|P(k-1)\|_{2}\geq\|P(k)\|_{2}\geq\|P(k+1)\|_{2} at all instants, then P(k)2P(k1)2P(k0)2:=δp0\|P(k)\|_{2}\leq\|P(k-1)\|_{2}\leq...\leq\|P(k_{0})\|_{2}:=\delta_{p^{0}}. Now, we can conclude that P(k)2\|P(k)\|_{2} is bounded as

P(k)2max{δp1,fp(δp1),δp0}\|P(k)\|_{2}\leq\max\{\delta_{p^{1}},f_{p}(\delta_{p^{1}}),\delta_{p^{0}}\} (54)

By the boundedness of P(k)2\|P(k)\|_{2}, one also has that Pi(k)2\|P_{i}(k)\|_{2} is bounded. This completes the proof.
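The boundedness argument in (51)–(54) rests on the scalar comparison recursion $p(k+1)\leq\lambda^{2}p(k)+c$ with $c=\eta^{2}\delta_{d}^{2}\|Q_{v}\|_{2}+(1+\eta\delta_{c})^{2}\delta_{\gamma}^{2}\|Q_{w}\|_{2}$. A numerical sketch (illustrative constants) confirms that the worst-case recursion converges to the fixed point $c/(1-\lambda^{2})=\delta_{p^{1}}$ of (52):

```python
lam, c = 0.8, 0.5                 # illustrative contraction rate and noise term
delta = c / (1 - lam ** 2)        # fixed point, cf. (52)
p = 10.0                          # large initial covariance norm
traj = []
for _ in range(200):
    p = lam ** 2 * p + c          # worst-case recursion behind (51)-(53)
    traj.append(p)
```

The trajectory never exceeds the maximum of its initial value and the fixed point, matching the bound (54).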

References

  • 1 F. N. Bailey, “The application of Lyapunov’s second method to interconnected systems,” Journal of the Society for Industrial and Applied Mathematics, Series A: Control, vol. 3, no. 3, pp. 443–462, 1965.
  • 2 V. Kekatos and G. B. Giannakis, “Distributed robust power system state estimation,” IEEE Transactions on Power Systems, vol. 28, no. 2, pp. 1617–1626, 2013.
  • 3 Z. Feng, G. Hu, Y. Sun, and J. Soon, “An overview of collaborative robotic manipulation in multi-robot systems,” Annual Reviews in Control, vol. 49, pp. 113–127, 2020.
  • 4 M. Dickison, S. Havlin, and H. E. Stanley, “Epidemics on interconnected networks,” Physical Review. E, Statistical, Nonlinear, and Soft Matter Physics, vol. 85, no. 6, p. 066109, 2012.
  • 5 W. Li, Y. Jia, and J. Du, “State estimation for stochastic complex networks with switching topology,” IEEE Transactions on Automatic Control, vol. 62, no. 12, pp. 6377–6384, 2017.
  • 6 Y. Huang, I. Tienda-Luna, and Y. Wang, “Reverse engineering gene regulatory networks,” IEEE Signal Processing Magazine, vol. 26, no. 1, pp. 76–97, 2009.
  • 7 J. Lian, “Special section on control of complex networked systems (CCNS): Recent results and future trends,” Annual Reviews in Control, vol. 47, pp. 275–277, 2019.
  • 8 C. Kwon and I. Hwang, “Sensing-based distributed state estimation for cooperative multiagent systems,” IEEE Transactions on Automatic Control, vol. 64, no. 6, pp. 2368–2382, 2019.
  • 9 P. Yang, R. A. Freeman, and K. M. Lynch, “Multi-agent coordination by decentralized estimation and control,” IEEE Transactions on Automatic Control, vol. 53, no. 11, pp. 2480–2496, 2008.
  • 10 R. Olfati-Saber, “Distributed Kalman filtering for sensor networks,” in 2007 46th IEEE Conference on Decision and Control, (New Orleans, LA, USA), pp. 5492–5498, IEEE, Dec. 2007.
  • 11 B. Chen, W. A. Zhang, and L. Yu, “Distributed finite-horizon fusion Kalman filtering for bandwidth and energy constrained wireless sensor networks,” IEEE Transactions on Signal Processing, vol. 62, no. 4, pp. 797–812, 2014.
  • 12 W. A. Zhang and L. Shi, “Sequential fusion estimation for clustered sensor networks,” Automatica, vol. 89, pp. 358–363, 2018.
  • 13 C. Sanders, E. Tacker, T. Linton, and R. Ling, “Specific structures for large-scale state estimation algorithms having information exchange,” IEEE Transactions on Automatic Control, vol. 23, no. 2, pp. 255–261, 1978.
  • 14 A. Haber and M. Verhaegen, “Moving horizon estimation for large-scale interconnected systems,” IEEE Transactions on Automatic Control, vol. 58, no. 11, pp. 2834–2847, 2013.
  • 15 U. A. Khan, “Distributing the Kalman filter for large-scale systems,” IEEE Transactions on Signal Processing, vol. 56, no. 10, pp. 4919–4935, 2008.
  • 16 S. S. Stanković, M. S. Stanković, and D. M. Stipanović, “Consensus based overlapping decentralized estimation with missing observations and communication faults,” Automatica, vol. 45, no. 6, pp. 1397–1406, 2009.
  • 17 B. Chen, G. Hu, D. W. C. Ho, and L. Yu, “Distributed Kalman filtering for time-varying discrete sequential systems,” Automatica, vol. 99, pp. 228–236, 2019.
  • 18 B. Chen, G. Hu, D. W. Ho, and L. Yu, “Distributed estimation for discrete-time interconnected systems,” in 2019 Chinese Control Conference (CCC), (Guangzhou, China), pp. 3708–3714, IEEE, July 2019.
  • 19 B. Chen, G. Hu, D. W. C. Ho, and L. Yu, “Distributed estimation and control for discrete time-varying interconnected systems,” IEEE Transactions on Automatic Control, 2021. doi: 10.1109/TAC.2021.3075198.
  • 20 Y. Zhang, B. Chen, L. Yu, and D. W. C. Ho, “Distributed Kalman filtering for interconnected dynamic systems,” IEEE Transactions on Cybernetics, 2021. doi: 10.1109/TCYB.2021.3072198.
  • 21 M. Farina, G. Ferrari-Trecate, and R. Scattolini, “Moving horizon partition-based state estimation of large-scale systems,” Automatica, vol. 46, no. 5, pp. 910–918, 2010.
  • 22 S. Riverso, M. Farina, R. Scattolini, and G. Ferrari-Trecate, “Plug-and-play distributed state estimation for linear systems,” in 52nd IEEE Conference on Decision and Control, (Firenze), pp. 4889–4894, IEEE, 2013.
  • 23 S. Riverso, D. Rubini, and G. Ferrari-Trecate, “Distributed bounded-error state estimation based on practical robust positive invariance,” International Journal of Control, vol. 88, no. 11, pp. 2277–2290, 2015.
  • 24 N. Sandell, P. Varaiya, M. Athans, and M. Safonov, “Survey of decentralized control methods for large scale systems,” IEEE Transactions on Automatic Control, vol. 23, no. 2, pp. 108–128, 1978.
  • 25 A. N. Michel, “On the status of stability of interconnected systems,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 13, no. 4, pp. 439–453, 1983.
  • 26 H. Ito, “A geometrical formulation to unify construction of Lyapunov functions for interconnected iISS systems,” Annual Reviews in Control, vol. 48, pp. 195–208, 2019.
  • 27 A. N. Michel, “Stability analysis of interconnected systems,” SIAM Journal on Control, vol. 12, no. 3, pp. 554–579, 1974.
  • 28 W. M. Haddad and S. G. Nersesov, Stability and Control of Large-Scale Dynamical Systems: A Vector Dissipative Systems Approach. Princeton: Princeton University Press, 2011.
  • 29 S. N. Dashkovskiy, B. S. Rüffer, and F. R. Wirth, “Small gain theorems for large scale systems and construction of ISS Lyapunov functions,” SIAM Journal on Control and Optimization, vol. 48, no. 6, pp. 4089–4118, 2010.
  • 30 H. Ito, “State-dependent scaling problems and stability of interconnected iISS and ISS systems,” IEEE Transactions on Automatic Control, vol. 51, no. 10, pp. 1626–1643, 2006.
  • 31 P. Moylan and D. Hill, “Stability criteria for large-scale systems,” IEEE Transactions on Automatic Control, vol. 23, no. 2, pp. 143–149, 1978.
  • 32 M. Vidyasagar, ed., Input-Output Analysis of Large-Scale Interconnected Systems: Decomposition, Well-Posedness and Stability. Berlin/Heidelberg: Springer-Verlag, 1981.
  • 33 E. Agarwal, S. Sivaranjani, V. Gupta, and P. J. Antsaklis, “Distributed synthesis of local controllers for networked systems with arbitrary interconnection topologies,” IEEE Transactions on Automatic Control, vol. 66, no. 2, pp. 683–698, 2021.
  • 34 A. A. Alam, A. Gattami, and K. H. Johansson, “An experimental study on the fuel reduction potential of heavy duty vehicle platooning,” in 13th International IEEE Conference on Intelligent Transportation Systems, pp. 306–311, Sept. 2010.
  • 35 J. Yang, W. A. Zhang, and F. Guo, “Dynamic state estimation for power networks by distributed unscented information filter,” IEEE Transactions on Smart Grid, vol. 11, no. 3, pp. 2162–2171, 2020.
  • 36 S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, eds., Linear Matrix Inequalities in System and Control Theory. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics, 1994.