Stability of FFLS-based Diffusion Adaptive Filter Under Cooperative Excitation Condition
Abstract
In this paper, we consider the distributed filtering problem over sensor networks, in which all sensors cooperatively track unknown time-varying parameters by using local information. A distributed forgetting factor least squares (FFLS) algorithm is proposed by minimizing a local cost function formulated as a linear combination of accumulated estimation errors. Stability analysis of the algorithm is provided under a cooperative excitation condition which contains spatial union information to reflect the cooperative effect of all sensors. Furthermore, we generalize the theoretical results to the case of Markovian switching directed graphs. The main difficulty of the theoretical analysis lies in how to analyze properties of the product of non-independent and non-stationary random matrices. Some techniques such as stability theory, algebraic graph theory and Markov chain theory are employed to deal with this issue. Our theoretical results are obtained without relying on the independence or stationarity assumptions on regression vectors which are commonly used in the existing literature.
Keywords: Distributed forgetting factor least squares, cooperative excitation condition, exponential stability, stochastic dynamic systems, Markovian switching topology
1 Introduction
Owing to their capability to process data collaboratively, wireless sensor networks (WSNs) have attracted increasing research attention in diverse areas, including consensus seeking [1][2], resource allocation [3][4], and formation control [5][6]. How to design distributed adaptive estimation and filtering algorithms to cooperatively estimate unknown parameters has become one of the most important research topics. Compared with centralized estimation algorithms, where a fusion center is needed to collect and process the information measured by all sensors, distributed algorithms can estimate or track an unknown parameter process of interest cooperatively by using only local noisy measurements. Therefore, distributed algorithms are easier to implement because of their robustness to network link failures, better privacy protection, and reduced communication and computation costs.
Based on classical estimation algorithms and typical distributed strategies such as the incremental, diffusion and consensus strategies, a number of distributed adaptive estimation or filtering algorithms have been investigated (cf., [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]), e.g., the consensus-based least mean squares (LMS), the diffusion Kalman filter (KF), the diffusion least squares (LS), the incremental LMS, the combination of diffusion and consensus stochastic gradient (SG), and the diffusion forgetting factor least squares (FFLS). Performance analysis of these distributed algorithms has also been studied under various information conditions. For deterministic signals or deterministic system matrices, Battistelli and Chisci in [7] provided the mean-square boundedness of the state estimation error of the distributed Kalman filter algorithm under a collectively observable condition. Chen et al. in [8] studied the convergence of a distributed adaptive identification algorithm under a cooperative persistent excitation (PE) condition. Javed et al. in [9] presented stability analysis of the cooperative gradient algorithm for deterministic regression vectors satisfying a cooperative PE condition. Note that the signals are often random since they are generated from dynamic systems affected by noises. For the random regression vector case, Barani et al. in [10] studied the convergence of a distributed stochastic gradient descent algorithm with independent and identically distributed (i.i.d.) signals. Schizas et al. in [11] provided the stability analysis of a distributed LMS-type adaptive algorithm under strictly stationary and ergodic regression vectors. Zhang et al. in [12] studied the mean square performance of a diffusion FFLS algorithm with independent input signals. Takahashi et al. in [13] established the performance analysis of the diffusion LMS algorithm for i.i.d. regression vectors. Lei and Chen in [14] established the convergence analysis of a distributed stochastic approximation algorithm with ergodic system signals. Mateos and Giannakis in [15] presented the stability and performance analysis of the distributed FFLS algorithm under a spatio-temporally white regression vector condition.
We remark that most theoretical results mentioned in the above literature were established by requiring the regression vectors to be either deterministic and satisfy PE conditions, or random and satisfy independence, stationarity or ergodicity conditions. In fact, the observed data are often random and can hardly satisfy the above statistical assumptions, since they are generated by complex dynamic systems where feedback loops inevitably exist (cf., [20]). The main difficulty in the performance analysis of distributed algorithms is to analyze the product of random matrices involved in the estimation error equations. In order to relax the above stringent conditions on random regression vectors, some progress has been made on distributed adaptive estimation and filtering algorithms under undirected graphs. For estimating time-invariant parameters, the convergence analysis of the distributed SG algorithm and the distributed LS algorithm is provided in [21] and [22] under cooperative excitation conditions. For tracking a time-varying parameter, Xie and Guo in [16] and [23] proposed the weakest possible cooperative information conditions to guarantee the stability and performance of consensus-based and diffusion-based LMS algorithms. Compared with the LMS algorithm, the FFLS algorithm can generate more accurate estimates in the transient phase (see e.g., [24]), but stability analysis of the distributed FFLS algorithm is still lacking. In this paper, we focus on the design and stability analysis of a distributed FFLS algorithm without relying on independence, stationarity or ergodicity assumptions on the regression vectors.
The information exchange between sensors is an important factor for the performance of distributed estimation algorithms, and previous studies often assume that the networks are undirected and time-invariant. In practice, communication links might not be bidirectional or time-invariant due to the heterogeneity of sensors and signal losses caused by temporary deterioration of the communication links. One approach is to model the networks which randomly change over time as an i.i.d. process, see e.g., [25, 26]. However, the loss of connection usually occurs with correlations [27]. Another approach is to model the random switching process as a Markov chain whose states correspond to possible communication topologies, see [27, 28, 29, 30] among many others. Some studies on distributed algorithms with deterministic or temporally independent measurement matrices under Markovian switching topologies are given in e.g., [31, 32].
In this paper, we consider the distributed filtering problem over sensor networks where all sensors aim at collectively tracking an unknown randomly time-varying parameter vector. Based on the fact that recent observation data reflect the parameter changes more promptly than earlier data, we introduce a forgetting factor into the local accumulative cost function formulated as a linear combination of local estimation errors between the observation signals and the prediction signals. By minimizing the local cost function, we propose a distributed FFLS algorithm based on the diffusion strategy over a fixed undirected graph. The stability analysis of the distributed FFLS algorithm is provided under a cooperative excitation condition. Moreover, we generalize the theoretical results to the case of Markovian switching directed sensor networks. The key difference from the fixed undirected graph case is that the adjacency matrix becomes an asymmetric random matrix. We employ Markov chain theory to deal with the coupled relationship between random adjacency matrices and random regression vectors. The main contributions of this paper can be summarized in the following aspects:
•
In comparison with [16] and [21], the main difficulty is that the random matrices in the error equation of the diffusion FFLS algorithm are not symmetric and the adaptive gain is no longer a scalar. We establish the exponential stability of the homogeneous part of the estimation error equation and the bound of the tracking error by virtue of the specific structure of the proposed diffusion FFLS algorithm and the stability theory of stochastic dynamic systems.
•
Different from the theoretical results on distributed FFLS algorithms in [12] and [15], where the regression vectors are required to satisfy independence or spatio-temporal uncorrelatedness assumptions, our theoretical analysis is obtained without relying on such stringent conditions, which makes it applicable to stochastic feedback systems.
•
The cooperative excitation condition introduced in this paper is a temporal and spatial union information condition on the random regression vectors, which can reveal the cooperative effect of multiple sensors in a certain sense, i.e., the whole sensor network can cooperatively finish the estimation task even if any individual sensor cannot due to lack of necessary information.
The remainder of this paper is organized as follows. In Section 2, we give the problem formulation of this paper. Section 3 presents the distributed FFLS algorithm. The stability of the proposed algorithm under fixed undirected graph and Markovian switching directed graphs are given in Section 4 and Section 5, respectively. Finally, we conclude the paper with some remarks in Section 6.
2 Problem Formulation
2.1 Matrix theory
In this paper, we use $\mathbb{R}^m$ to denote the set of $m$-dimensional real vectors, $\mathbb{R}^{m \times n}$ to denote the set of real matrices with $m$ rows and $n$ columns, and $I_m$ to denote the $m$-dimensional square identity matrix. For a matrix $A$, $\|A\|$ denotes its Euclidean norm, i.e., $\|A\| \triangleq (\lambda_{\max}(AA^{\top}))^{1/2}$, where the notation $\top$ denotes the transpose operator and $\lambda_{\max}(\cdot)$ denotes the largest eigenvalue of the matrix. Correspondingly, $\lambda_{\min}(\cdot)$ and $\mathrm{tr}(\cdot)$ denote the smallest eigenvalue and the trace of the matrix, respectively. The notation $\mathrm{col}(\cdot, \dots, \cdot)$ is used to denote a vector stacked by the specified vectors, and $\mathrm{diag}(\cdot, \dots, \cdot)$ is used to denote a block matrix formed in a diagonal manner of the corresponding vectors or matrices.
For a matrix $A = (a_{ij}) \in \mathbb{R}^{n \times n}$ with nonnegative entries, if $\sum_{j=1}^{n} a_{ij} = 1$ holds for all $i = 1, \dots, n$, then it is called a stochastic matrix. The Kronecker product of two matrices $A$ and $B$ is denoted by $A \otimes B$. For two real symmetric matrices $A$ and $B$, $A \ge B$ ($A > B$, $A \le B$, $A < B$) means that $A - B$ is a semi-positive (positive, semi-negative, negative) definite matrix. For a matrix sequence $\{A_k\}$ and a positive scalar sequence $\{b_k\}$, the equation $A_k = O(b_k)$ means that there exists a positive constant $C$ independent of $k$ such that $\|A_k\| \le C b_k$ holds for all $k$.
The matrix inversion formula is often used in this paper and we list it as follows.
Lemma 2.1 (Matrix inversion formula [33])
For any matrices $A$, $B$, $C$ and $D$ with suitable dimensions, the following formula
$$(A + BDC)^{-1} = A^{-1} - A^{-1}B\left(D^{-1} + CA^{-1}B\right)^{-1}CA^{-1}$$
holds, provided that the relevant matrices are invertible.
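As a quick sanity check of Lemma 2.1, the following sketch verifies the identity numerically with NumPy; the matrix sizes and the random seed are arbitrary illustrative choices.

```python
import numpy as np

# Numerical check of (A + B D C)^{-1} = A^{-1} - A^{-1} B (D^{-1} + C A^{-1} B)^{-1} C A^{-1}.
rng = np.random.default_rng(0)
m, r = 5, 2
A = np.eye(m) + 0.1 * rng.standard_normal((m, m))   # well-conditioned square matrix
B = rng.standard_normal((m, r))
C = rng.standard_normal((r, m))
D = np.eye(r) + 0.1 * rng.standard_normal((r, r))

lhs = np.linalg.inv(A + B @ D @ C)
Ainv = np.linalg.inv(A)
rhs = Ainv - Ainv @ B @ np.linalg.inv(np.linalg.inv(D) + C @ Ainv @ B) @ C @ Ainv
print(np.allclose(lhs, rhs))   # True (up to floating-point error)
```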
2.2 Graph theory
We use graphs to model the communication topology between sensors. A directed graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathcal{A})$ is composed of a vertex set $\mathcal{V} = \{1, 2, \dots, n\}$ which stands for the set of sensors (i.e., nodes), an edge set $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$, and a weighted adjacency matrix $\mathcal{A} = (a_{ij})_{n \times n}$. A directed edge $(j, i) \in \mathcal{E}$ means that the $i$-th sensor can receive the data from the $j$-th sensor, and sensors $j$ and $i$ are called the parent and child sensors, respectively. The elements of the matrix $\mathcal{A}$ satisfy $a_{ij} > 0$ if $(j, i) \in \mathcal{E}$ and $a_{ij} = 0$ otherwise. The in-degree and out-degree of sensor $i$ are defined by $\deg_{\mathrm{in}}(i) = \sum_{j=1}^{n} a_{ij}$ and $\deg_{\mathrm{out}}(i) = \sum_{j=1}^{n} a_{ji}$, respectively. The digraph $\mathcal{G}$ is called balanced if $\deg_{\mathrm{in}}(i) = \deg_{\mathrm{out}}(i)$ for $i = 1, \dots, n$. Here, we assume that $\mathcal{A}$ is a stochastic matrix. The neighbor set of sensor $i$ is denoted as $N_i = \{j \in \mathcal{V} : (j, i) \in \mathcal{E}\}$, and the sensor $i$ itself is also included in this set. For a given positive integer $k$, the union of $k$ digraphs with the same node set is the digraph whose edge set is the union of their edge sets. A directed path from $i_1$ to $i_l$ consists of a sequence of sensors $i_1, i_2, \dots, i_l$ such that $(i_k, i_{k+1}) \in \mathcal{E}$ for $k = 1, \dots, l-1$. The digraph $\mathcal{G}$ is said to be strongly connected if for any sensor there exist directed paths from this sensor to all other sensors. For the graph $\mathcal{G}$, if $a_{ij} = a_{ji}$ for all $i, j \in \mathcal{V}$, then it is called an undirected graph. The diameter of the undirected graph $\mathcal{G}$ is defined as the maximum shortest length of paths between any two sensors.
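To make the above notions concrete, the following sketch checks whether a weighted adjacency matrix is stochastic, whether the corresponding digraph is balanced, and whether it is strongly connected. The particular 3-sensor ring and its weights are hypothetical and only serve as an example.

```python
import numpy as np

def is_stochastic(A, tol=1e-12):
    """Row sums equal one and entries are nonnegative."""
    return np.all(A >= -tol) and np.allclose(A.sum(axis=1), 1.0)

def is_balanced(A, tol=1e-9):
    """In-degree equals out-degree for every node (row sums vs. column sums)."""
    return np.allclose(A.sum(axis=1), A.sum(axis=0), atol=tol)

def is_strongly_connected(A):
    """Every node reaches every other node: (I + pattern(A))^{n-1} has no zero entry."""
    n = A.shape[0]
    reach = np.linalg.matrix_power((A > 0).astype(float) + np.eye(n), n - 1)
    return bool(np.all(reach > 0))

# A hypothetical 3-sensor directed ring with self-loops (each sensor keeps its own data);
# a_{ij} > 0 means sensor i receives data from sensor j.
A = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])
print(is_stochastic(A), is_balanced(A), is_strongly_connected(A))  # True True True
```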
2.3 Observation model
Consider a network consisting of $n$ sensors (labeled $1, \dots, n$) whose task is to estimate an unknown time-varying parameter by cooperating with each other. We assume that the measurement at sensor $i$ obeys the following discrete-time stochastic regression model,
$$y_{t+1,i} = \varphi_{t,i}^{\top}\theta_t + w_{t+1,i}, \qquad i = 1, \dots, n, \quad t \ge 0, \qquad (1)$$
where $y_{t+1,i}$ is the scalar output of sensor $i$ at time $t+1$, $\varphi_{t,i} \in \mathbb{R}^{m}$ is the random regression vector, $\{w_{t,i}\}$ is a noise process, and $\theta_t$ is the unknown $m$-dimensional time-varying parameter whose variation at time $t$ is denoted by $\omega_t$, i.e.,
$$\theta_t = \theta_{t-1} + \omega_t, \qquad t \ge 1. \qquad (2)$$
Note that when $\omega_t \equiv 0$, $\theta_t$ becomes a constant vector. For the special case where the noise $\{w_{t,i}\}$ is a moving average process and $\varphi_{t,i}$ consists of current and past input-output data, i.e.,
$$\varphi_{t,i} = \mathrm{col}\big(y_{t,i}, \dots, y_{t-p+1,i},\, u_{t,i}, \dots, u_{t-q+1,i}\big),$$
with $u_{t,i}$ being the input signal of sensor $i$ at time $t$, the model (1) reduces to an ARMAX model with time-varying coefficients.
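As an illustration of the observation model (1)-(2), the following sketch generates synthetic data for a small network. The random-walk parameter variation, the Gaussian noise and the particular regressor distribution are illustrative assumptions and are not imposed by the model itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, T = 3, 2, 200                     # number of sensors, parameter dimension, horizon

theta = rng.standard_normal(m)          # initial unknown parameter theta_0
phi = rng.standard_normal((T, n, m))    # random regression vectors phi_{t,i}
y = np.zeros((T, n))

for t in range(T):
    for i in range(n):
        w = 0.1 * rng.standard_normal()            # observation noise w_{t+1,i}
        y[t, i] = phi[t, i] @ theta + w            # model (1): y_{t+1,i} = phi_{t,i}^T theta_t + w_{t+1,i}
    theta = theta + 0.01 * rng.standard_normal(m)  # parameter variation omega (assumed random walk)
```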
3 The distributed FFLS Algorithm
Tracking a time-varying signal is a fundamental problem in system identification and signal processing. The well-known recursive least squares estimator with a constant forgetting factor is often used to track time-varying parameters, which is defined by
$$\hat{\theta}_{t,i} = \arg\min_{\theta} \sum_{k=0}^{t} \alpha^{t-k} \left(y_{k+1,i} - \varphi_{k,i}^{\top}\theta\right)^2, \qquad 0 < \alpha < 1, \qquad (3)$$
where $\alpha$ is the forgetting factor.
With some simple manipulations using the matrix inversion formula, we can obtain the following recursive FFLS algorithm (Algorithm 1) for an individual sensor.
For any given sensor $i$, begin with an initial estimate $\theta_{0,i}$ and an initial positive definite matrix $P_{0,i} > 0$. The standard FFLS is then recursively defined at time $t \ge 0$ as follows.
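For reference, a standard form of this single-sensor FFLS recursion, written in the common convention of [34] and consistent with the criterion (3), is sketched below; $\theta_{t,i}$ denotes sensor $i$'s estimate and $P_{t,i}$ the associated gain matrix.

```latex
\begin{aligned}
\theta_{t+1,i} &= \theta_{t,i}
  + \frac{P_{t,i}\,\varphi_{t,i}}{\alpha + \varphi_{t,i}^{\top} P_{t,i}\,\varphi_{t,i}}
    \left(y_{t+1,i} - \varphi_{t,i}^{\top}\theta_{t,i}\right),\\
P_{t+1,i} &= \frac{1}{\alpha}
  \left(P_{t,i} - \frac{P_{t,i}\,\varphi_{t,i}\varphi_{t,i}^{\top} P_{t,i}}
  {\alpha + \varphi_{t,i}^{\top} P_{t,i}\,\varphi_{t,i}}\right), \qquad 0 < \alpha < 1.
\end{aligned}
```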
However, due to the limited sensing ability of each sensor, it is often the case that the measurements obtained by each sensor can only reflect partial information of the unknown parameter. In such a case, if only the local measurements of the sensor itself are utilized to perform the estimation task (see Algorithm 1), then at most part of the unknown parameter rather than the whole vector can be estimated. Thus, in this paper, we aim at designing a distributed adaptive estimation algorithm such that all sensors cooperatively track the unknown time-varying parameter by using the random regression vectors and observation signals from their neighbors. To simplify the analysis, in this section, we use a fixed undirected graph to model the communication topology of the sensors.
We first introduce the following local cost function for each sensor $i$ at time instant $t$, recursively formulated as a linear combination of its neighbors' local estimation errors between the observation signals and the prediction signals,
(4) |
with . Set
Hence by (4), we have
which implies that
(5) |
where $a^{(k)}_{ij}$ denotes the $i$-th row, $j$-th column entry of the matrix $\mathcal{A}^{k}$.
By minimizing the local cost function in (5), we obtain the distributed FFLS estimate of the unknown time-varying parameter for sensor , i.e.,
(6)
Denote Then we write it into the following recursive form,
(7) |
By (6), we similarly have
(8) |
Note that in the above derivation, we assume that the matrix is invertible, which usually does not hold for small $t$. To solve this problem, we take the initial matrix $P_{0,i}$ to be positive definite. Then (7) can be modified into the following equation,
(9) |
The estimate given by (8) then differs slightly from (6), but this does not affect the analysis of the asymptotic properties of the estimates.
To design the distributed algorithm, we denote
(10) |
By Lemma 2.1, we have . Hence,
Therefore, we get the following distributed FFLS algorithm of diffusion type, i.e., Algorithm 2.
Input: observations and regression vectors $\{y_{t+1,i}, \varphi_{t,i}\}$, $i = 1, \dots, n$; initial values $\theta_{0,i}$ and $P_{0,i} > 0$.
Output: estimates $\{\theta_{t,i},\ t \ge 1\}$, $i = 1, \dots, n$.
(11)
(12)
(13)
(14)
Note that when $\mathcal{A} = I_n$ (i.e., each sensor uses only its own information), the distributed FFLS algorithm degenerates to the classical FFLS (i.e., Algorithm 1), and when $\alpha = 1$, the distributed FFLS algorithm degenerates to the distributed LS in [22], which is used to estimate time-invariant parameters. The quantity $1 - \alpha$ is usually referred to as the speed of adaptation. Intuitively, when the parameter process is slowly time-varying, the adaptation speed should also be slow (i.e., $\alpha$ is large). The purpose of this paper is to establish the stability of the above diffusion FFLS-based adaptive filter without independence or stationarity assumptions on the random regression vectors $\{\varphi_{t,i}\}$.
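To make the adaptation-combination structure of Algorithm 2 concrete, here is a minimal NumPy sketch. Each sensor first runs a local FFLS update and then fuses its neighbors' information with the weights $a_{ij}$; the function name `diffusion_ffls_step` and the information-matrix-weighted fusion rule in the combination step are assumptions modeled on the distributed LS of [22], not necessarily the exact form of (13)-(14).

```python
import numpy as np

def diffusion_ffls_step(theta, P, phi, y, A, alpha=0.95):
    """One step of a diffusion FFLS sketch.
    theta: (n, m) estimates, P: (n, m, m) gain matrices, phi: (n, m) regressors,
    y: (n,) observations, A: (n, n) stochastic combination weights."""
    n, m = theta.shape
    theta_bar = np.zeros_like(theta)
    Pinv_bar = np.zeros_like(P)

    # Adaptation: local forgetting-factor LS update at every sensor.
    for i in range(n):
        denom = alpha + phi[i] @ P[i] @ phi[i]
        gain = P[i] @ phi[i] / denom
        theta_bar[i] = theta[i] + gain * (y[i] - phi[i] @ theta[i])
        P_bar = (P[i] - np.outer(P[i] @ phi[i], phi[i] @ P[i]) / denom) / alpha
        Pinv_bar[i] = np.linalg.inv(P_bar)

    # Combination: fuse neighbors' information matrices and estimates (assumed rule).
    theta_new, P_new = np.zeros_like(theta), np.zeros_like(P)
    for i in range(n):
        Pinv_i = sum(A[i, j] * Pinv_bar[j] for j in range(n))
        P_new[i] = np.linalg.inv(Pinv_i)
        theta_new[i] = P_new[i] @ sum(A[i, j] * Pinv_bar[j] @ theta_bar[j] for j in range(n))
    return theta_new, P_new
```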
In order to analyze the distributed FFLS algorithm, we need to derive the estimation error equation. Denote the estimation error of sensor $i$ at time $t$ by $\widetilde{\theta}_{t,i}$; then from (13) and (14), we have
(15) |
For convenience of analysis, we introduce the following set of notations,
Hence by (15) and (16), we have the following equation about estimation error,
(17) |
From (17), we see that the properties of the product of random matrices play an important role in the stability analysis of the homogeneous part of the error equation.
As is well known, the analysis of products of random matrices is generally a difficult mathematical problem if the random matrices do not satisfy independence or stationarity assumptions. Existing work on this problem focuses on either the symmetric random matrix case or the scalar gain case. For example, [21] and [16] investigated the convergence of the consensus-diffusion SG algorithm and the stability of the consensus normalized LMS algorithm, where the random matrices in the error equations are symmetric. Note that the random matrices here are asymmetric. Although [23] studied the properties of the asymmetric random matrices in the LMS-based estimation error equation, the adaptive gain of the distributed LMS algorithm in [23] is a scalar while the gain in (11) of this paper is a random matrix. Hence the methods used in the existing literature, including [16, 21, 23], are no longer applicable to our case. One of the main purposes of this paper is to overcome the above difficulties by using both the specific structure of the diffusion FFLS and some results on FFLS for the single sensor case (see [34]).
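The following toy example (illustrative only, and unrelated to the specific matrices in (17)) shows why products of asymmetric matrices require care: each factor below is nilpotent, hence individually "stable" in terms of its spectral radius, yet their product has norm larger than one, so factor-wise spectral bounds alone cannot yield exponential stability of the product.

```python
import numpy as np

A1 = np.array([[0.0, 2.0],
               [0.0, 0.0]])   # spectral radius 0 (nilpotent)
A2 = np.array([[0.0, 0.0],
               [2.0, 0.0]])   # spectral radius 0 (nilpotent)

print(max(abs(np.linalg.eigvals(A1))), max(abs(np.linalg.eigvals(A2))))  # 0.0 0.0
print(np.linalg.norm(A1 @ A2, 2))   # 4.0: the product is far from contractive
```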
4 Stability of distributed FFLS algorithm under fixed undirected graph
In this section, we will establish exponential stability for the homogeneous part of the error equation (17) and the tracking error bounds for the proposed distributed FFLS algorithm in Algorithm 2 without requiring statistical independence on the system signals. For this purpose, we need to introduce some definitions on the stability of random matrices (see [34]) and assumptions on the graph and random regression vectors.
4.1 Some definitions
Definition 4.1
A random matrix sequence $\{A_t, t \ge 0\}$ defined on the basic probability space $(\Omega, \mathcal{F}, P)$ is called $L_p$-stable ($p > 0$) if $\sup_{t \ge 0} E\|A_t\|^p < \infty$, where $E(\cdot)$ denotes the mathematical expectation operator. We define $\|A_t\|_{L_p} \triangleq \left(E\|A_t\|^p\right)^{1/p}$ as the $L_p$-norm of the random matrix $A_t$.
Definition 4.2
A sequence of random matrices $A = \{A_t, t \ge 0\}$ is called $L_p$-exponentially stable ($p \ge 0$) with parameter $\lambda \in [0, 1)$ if it belongs to the following set
$$S_p(\lambda) = \left\{A : \Big\|\prod_{j=i+1}^{t} A_j\Big\|_{L_p} \le M\lambda^{t-i},\ \forall t \ge i \ge 0,\ \text{for some } M > 0\right\}. \qquad (18)$$
As demonstrated by Guo in [34], $\{A_t\} \in S_p(\lambda)$ is in some sense the necessary and sufficient condition for the $L_p$-stability of the state sequence generated by the corresponding random linear time-varying difference equation. Also, the stability analysis of the matrix sequence $\{A_t\}$ may be reduced to that of a certain class of scalar sequences, which can be further analyzed based on some excitation conditions on the regressors. To this end, we introduce the following subset $S^0(\lambda)$ of $S_p(\lambda)$ for a scalar sequence $\{a_t, t \ge 0\}$.
This definition will be used when we convert the product of random matrices to the product of a scalar sequence.
Remark 4.1
It is clear that if there exists a constant $a_0 \in (0, 1)$ such that $a_t \le a_0$ for all $t \ge 0$, then $\{a_t\} \in S^0(\lambda)$ for some $\lambda \in (0, 1)$. More properties of the set $S^0(\lambda)$ can be found in [35].
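To see why a uniform bound suffices, a minimal derivation under the stated bound is

```latex
a_t \le a_0 < 1 \ \ \forall t
\;\Longrightarrow\;
E\!\left[\prod_{j=i+1}^{t} a_j\right] \le a_0^{\,t-i},
\qquad \forall t \ge i \ge 0,
```

so the exponential bound required in the definition of $S^0(\lambda)$ holds with $M = 1$ and $\lambda = a_0$.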
4.2 Assumptions
Assumption 4.1
The undirected graph is connected.
Remark 4.2
Assumption 4.2 (Cooperative Excitation Condition)
For the adapted sequences $\{\varphi_{t,i}, \mathcal{F}_t,\ t \ge 0\}$, $i = 1, \dots, n$, where $\{\mathcal{F}_t\}$ is a sequence of non-decreasing $\sigma$-algebras, there exists an integer $h > 0$ such that $\{1 - \lambda_k\} \in S^0(\lambda)$ for some $\lambda \in (0, 1)$, where $\lambda_k$ is defined by
with $E[\,\cdot \mid \mathcal{F}_{kh}]$ being the conditional mathematical expectation operator.
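For orientation, a representative form of the quantity $\lambda_k$ in Assumption 4.2, modeled on the cooperative information conditions used for distributed LMS-type algorithms in [16] and [23] (the exact normalization used in the present paper may differ), is

```latex
\lambda_k \;\triangleq\; \lambda_{\min}\!\left\{
  E\!\left[\frac{1}{n(h+1)}\sum_{i=1}^{n}\;\sum_{t=kh}^{(k+1)h}
  \frac{\varphi_{t,i}\,\varphi_{t,i}^{\top}}{1+\|\varphi_{t,i}\|^{2}}
  \;\middle|\;\mathcal{F}_{kh}\right]\right\}.
```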
Remark 4.3
Assumption 4.2 is also used to guarantee the stability and performance of distributed LMS algorithms (see e.g., [16, 23]). We give some intuitive explanations of the above cooperative excitation condition from the following two aspects.
(1) “Why excitation”. Let us consider an extreme case where all regression vectors are equal to zero; then Assumption 4.2 cannot be satisfied. Moreover, from (1), we see that the unknown parameter cannot be estimated or tracked since the observations do not contain any information about it. In order to estimate the unknown parameter, some nonzero information condition (named an excitation condition) should be imposed on the regression vectors. In fact, Assumption 4.2 intuitively gives a lower bound (which may change over time) on the sequence $\{\lambda_k\}$. For example, if there exists a constant $c > 0$ such that $\lambda_k \ge c$ for all $k$, then by Remark 4.1, we know that Assumption 4.2 is satisfied.
(2) “Why cooperative”. Compared with the excitation condition for the FFLS algorithm in the single sensor case in [34], i.e., there exists a constant $h > 0$ such that
(19) |
for some $\lambda \in (0, 1)$, where
Assumption 4.2 contains not only temporal union information but also spatial union information of all the sensors, which means that Assumption 4.2 is much weaker than the condition (19). Besides, we also note that Assumption 4.2 reduces to the condition (19) when $n = 1$. In fact, Assumption 4.2 can reflect the cooperative effect of multiple sensors in the sense that the estimation task can still be fulfilled by the cooperation of multiple sensors even if none of them can fulfill it individually.
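The following toy sketch illustrates the “why cooperative” point: with two sensors each observing only one coordinate of a two-dimensional parameter (a hypothetical configuration), every individual excitation matrix is singular, while the spatial average over the network is positive definite.

```python
import numpy as np

# Sensor 1 only measures the first coordinate, sensor 2 only the second.
phi_1 = np.array([1.0, 0.0])
phi_2 = np.array([0.0, 1.0])

def excitation(phi):
    """Normalized excitation matrix phi phi^T / (1 + ||phi||^2) of a single sensor."""
    return np.outer(phi, phi) / (1.0 + phi @ phi)

E1, E2 = excitation(phi_1), excitation(phi_2)
print(np.linalg.eigvalsh(E1).min())            # 0.0: sensor 1 alone is not excited
print(np.linalg.eigvalsh(E2).min())            # 0.0: sensor 2 alone is not excited
print(np.linalg.eigvalsh((E1 + E2) / 2).min()) # 0.25: the network is cooperatively excited
```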
4.3 Main results
In order to establish exponential stability of the product of random matrices , we first analyze the properties of the random matrix to obtain its upper bound.
Lemma 4.1
Proof 4.1.
Note that is the -th row, -th column element of the matrix , where . By (10), we have . Hence by the inequality
(21) |
with , we obtain for any , and any ,
(22) |
Denote . Then by (10), (13), (21) and (22), we have for ,
(23) |
By Lemma 2.1 and (23), it follows that
(24) |
Then by (24), we have
Hence combining this with the inequality where and , we obtain that
(25) |
By Remark 4.2, we know that holds for all . Thus, by (25), we have for
(26) |
Summing up both sides of (26) from to , by the definition of , we have
This completes the proof of the lemma.
Before giving the boundedness of the random matrix sequence, we first introduce two lemmas from [34].
Lemma 4.2.
[34] Let , and , where is a positive constant. Then for any , .
Lemma 4.3.
[34] Let be an adapted process, and
where and are two adapted nonnegative processes with the properties:
where and are constants. Then we have
The following lemma proves the boundedness of the random matrix sequence .
Lemma 4.4.
Proof 4.5.
For any , there exists an integer such that
(27) |
By the definition of in Lemma 4.1, it is clear that
(28) |
Hence by Lemma 4.1 and (28), we obtain
(29) |
By the inequality used in (22) it follows that
Hence by (29), we have
(30) |
For , denote
(31) |
where $I(\cdot)$ denotes the indicator function, whose value is 1 if its argument (a formula) is true, and 0 otherwise. Then by (29) and (30), we have
(32)
Denote
By the inequality
and , from the definition of in (28), we can conclude the following inequality,
(33) |
Hence by the definition of in (31),
(34) |
Denote
Then by (32) and (34), we have
(35) |
Since and , we know that with being a positive constant. Denote , then by the definition of , it is clear that . Thus, we obtain that . Similar to the analysis of , we have
(36) |
Hence by the definition of , it follows that
(37) |
By Assumption 4.2 and the fact , applying Lemma 4.2, we obtain . By (37), we see that there exists a positive constant such that
where . Furthermore, by Lemma 4.3, we have , which implies that . This completes the proof.
We then establish the exponential stability of the homogeneous part of the error equation (17).
Theorem 4.6.
Proof 4.7.
Based on Theorem 4.6, we further establish the tracking error bound of Algorithm 2 under some conditions on the noises and parameter variation.
Theorem 4.8.
Proof 4.9.
For convenience of analysis, let the state transition matrix be recursively defined by
(39) |
It is clear that . From the definition of and (10), we have . Then by (17), we have
Hence by Hölder inequality, we have
Hence by Lemma 4.4 and (38), it follows that
where is a positive constant depending on and the upper bounds of , and . This completes the proof.
5 Stability of distributed FFLS algorithm over unreliable directed networks
In Section 4, we studied the stability of the distributed FFLS algorithm under a fixed undirected graph. However, in practical engineering applications, the information exchange between sensors might not be bidirectional. Moreover, it is often affected by uncertain random factors such as distance, obstacles and interference, which may lead to the interruption or reconstruction of communication links. Thus, in this section, we model the communication links between sensors as time-varying, randomly switching directed communication topologies. The switching process is governed by a homogeneous Markov chain whose states belong to a finite set, and each state corresponds to a communication topology graph in a given finite collection. The communication graph switches exactly at the instants when the state of the Markov chain changes, so the adjacency matrix and the neighbor set of each sensor depend on the current state of the chain. For the distributed FFLS algorithm over the Markovian switching directed topologies, we just modify Step 2 in Algorithm 2 as follows:
(40)
(41)
To analyze the stability of algorithm (11), (12), (40), (41), we introduce the following assumptions:
Assumption 5.1
All possible digraphs are balanced and the union of all those digraphs is strongly connected.
Assumption 5.2
The Markov chain is irreducible and aperiodic with transition probability matrix $P = (p_{kl})$, where $p_{kl}$ denotes the conditional probability of switching from state $k$ to state $l$.
According to Markov chain theory (cf., [37]), a discrete-time homogeneous Markov chain with finitely many states is ergodic if and only if it is irreducible and aperiodic. Hence Assumption 5.2 implies that the $k$-step transition matrix $P^k$ has a limit with identical rows as $k \to \infty$.
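As a small illustration of Assumption 5.2 and the switching mechanism, the sketch below simulates a homogeneous Markov chain over a hypothetical set of two balanced, stochastic directed topologies and verifies numerically that the $k$-step transition matrix approaches a matrix with identical rows. The transition matrix `P` and the adjacency matrices in `A_graphs` are illustrative assumptions only.

```python
import numpy as np

# Hypothetical transition probability matrix of an irreducible, aperiodic Markov chain.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# k-step transition matrix converges to a limit with identical rows (ergodicity).
print(np.linalg.matrix_power(P, 100))

# Two hypothetical balanced stochastic adjacency matrices (one per chain state);
# their union is strongly connected. The graph in force at time t is A_graphs[r_t],
# and it switches exactly when the chain changes state.
A_graphs = [np.array([[0.5, 0.5], [0.5, 0.5]]), np.eye(2)]
rng = np.random.default_rng(2)
r = 0
for t in range(10):
    A_t = A_graphs[r]              # topology used at time t
    r = rng.choice(2, p=P[r])      # Markovian switch to the next state
```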
In the following, we will analyze the properties of the strongly connected directed graph. For convenience, we denote the -th row, -th column element of the matrix as .
Lemma 5.1.
Let $\mathcal{G}$ be a strongly connected graph with stochastic adjacency matrix $\mathcal{A}$ whose diagonal entries are positive. Then $\mathcal{A}^{n-1}$ is a positive matrix, i.e., every element of the matrix is positive.
Proof 5.2.
We just prove that the graph corresponding to the matrix is a complete graph. Denote the child node set of the node in graph as . The corresponding child node set of the node in graph is denoted by . For any and , we have
(42) |
Since the graph is strongly connected, if , then there exist two nodes and such that , hence
(43) |
By (42) and (43), it is clear that . Hence for any , we have
(44) |
Since the graph is strongly connected, if , then there exist two nodes and such that , hence
(45) |
By (44) and (45), we can see that . We repeat the above process until . The lemma can be proved by the arbitrariness of the node .
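A quick numerical illustration of Lemma 5.1 is given below, using a hypothetical 4-node strongly connected digraph whose stochastic adjacency matrix has a positive diagonal (matching the convention that each sensor belongs to its own neighbor set); the power $n-1$ of the matrix already has all entries positive.

```python
import numpy as np

# Directed 4-cycle with self-loops: sensor i receives from sensor i-1 and from itself.
n = 4
A = 0.5 * np.eye(n)
for i in range(n):
    A[i, (i - 1) % n] += 0.5       # a_{ij} > 0 means sensor i receives data from sensor j

print(np.allclose(A.sum(axis=1), 1))                  # stochastic: True
print(np.all(np.linalg.matrix_power(A, n - 1) > 0))   # every entry of A^{n-1} is positive: True
```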
Compared with the undirected graph case, the key difference is that the adjacency matrix in this section is an asymmetric and random matrix. Hence we need to deal with the coupled relationship between random adjacency matrices and random regression vectors. By using the above lemma and Markov chain theory, we establish the stability of the algorithm (11), (12), (40), (41) under Markovian switching topology.
Theorem 5.3.
Proof 5.4.
Following the lines of the proof of Theorem 4.8 in Subsection 4.3, it can be seen that we only need to prove that equation (33) holds under the assumptions of the theorem. By Assumption 5.2, there exists a positive integer such that
(46) |
holds for all and all states . Denote . Then the -th row, -th column element of the matrix is denoted by . Following Lemmas 4.1 and 4.4, we may abuse some notations , and
In the following we analyze the term . By (46), we can see that there exists a positive constant such that for all ,
(47) |
with being a $\sigma$-algebra. By (47), we know that the Markov chain can visit all states with positive probability during the corresponding time interval. Hence for , by Assumption 5.1 and Lemma 5.1, there exists a positive constant such that the following inequality holds,
By and , we conclude that
(48) |
From the above analysis, we can obtain the following inequality
The rest of the proof can be obtained by following the proofs of Lemma 4.4, Theorem 4.6 and Theorem 4.8, just replacing the corresponding notation. This completes the proof of Theorem 5.3.
6 Concluding Remarks
This paper proposed a distributed FFLS algorithm to collaboratively track an unknown time-varying parameter by minimizing a local loss function with a forgetting factor. By introducing a spatio-temporal cooperative excitation condition, we established the stability of the proposed distributed FFLS algorithm for the fixed undirected graph case. Then, the theoretical results were generalized to the case of Markovian switching directed graphs. The cooperative excitation condition reveals that the sensors can collaboratively accomplish the tracking task even though no individual sensor can do so alone. We note that our theoretical results are established without using independence or stationarity conditions on the regression vectors, which makes them applicable to stochastic systems with feedback; thus, a relevant research topic is how to combine distributed adaptive estimation with distributed control. How to establish the stability analysis of distributed algorithms for more complex cases, such as those with quantization effects or time-delays in communication channels, is another interesting research topic.
References
- [1] W. Ren and R. Beard, “Consensus seeking in multiagent systems under dynamically changing interaction topologies,” IEEE Transactions on Automatic Control, vol. 50, no. 5, pp. 655–661, 2005.
- [2] Y. Wang, L. Cheng, W. Ren, Z.-G. Hou, and M. Tan, “Seeking consensus in networks of linear agents: Communication noises and markovian switching topologies,” IEEE Transactions on Automatic Control, vol. 60, no. 5, pp. 1374–1379, 2015.
- [3] K. Lu, H. Xu, and Y. Zheng, “Distributed resource allocation via multi-agent systems under time-varying networks,” Automatica, vol. 136, p. 110059, 2022.
- [4] B. Wang, Q. Fei, and Q. Wu, “Distributed time-varying resource allocation optimization based on finite-time consensus approach,” IEEE Control Systems Letters, vol. 5, no. 2, pp. 599–604, 2021.
- [5] Z. Lin, L. Wang, Z. Han, and M. Fu, “Distributed formation control of multi-agent systems using complex laplacian,” IEEE Transactions on Automatic Control, vol. 59, no. 7, pp. 1765–1777, 2014.
- [6] Y. Zhi, L. Liu, B. Guan, B. Wang, Z. Cheng, and H. Fan, “Distributed robust adaptive formation control of fixed-wing uavs with unknown uncertainties and disturbances,” Aerospace Science and Technology, vol. 126, p. 107600, 2022.
- [7] G. Battistelli and L. Chisci, “Kullback-Leibler average, consensus on probability densities, and distributed state estimation with guaranteed stability,” Automatica, vol. 50, no. 3, pp. 707–718, 2014.
- [8] W. Chen, C. Wen, S. Hua, and C. Sun, “Distributed cooperative adaptive identification and control for a group of continuous-time systems with a cooperative pe condition via consensus,” IEEE Transactions on Automatic Control, vol. 59, no. 1, pp. 91–106, 2014.
- [9] M. U. Javed, J. I. Poveda, and X. Chen, “Excitation conditions for uniform exponential stability of the cooperative gradient algorithm over weakly connected digraphs,” IEEE Control Systems Letters, vol. 6, pp. 67–72, 2022.
- [10] F. Barani, A. Savadi, and H. S. Yazdi, “Convergence behavior of diffusion stochastic gradient descent algorithm,” Signal Processing, vol. 183, p. 108014, 2021.
- [11] I. D. Schizas, G. Mateos, and G. B. Giannakis, “Distributed LMS for consensus-based in-network adaptive processing,” IEEE Transactions on Signal Processing, vol. 57, no. 6, pp. 2365–2382, 2009.
- [12] L. Zhang, Y. Cai, C. Li, and R. C. de Lamare, “Variable forgetting factor mechanisms for diffusion recursive least squares algorithm in sensor networks,” EURASIP Journal on Advances in Signal Processing, vol. 57, 2017, doi:10.1186/s13634-017-0490-z.
- [13] N. Takahashi, I. Yamada, and A. H. Sayed, “Diffusion least-mean squares with adaptive combiners: Formulation and performance analysis,” IEEE Transactions on Signal Processing, vol. 58, no. 9, pp. 4795–4810, 2010.
- [14] J. Lei and H. Chen, “Distributed estimation for parameter in heterogeneous linear time-varying models with observations at network sensors,” Communications in Information and Systems, vol. 15, no. 4, pp. 423–451, 2015.
- [15] G. Mateos and G. B. Giannakis, “Distributed recursive least-squares: Stability and performance analysis,” IEEE Transactions on Signal Processing, vol. 60, no. 7, pp. 3740–3754, 2012.
- [16] S. Xie and L. Guo, “Analysis of normalized least mean squares-based consensus adaptive filters under a general information condition,” SIAM Journal on Control and Optimization, vol. 56, no. 5, pp. 3404–3431, 2018.
- [17] D. Gan and Z. Liu, “Performance analysis of the compressed distributed least squares algorithm,” Systems & Control Letters, vol. 164, p. 105228, 2022.
- [18] ——, “Distributed order estimation of arx model under cooperative excitation condition,” SIAM Journal on Control and Optimization, vol. 60, no. 3, pp. 1519–1545, 2022.
- [19] D. Gan, S. Xie, and Z. Liu, “Stability of the distributed Kalman filter using general random coefficients,” Science China Information Sciences, vol. 64, pp. 172 204:1–172 204:14, 2021.
- [20] L. Guo, “Estimation, control, and games of dynamical systems with uncertainty,” SCIENTIA SINICA Informationis, vol. 50, no. 9, pp. 1327–1344, 2020.
- [21] D. Gan and Z. Liu, “Convergence of the distributed SG algorithm under cooperative excitation condition,” IEEE Transactions on Neural Networks and Learning Systems, pp. 1–15, 2022, doi=10.1109/TNNLS.2022.3213715.
- [22] S. Xie, Y. Zhang, and L. Guo, “Convergence of a distributed least squares,” IEEE Transactions on Automatic Control, vol. 66, no. 10, pp. 4952–4959, 2021.
- [23] S. Xie and L. Guo, “Analysis of distributed adaptive filters based on diffusion strategies over sensor networks,” IEEE Transactions on Automatic Control, vol. 63, no. 11, pp. 3643–3658, 2018.
- [24] O. Macchi and E. Eweda, “Compared speed and accuracy of RLS and LMS algorithms with constant forgetting factors,” Traitement Signal, vol. 22, pp. 255–267, 1988.
- [25] Y. Hatano and M. Mesbahi, “Agreement over random networks,” IEEE Transactions on Automatic Control, vol. 50, no. 11, pp. 1867–1872, 2005.
- [26] S. Kar, J. M. F. Moura, and K. Ramanan, “Distributed parameter estimation in sensor networks: Nonlinear observation models and imperfect communication,” IEEE Transactions on Information Theory, vol. 58, no. 6, pp. 3575–3605, 2012.
- [27] I. Matei, N. Martins, and J. S. Baras, “Almost sure convergence to consensus in markovian random graphs,” in Proceedings of the 47th IEEE Conference on Decision and Control, Cancun, Mexico, December 2008, pp. 3535–3540.
- [28] K. You, Z. Li, and L. Xie, “Consensus condition for linear multi-agent systems over randomly switching topologies,” Automatica, vol. 49, no. 10, pp. 3125–3132, 2013.
- [29] Y. Wang, L. Cheng, W. Ren, Z.-G. Hou, and M. Tan, “Seeking consensus in networks of linear agents: Communication noises and markovian switching topologies,” IEEE Transactions on Automatic Control, vol. 60, no. 5, pp. 1374–1379, 2015.
- [30] M. Meng, L. Liu, and G. Feng, “Adaptive output regulation of heterogeneous multiagent systems under markovian switching topologies,” IEEE Transactions on Cybernetics, vol. 48, no. 10, pp. 2962–2971, 2018.
- [31] Q. Zhang and J.-F. Zhang, “Distributed parameter estimation over unreliable networks with markovian switching topologies,” IEEE Transactions on Automatic Control, vol. 57, no. 10, pp. 2545–2560, 2012.
- [32] Q. Liu, Z. Wang, X. He, and D. Zhou, “Event-based distributed filtering over markovian switching topologies,” IEEE Transactions on Automatic Control, vol. 64, no. 4, pp. 1595–1602, 2019.
- [33] G. Zielke, “Inversion of modified symmetric matrices,” Journal of the Association for Computing Machinery, vol. 15, no. 3, pp. 402–408, 1968.
- [34] L. Guo, “Stability of recursive stochastic tracking algorithms,” SIAM Journal on Control and Optimization, vol. 32, no. 5, pp. 1195–1225, 1994.
- [35] ——, Time-varying stochastic systems, stability and adaptive theory, Second edition. Science Press, Beijing, 2020.
- [36] C. Godsil and G. Royle, Algebraic Graph Theory. Springer-Verlag, 2001.
- [37] S. Karlin and H. Taylor, A Second Course in Stochastic Processes. New York: Academic, 1981.