
Networks obtained by Implicit-Explicit Method:
Discrete-time distributed median solver

Jin Gyu Lee (jgl46@cam.ac.uk), Control Group, Department of Engineering, University of Cambridge, United Kingdom. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (Ministry of Science and ICT) (No. 2019R1A6A3A12032482).
Abstract

To make the consensus algorithm robust to outliers, consensus on the median value has recently attracted some attention. It is applicable, for instance, to constructing a resilient distributed state estimator. However, most of the existing works consider continuous-time algorithms and use high gains and discontinuous vector fields. When such algorithms are discretized by the explicit method for practical use, this raises the need for smaller time steps and yields chattering. In this paper, we highlight that these issues vanish when the Implicit-Explicit Method is used instead, for a broader class of networks designed by the blended dynamics approach. In particular, for undirected and connected graphs, we propose a discrete-time distributed median solver that does not suffer from chattering. We also verify by simulation that it requires a smaller number of iterations to arrive at a steady state.

Index Terms:
consensus protocols, resilient consensus, multi-agent systems, blended dynamics

I Introduction

The consensus problem, i.e., the problem of designing a network that yields agreement among the states of its agents, has attracted much attention during the past decades [1]. Such popularity comes from its various uses in, for instance, mobile multi-robot systems and sensor networks, for the purposes of coordination and estimation, respectively. Meanwhile, the consensus value obtained by such couplings is usually designed or obtained as a (weighted) average of initial values or external inputs.

However, considering its applications, namely large networks of cheap robots or sensors, it is hard to assume the reliability of every individual, and such large-scale distributed algorithms should be resilient to faults, outliers, and malicious attacks. In this respect, consensus on the average is inappropriate, as the mean statistic is vulnerable to these abnormalities. What is robust to outliers, on the contrary, is the median statistic.

Motivated by this observation, the problem of designing a network that achieves consensus on the median of initial values or external inputs has recently been tackled [2, 3, 4, 5, 6]. Consensus on the median remains useful in the same manner illustrated earlier. In particular, [5] introduces its application to distributed estimation under malicious attacks.

However, most of the existing works deal with continuous-time algorithms and use high gains and discontinuous vector fields. Therefore, implementing them in a discrete-time framework via the usual explicit method causes trouble. In particular, for stiff dynamics (high gain), the explicit method requires a smaller time step (which depends on the gain). Most importantly, the discontinuous dynamics yield chattering, which stems from the theoretical use of the Filippov solution [7] in continuous time. There, the vector field may take any value in an interval, but in the end it takes a particular value that ensures the existence of a solution. Since such a particular value is usually hard to find, implementing this in a discrete-time framework requires an alternative; otherwise, it requires a sufficiently small time step. One way of resolving these issues is to use the implicit method.

Thus, in this paper, we illustrate that networks constructed by the blended dynamics approach (an approach using strong diffusive coupling), such as the distributed median solver given in [5], can be successfully discretized by the Implicit-Explicit Method, as in [8]. In particular, the obtained network does not suffer from an excess of parameters to tune; unlike the explicit method, there is no need to choose an appropriate time step each time the coupling gain is chosen. We concentrate on the network introduced in [5], as it contains a discontinuity only in the individual vector fields, which makes the application of the Implicit-Explicit Method easier. This also illustrates that the method applies well to the class of networks designed by the well-developed blended dynamics approach [9].

One exception among the previous works that introduces a discrete-time algorithm is [6]. However, it employs an additional layer of network to perform the task. The network proposed in this paper has a smaller dimension in comparison, but instead achieves only approximate consensus.

This paper is organized as follows. In Section II, we briefly introduce the blended dynamics approach and illustrate how such a class of networks can be discretized by the Implicit-Explicit Method. Then, in Section III, we propose our discrete-time distributed median solver following the given outline and prove its convergence. Section IV verifies by simulation its ability to remove chattering, and we conclude in Section V.

II Discretization of networks under strong diffusive coupling

The recently developed blended dynamics approach [9] is based on the observation that strong diffusive coupling makes heterogeneous multi-agent systems behave like a single dynamical system whose vector field is the average of all the individual vector fields in the network [10, 11]. In particular, consider a network given as

\[
\dot{x}_i = f_i(x_i) + k\sum_{j\in\mathcal{N}_i}\alpha_{ij}(x_j - x_i), \quad i\in\mathcal{N},
\]

where $\mathcal{N} := \{1,\dots,N\}$ is the set of agent indices with $N$ the number of agents, and $\mathcal{N}_i$ is a subset of $\mathcal{N}$ whose elements are the indices of the agents that send information to agent $i$. Here, the coefficient $\alpha_{ij}$ is the $ij$-th element of the adjacency matrix that represents the interconnection graph, and we assume hereafter that the graph is undirected and connected. Then, as the coupling gain $k$ approaches infinity, the network achieves arbitrary-precision approximate synchronization, and its synchronized behavior can be characterized by the single dynamics

\[
\dot{\hat{x}} = \frac{1}{N}\sum_{i=1}^{N} f_i(\hat{x}),
\]

which is called the blended dynamics, under the sole assumption that the blended dynamics is stable, e.g., it has the contraction property or has an asymptotically stable limit cycle. The entire theory is based on a singular perturbation argument, and it has wide applicability in network design such as distributed optimization, distributed estimation, and formation control. We refer to [12] for an exhaustive review of the topic.

Now, to avoid the problem of using the explicit method in discretizing stiff dynamics, which in this case arises as the need for smaller time steps for increasing coupling gain $k$ (hence making it harder to perform decentralized design), we propose in this section the following scheme, which makes use of the Implicit-Explicit Method. (In particular, even to ensure stability of the network obtained by the explicit method, the time step has to be selected sufficiently small; this problem does not arise in the following approach, which leaves the coupling gain $k$ as the only global parameter to tune.) Specifically, we discretize in the manner given as

\[
x_i[n+1] - x_i[n] = f_i(x_i[n+1]) + k\sum_{j\in\mathcal{N}_i}\alpha_{ij}\bigl(x_j[n] - x_i[n+1]\bigr),
\]

or equivalently as

\[
x_i[n+1] + k\sum_{j\in\mathcal{N}_i}\alpha_{ij}\, x_i[n+1] - f_i(x_i[n+1]) = x_i[n] + k\sum_{j\in\mathcal{N}_i}\alpha_{ij}\, x_j[n].
\]

Now, this yields the network of form

\[
x_i[n+1] = F_i^k\!\left(x_i[n] + k\sum_{j\in\mathcal{N}_i}\alpha_{ij}\, x_j[n]\right), \quad i\in\mathcal{N},
\]

where $F_i^k(\cdot)$ is the inverse of $(1+kd_i)x - f_i(x)$, with $d_i = \sum_{j\in\mathcal{N}_i}\alpha_{ij}$. By the implicit function theorem, such a function $F_i^k$ is well-defined semi-globally for sufficiently large $k$. By its definition, it satisfies the following identity:

\[
F_i^k\bigl((1+kd_i)x - f_i(x)\bigr) \equiv x.
\]
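To make the scheme concrete, the following is a minimal numerical sketch of one Implicit-Explicit step for scalar agent states, evaluating $F_i^k$ by a scalar root-find instead of a closed form; it assumes each $f_i$ is continuous and bounded by `fbound` so that the root is bracketed, and the function name `imex_step` is an illustrative choice rather than notation from the paper. (For the signum nonlinearity of Section III, a closed-form expression for this inverse is derived instead.)

```python
import numpy as np
from scipy.optimize import brentq

def imex_step(x, f_list, A, k, fbound=1.0):
    """One Implicit-Explicit update for scalar agent states: solves, for each i,
        (1 + k*d_i) * y - f_i(y) = x_i[n] + k * sum_j alpha_ij * x_j[n],
    i.e. evaluates y = F_i^k(.) at the explicitly computed right-hand side.
    Assumes each f_i is continuous with |f_i| <= fbound."""
    d = A.sum(axis=1)                     # d_i = sum_j alpha_ij
    rhs = x + k * (A @ x)                 # explicit (known) coupling term
    x_next = np.empty_like(x)
    for i in range(len(x)):
        g = lambda y, i=i: (1.0 + k * d[i]) * y - f_list[i](y) - rhs[i]
        lo = (rhs[i] - fbound) / (1.0 + k * d[i])   # g(lo) <= 0
        hi = (rhs[i] + fbound) / (1.0 + k * d[i])   # g(hi) >= 0
        x_next[i] = brentq(g, lo, hi)
    return x_next
```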

The utility of this approach will be demonstrated in the particular example of a distributed median solver in Section III.

III Discrete-time distributed median solver

The continuous-time distributed median solver that we want to discretize in the manner illustrated in Section II is motivated by the blended dynamics approach and is given in [5] as

\[
\dot{x}_i = \mathrm{sgn}(o_i - x_i) + k\sum_{j\in\mathcal{N}_i}\alpha_{ij}(x_j - x_i), \tag{1}
\]

where, by increasing $k$ sufficiently large, the network gets approximately synchronized to the median value of a collection $\mathcal{O}$ of real numbers $o_i$, $i=1,\dots,N$. The function $\mathrm{sgn}:\mathbb{R}\to\mathbb{R}$ denotes the signum function defined as $\mathrm{sgn}(s) = s/|s|$ for nonzero $s$ and $\mathrm{sgn}(s) = 0$ for $s = 0$.

In this paper, the median is defined as a real number that belongs to the set

\[
\mathcal{M}_{\mathcal{O}} := \begin{cases} \{o_{(N+1)/2}^{s}\}, & \text{if $N$ is odd}, \\ [o_{N/2}^{s},\, o_{N/2+1}^{s}], & \text{if $N$ is even}, \end{cases}
\]

where the $o_i^s$ are the elements of $\mathcal{O}$ with their indices sorted (rearranged) such that $o_1^s \leq o_2^s \leq \cdots \leq o_N^s$. With the help of this relaxed definition of the median, finding the median of $\mathcal{O}$ becomes solving the simple optimization problem

\[
\operatorname*{minimize}_{x} \; \sum_{i=1}^{N} |o_i - x|.
\]

Then, the gradient descent algorithm given by

\[
\dot{\hat{x}} = \sum_{i=1}^{N} \mathrm{sgn}(o_i - \hat{x})
\]

solves the minimization problem; $\lim_{t\to\infty}\|\hat{x}(t)\|_{\mathcal{M}_{\mathcal{O}}} = 0$, where, for a set $\Xi$, $\|x\|_{\Xi}$ denotes the distance between the vector $x$ and $\Xi$, i.e., $\|x\|_{\Xi} := \inf_{y\in\Xi}\|x - y\|$. This motivates the network (1) according to the blended dynamics approach introduced in Section II.
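As a quick sanity check of this relaxed definition, the sketch below computes the endpoints of $\mathcal{M}_{\mathcal{O}}$ by sorting and verifies that a point of the set attains the minimum of the $\ell_1$ cost; the helper name `median_set` is illustrative and not part of the paper.

```python
import numpy as np

def median_set(o):
    """Return the endpoints (lo, hi) of the relaxed median set M_O."""
    o_sorted = np.sort(np.asarray(o, dtype=float))
    N = len(o_sorted)
    if N % 2 == 1:                              # odd N: a single point
        m = o_sorted[(N + 1) // 2 - 1]          # o^s_{(N+1)/2}, 0-based indexing
        return m, m
    return o_sorted[N // 2 - 1], o_sorted[N // 2]   # even N: an interval

o = [0.0, 1.0, 100.0]                           # the data used in Section IV
lo, hi = median_set(o)                          # here lo = hi = 1.0
cost = lambda x: sum(abs(oi - x) for oi in o)
# points of [lo, hi] minimize the cost; stepping outside the set increases it
assert cost(lo) <= cost(lo - 0.1) and cost(hi) <= cost(hi + 0.1)
```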

In this manner, we discretize the network (1) accordingly as

\[
x_i[n+1] = S_i^k\!\left(x_i[n] + k\sum_{j\in\mathcal{N}_i}\alpha_{ij}\, x_j[n]\right),
\]

where $S_i^k(\cdot)$ is a left inverse of $(1+kd_i)x - \mathrm{sgn}(o_i - x)$. Since

\[
(1+kd_i)x - \mathrm{sgn}(o_i - x) = \begin{cases} (1+kd_i)x - 1, & \text{if } x < o_i, \\ (1+kd_i)x, & \text{if } x = o_i, \\ (1+kd_i)x + 1, & \text{if } x > o_i, \end{cases}
\]

we obtain

\[
S_i^k(x) = \begin{cases} \dfrac{x+1}{1+kd_i}, & \text{if } x < (1+kd_i)o_i - 1, \\[1ex] \dfrac{x-1}{1+kd_i}, & \text{if } x > (1+kd_i)o_i + 1, \\[1ex] \dfrac{x}{1+kd_i}, & \text{otherwise}. \end{cases}
\]

In particular, the network is simply

\[
x_i[n+1] = \begin{cases} \dfrac{1}{1+kd_i}\left[1 + x_i[n] + k\sum_{j\in\mathcal{N}_i}\alpha_{ij}x_j[n]\right], & \text{if } x_i[n] + k\sum_{j\in\mathcal{N}_i}\alpha_{ij}x_j[n] < (1+kd_i)o_i - 1, \\[1.5ex] \dfrac{1}{1+kd_i}\left[-1 + x_i[n] + k\sum_{j\in\mathcal{N}_i}\alpha_{ij}x_j[n]\right], & \text{if } x_i[n] + k\sum_{j\in\mathcal{N}_i}\alpha_{ij}x_j[n] > (1+kd_i)o_i + 1, \\[1.5ex] \dfrac{1}{1+kd_i}\left[x_i[n] + k\sum_{j\in\mathcal{N}_i}\alpha_{ij}x_j[n]\right], & \text{otherwise}, \end{cases} \tag{2}
\]

for $i\in\mathcal{N}$. We have the following convergence result.
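For implementation purposes, a direct transcription of the update (2) reads as follows (a sketch; the function name `median_solver_step` and the array layout are our choices):

```python
import numpy as np

def median_solver_step(x, o, A, k):
    """One iteration of the discrete-time distributed median solver (2).
    x : current states x_i[n] (length-N array)
    o : local data o_i        (length-N array)
    A : symmetric adjacency matrix with entries alpha_ij
    k : coupling gain"""
    d = A.sum(axis=1)                     # d_i = sum_j alpha_ij
    u = x + k * (A @ x)                   # explicitly coupled term of (2)
    x_next = np.empty_like(x)
    for i in range(len(x)):
        if u[i] < (1.0 + k * d[i]) * o[i] - 1.0:
            x_next[i] = (u[i] + 1.0) / (1.0 + k * d[i])
        elif u[i] > (1.0 + k * d[i]) * o[i] + 1.0:
            x_next[i] = (u[i] - 1.0) / (1.0 + k * d[i])
        else:
            x_next[i] = u[i] / (1.0 + k * d[i])
    return x_next
```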

Theorem 1

Under the assumption that the communication graph induced by the adjacency elements $\alpha_{ij}$ is undirected and connected, for any $\epsilon > 0$ there exists $k^* > 0$ such that, for each $k > k^*$ and initial condition $x_i[0]\in\mathbb{R}$, $i\in\mathcal{N}$, the solution to (2) exists for all $n\in\mathbb{N}$ and satisfies

\[
\limsup_{n\to\infty}\bigl\|x_i[n]\bigr\|_{\mathcal{M}_{\mathcal{O}}} \leq \epsilon
\]

for all $i\in\mathcal{N}$. $\square$

Proof:

Note first that the network (2) can be written as

\[
X[n+1] = \mathcal{B}_k X[n] + S[n],
\]

where $X[n] = [x_1[n]\,\cdots\,x_N[n]]^T$, $S[n] = [s_1[n]\,\cdots\,s_N[n]]^T$ with $(1+kd_i)s_i[n]\in\{-1,0,1\}$, and $\mathcal{B}_k$ is a stochastic matrix. Note also that

\[
w_k := \begin{bmatrix} (1+kd_1)/\sum_{i=1}^{N}(1+kd_i) \\ \vdots \\ (1+kd_N)/\sum_{i=1}^{N}(1+kd_i) \end{bmatrix}
\]

is the left eigenvector of $\mathcal{B}_k$ associated with the unique eigenvalue $1$. The uniqueness comes from the connectivity of the graph. Therefore, we obtain

\[
w_k^T X[n+1] = w_k^T X[n] + \frac{\sum_{i=1}^{N}(1+kd_i)s_i[n]}{\sum_{i=1}^{N}(1+kd_i)}
\]

and

\[
\left\|\mathcal{B}_k^n - 1_N w_k^T\right\| < C_k q_k^n
\]

with some $C_k > 0$ and $q_k\in(0,1)$ [13, 14]. In particular, $\lim_{k\to\infty} C_k = \sqrt{\max_i d_i/\min_i d_i} =: C_\infty < \infty$ and $\lim_{k\to\infty} q_k =: q_\infty \in (0,1)$. See Appendix A for its illustration.

Now, since

\[
X[n] = \mathcal{B}_k^n X[0] + \sum_{i=1}^{n}\mathcal{B}_k^{n-i} S[i-1],
\]

we can conclude that

\[
(I_N - 1_N w_k^T)X[n] = (\mathcal{B}_k^n - 1_N w_k^T)X[0] + \sum_{i=1}^{n}(\mathcal{B}_k^{n-i} - 1_N w_k^T)S[i-1].
\]

Therefore,

\[
\begin{aligned}
\left\|(I_N - 1_N w_k^T)X[n]\right\| &\leq \left\|\mathcal{B}_k^n - 1_N w_k^T\right\|\left\|X[0]\right\| + \sum_{i=1}^{n}\left\|\mathcal{B}_k^{n-i} - 1_N w_k^T\right\|\left\|S[i-1]\right\| \\
&\leq C_k q_k^n\left\|X[0]\right\| + \sum_{i=1}^{n} C_k q_k^{n-i}\frac{\sqrt{N}}{1+k\min_i d_i} \\
&\leq C_k q_k^n\left\|X[0]\right\| + \frac{C_k}{1-q_k}\frac{\sqrt{N}}{1+k\min_i d_i}.
\end{aligned}
\]

This implies that for any $\epsilon > 0$, there exists $k^*$ such that for each $k\geq k^*$ and initial condition $X[0]$, there exists $n^*\in\mathbb{N}$ such that

\[
\left\|(I_N - 1_N w_k^T)X[n]\right\| \leq \frac{\epsilon}{3} \tag{3}
\]

for all $n\geq n^*$. Note that $C_k/(1-q_k) > 1$, and thus, we have

\[
\frac{\epsilon}{3} > \frac{1}{1+kd_i}, \quad \forall i\in\mathcal{N},
\]

for such $k$.

Now, since we have proved arbitrary-precision approximate synchronization, let us recall how the averaged variable $w_k^T X[n]$ behaves:

\[
w_k^T X[n+1] = w_k^T X[n] + \frac{\sum_{i=1}^{N}\hat{s}_i[n]}{\sum_{i=1}^{N}(1+kd_i)},
\]

where $\hat{s}_i[n] := (1+kd_i)s_i[n] \in \{-1,0,1\}$. This implies that if the overall balance $\sum_{i=1}^{N}\hat{s}_i[n]$ is positive, then the averaged value increases, while if the balance is negative, then the value decreases. On the other hand, by its construction, $\hat{s}_i[n] = -1$ if and only if (see (2))

\[
x_i[n] + k\sum_{j\in\mathcal{N}_i}\alpha_{ij}x_j[n] > (1+kd_i)o_i + 1,
\]

and thus, for $n\geq n^*$, if

\[
w_k^T X[n] \geq o_i + \frac{2\epsilon}{3},
\]

then by (3), we have

\[
\begin{aligned}
x_i[n] + k\sum_{j\in\mathcal{N}_i}\alpha_{ij}x_j[n] &\geq (1+kd_i)\left(w_k^T X[n] - \frac{\epsilon}{3}\right) \\
&\geq (1+kd_i)\left(o_i + \frac{\epsilon}{3}\right) \\
&> (1+kd_i)o_i + 1,
\end{aligned}
\]

hence $\hat{s}_i[n] = -1$. Similarly, if

\[
w_k^T X[n] \leq o_i - \frac{2\epsilon}{3},
\]

then we have $\hat{s}_i[n] = 1$. This finally implies that if

\[
w_k^T X[n] \geq \max\mathcal{M}_{\mathcal{O}} + \frac{2\epsilon}{3},
\]

then $\sum_{i=1}^{N}\hat{s}_i[n] \leq -1$ (since at least a majority of the agents then satisfy $w_k^T X[n] \geq o_i + 2\epsilon/3$), hence the averaged value decreases by at least $1/\sum_{i=1}^{N}(1+kd_i)$, and similarly, if

\[
w_k^T X[n] \leq \min\mathcal{M}_{\mathcal{O}} - \frac{2\epsilon}{3},
\]

then $\sum_{i=1}^{N}\hat{s}_i[n] \geq 1$, hence the averaged value increases accordingly. Therefore, we can conclude that

\[
\limsup_{n\to\infty}\left\|w_k^T X[n]\right\|_{\mathcal{M}_{\mathcal{O}}} \leq \frac{2\epsilon}{3},
\]

which concludes the proof with the help of (3). ∎

Now, in the next section, we observe in the simulation results a dramatic removal of the chattering phenomenon, compared to the network obtained by the explicit method.

IV Simulation

To compare the chattering phenomenon in the network, we consider a simple network consisting of three agents, where $o_1 = 0$, $o_2 = 1$, and $o_3 = 100$. The communication graph is complete with unit weights, i.e., $\alpha_{ij} = 1$ for all $i\neq j$. The simulation result of the network (2) with $k = 10$ is given in Figure 1.

Figure 1: Simulation result of the network (2) with initial conditions $x_1[0] = 0$, $x_2[0] = 1$, and $x_3[0] = 1.5$.
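The setup of Figure 1 can be reproduced, for instance, with the `median_solver_step` sketch from Section III; the iteration count of 200 below is an arbitrary choice that is more than sufficient for this example.

```python
import numpy as np

# Setup of Section IV: three agents, o = (0, 1, 100),
# complete graph with unit weights, coupling gain k = 10.
o = np.array([0.0, 1.0, 100.0])
A = np.ones((3, 3)) - np.eye(3)
x = np.array([0.0, 1.0, 1.5])              # initial conditions of Figure 1

for n in range(200):
    x = median_solver_step(x, o, A, 10.0)  # update (2), sketched in Section III

print(x)  # all three states settle close to the median value 1, without chattering
```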

On the other hand, if we simulate a network obtained by discretizing (1) with the explicit method, given as

\[
x_i[n+1] = x_i[n] + T_s\,\mathrm{sgn}(o_i - x_i[n]) + kT_s\sum_{j\in\mathcal{N}_i}\alpha_{ij}(x_j[n] - x_i[n]) \tag{4}
\]

for a sufficiently small time step $T_s > 0$, then with $T_s = 0.05$ we get the trajectories illustrated in Figure 2. Slightly increasing the time step to $T_s = 0.07$ already yields unstable trajectories.

Figure 2: Simulation result of the network (4) with $T_s = 0.05$ and initial conditions $x_1[0] = 0$, $x_2[0] = 1$, and $x_3[0] = 1.5$.
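For comparison, the forward-Euler network (4) can be sketched as below; the stability remark in the comments is based on the linear coupling part and is specific to this example graph.

```python
import numpy as np

def explicit_step(x, o, A, k, Ts):
    """One forward-Euler iteration of (1), i.e. the network (4)."""
    d = A.sum(axis=1)
    coupling = k * (A @ x - d * x)        # k * sum_j alpha_ij * (x_j - x_i)
    return x + Ts * np.sign(o - x) + Ts * coupling

o = np.array([0.0, 1.0, 100.0])
A = np.ones((3, 3)) - np.eye(3)
x = np.array([0.0, 1.0, 1.5])
for n in range(2000):
    x = explicit_step(x, o, A, k=10.0, Ts=0.05)
# Near the median the signum term keeps switching by +-Ts each step (chattering),
# and Ts = 0.07 already violates |1 - Ts*k*lambda| <= 1 for the largest Laplacian
# eigenvalue (lambda = 3 for this complete graph), which explains the instability.
```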

Note that we observe not only a larger steady-state error but also the chattering phenomenon. To recover the accuracy of the steady-state limit, we have to employ $T_s = 0.005$, which results in the trajectories given in Figure 3, but this requires a larger number of iterations.

Figure 3: Simulation result of the network (4) with $T_s = 0.005$ and initial conditions $x_1[0] = 0$, $x_2[0] = 1$, and $x_3[0] = 1.5$.

V Conclusion

By studying the particular example of a distributed median solver, we have seen the utility of the Implicit-Explicit Method in discretizing a network constructed by the blended dynamics approach, which, due to the stiffness of its dynamics, suffers under the explicit method from the need for a smaller time step that depends on the coupling gain. Moreover, the method has proven its ability to remove the chattering phenomenon when discretizing a system with a discontinuous vector field. Future work will consider general conclusions on the use of the Implicit-Explicit Method for this class of networks, as well as an analytical verification of its advantages compared with other methods.

References

  • [1] W. Ren and Y. Cao, Distributed coordination of multi-agent networks: emergent problems, models, and issues.   Springer, 2010.
  • [2] M. Franceschelli, A. Giua, and A. Pisano, “Finite-time consensus on the median value by discontinuous control,” in Proceedings of American Control Conference, 2014, pp. 946–951.
  • [3] ——, “Finite-time consensus on the median value with robustness properties,” IEEE Transactions on Automatic Control, vol. 62, no. 4, pp. 1652–1667, 2017.
  • [4] Z. A. Z. S. Dashti, C. Seatzu, and M. Franceschelli, “Dynamic consensus on the median value in open multi-agent systems,” in Proceedings of 58th IEEE Conference on Decision and Control, 2019, pp. 3691–3697.
  • [5] J. G. Lee, J. Kim, and H. Shim, “Fully distributed resilient state estimation based on distributed median solver,” IEEE Transactions on Automatic Control, vol. 65, no. 9, pp. 3935–3942, 2020.
  • [6] G. Vasiljević, T. Petrović, B. Arbanas, and S. Bogdan, “Dynamic median consensus for marine multi-robot systems using acoustic communication,” IEEE Robotics and Automation Letters, vol. 5, no. 4, pp. 5299–5306, 2020.
  • [7] A. F. Filippov, Differential equations with discontinuous righthand sides.   Kluwer Academic Publishers, 1988.
  • [8] X. Wang, J. Zhou, S. Mou, and M. J. Corless, “A distributed algorithm for least squares solutions,” IEEE Transactions on Automatic Control, vol. 64, no. 10, pp. 4217–4222, 2019.
  • [9] J. G. Lee and H. Shim, “A tool for analysis and synthesis of heterogeneous multi-agent systems under rank-deficient coupling,” Automatica, vol. 117, p. 108952, 2020.
  • [10] J. Kim, J. Yang, H. Shim, J.-S. Kim, and J. H. Seo, “Robustness of synchronization of heterogeneous agents by strong coupling and a large number of agents,” IEEE Transactions on Automatic Control, vol. 61, no. 10, pp. 3096–3102, 2016.
  • [11] E. Panteley and A. Loría, “Synchronization and dynamic consensus of heterogeneous networked systems,” IEEE Transactions on Automatic Control, vol. 62, no. 8, pp. 3758–3773, 2017.
  • [12] J. G. Lee and H. Shim, “Design of heterogeneous multi-agent system for distributed computation,” in Trends in Nonlinear and Adaptive Control: A Tribute to Laurent Praly for his 65th Birthday, Lecture Notes in Control and Information Sciences, Springer, 2021, available at arXiv:2101.00161.
  • [13] A. Nedic and A. Ozdaglar, “Distributed subgradient methods for multi-agent optimization,” IEEE Transactions on Automatic Control, vol. 54, no. 1, pp. 48–61, 2009.
  • [14] A. Nedic and J. Liu, “A lyapunov approach to discrete-time linear consensus,” in Proceedings of IEEE Global Conference on Signal and Information Processing, 2014, pp. 842–846.

Appendix A Illustration of $C_k$ and $q_k$ in the proof of Theorem 1

First, define the Laplacian matrix $\mathcal{L} = [l_{ij}] \in \mathbb{R}^{N\times N}$ of a graph as $\mathcal{L} := \mathcal{D} - \mathcal{A}$, where $\mathcal{A} = [\alpha_{ij}]$ is the adjacency matrix of the graph and $\mathcal{D}$ is the diagonal matrix whose diagonal entries are $d_i$, $i\in\mathcal{N}$. By its construction, it contains at least one zero eigenvalue, whose corresponding eigenvector is $1_N := [1,\dots,1]^T \in\mathbb{R}^N$, and all the other eigenvalues have nonnegative real parts. For undirected graphs, the zero eigenvalue is simple if and only if the corresponding graph is connected. Moreover, $I_N - \mathcal{D}^{-1}\mathcal{L}$ has its eigenvalues contained in the closed unit disc, and the eigenvalue with magnitude one is unique if and only if the graph is connected.

Then, we have the representation

\[
I_N - \mathcal{B}_k = \operatorname{diag}\left(\frac{k}{1+kd_1},\dots,\frac{k}{1+kd_N}\right)\mathcal{L} =: \mathcal{D}_k\mathcal{L},
\]

and thus,

\[
I_N - \sqrt{\mathcal{D}_k}^{-1}\mathcal{B}_k\sqrt{\mathcal{D}_k} = \sqrt{\mathcal{D}_k}\,\mathcal{L}\,\sqrt{\mathcal{D}_k} =: \mathcal{L}_k.
\]

Since $\mathcal{L}_k$ is a symmetric positive semi-definite matrix, there exist normalized eigenvectors $v_{1,k},\dots,v_{N,k}$ associated with the eigenvalues $0 < \lambda_{2,k} \leq \dots \leq \lambda_{N,k}$ such that

\[
\mathcal{L}_k v_{i,k} = \lambda_{i,k} v_{i,k} \quad\text{ and }\quad v_{i,k}^T\mathcal{L}_k = \lambda_{i,k} v_{i,k}^T
\]

for $i = 2,\dots,N$, and $\mathcal{L}_k v_{1,k} = 0$, $v_{1,k}^T\mathcal{L}_k = 0$. This implies that

\[
\sqrt{\mathcal{D}_k}^{-1}\mathcal{B}_k\sqrt{\mathcal{D}_k} = \mathcal{V}_k\operatorname{diag}(1,\, 1-\lambda_{2,k},\dots,1-\lambda_{N,k})\mathcal{V}_k^T,
\]

where $\mathcal{V}_k = [v_{1,k}\,\cdots\,v_{N,k}]$. Therefore, by noting that $v_{1,k}$ is the normalized vector of $\sqrt{\mathcal{D}_k}^{-1}1_N$, hence

\[
\sqrt{\mathcal{D}_k}\,\mathcal{V}_k\operatorname{diag}(1,0,\dots,0)\mathcal{V}_k^T\sqrt{\mathcal{D}_k}^{-1} = 1_N w_k^T,
\]

we can conclude that

\[
\mathcal{B}_k^n - 1_N w_k^T = \sqrt{\mathcal{D}_k}\,\mathcal{V}_k\operatorname{diag}\bigl(0,\,(1-\lambda_{2,k})^n,\dots,(1-\lambda_{N,k})^n\bigr)\mathcal{V}_k^T\sqrt{\mathcal{D}_k}^{-1}.
\]

This finally implies

\[
\left\|\mathcal{B}_k^n - 1_N w_k^T\right\| \leq C_k q_k^n,
\]

where

\[
\begin{aligned}
C_k &:= \left\|\sqrt{\mathcal{D}_k}\right\|\left\|\sqrt{\mathcal{D}_k}^{-1}\right\| = \sqrt{\frac{1+k\max_i d_i}{1+k\min_i d_i}}, \\
q_k &:= \max\bigl\{\left|1-\lambda_{2,k}\right|,\, \left|1-\lambda_{N,k}\right|\bigr\}.
\end{aligned}
\]

Now, noting that $\mathcal{L}_k$ has the same set of eigenvalues as the matrix $\mathcal{D}_k\mathcal{L}$, the Gershgorin circle theorem certifies that $\lambda_{i,k}\in[0,2)$, hence $q_k < 1$. In particular, we have

\[
\begin{aligned}
\lim_{k\to\infty} C_k &= \sqrt{\frac{\max_i d_i}{\min_i d_i}}, \\
\lim_{k\to\infty} q_k &= \max\bigl\{\left|1-\lambda_{2,\infty}\right|,\, \left|1-\lambda_{N,\infty}\right|\bigr\},
\end{aligned}
\]

where $0 < \lambda_{2,\infty}\leq\cdots\leq\lambda_{N,\infty} < 2$ are the nonzero eigenvalues of

\[
\operatorname{diag}(1/d_1,\dots,1/d_N)\,\mathcal{L} = \mathcal{D}_\infty\mathcal{L} = \mathcal{D}^{-1}\mathcal{L}.
\]
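As a numerical cross-check of these quantities, one may form $\mathcal{L}_k$ directly and evaluate $C_k$ and $q_k$ as in the bound used in the proof of Theorem 1; this is only a sketch, and the name `consensus_rate` is illustrative.

```python
import numpy as np

def consensus_rate(A, k):
    """Evaluate C_k and q_k of Appendix A for a given adjacency matrix A."""
    d = A.sum(axis=1)
    L = np.diag(d) - A                          # graph Laplacian
    Dk = np.diag(k / (1.0 + k * d))             # D_k
    Lk = np.sqrt(Dk) @ L @ np.sqrt(Dk)          # symmetric L_k
    lam = np.sort(np.linalg.eigvalsh(Lk))       # eigenvalues, lam[0] = 0
    Ck = np.sqrt((1.0 + k * d.max()) / (1.0 + k * d.min()))
    qk = max(abs(1.0 - lam[1]), abs(1.0 - lam[-1]))
    return Ck, qk

A = np.ones((3, 3)) - np.eye(3)                 # the complete graph of Section IV
print(consensus_rate(A, 10.0))                  # C_k = 1 and q_k is about 0.43 < 1
```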